RTILA Ollama API Bridge

Description
This template shows you how to use an RTILA flow to communicate with a local LLM instance through the Ollama API. On each query you can specify which local LLM model to use, along with your prompt and other parameters.

Pay attention to the endpoint used, which depends on the LLM model and the type of interaction you want; it is usually one of the following (a request sketch follows the list):
Generate endpoint: http://localhost:11434/api/generate
Chat completion endpoint: http://localhost:11434/api/chat
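As an illustration of the kind of request the flow sends under the hood, here is a minimal Python sketch of calls to both endpoints. It assumes the requests library is installed and that a model named llama3 has been pulled locally; substitute any model you have available.

import requests  # pip install requests

# Single-turn generation via the generate endpoint.
# stream=False returns one complete JSON object instead of a stream of chunks.
payload = {
    "model": "llama3",  # assumption: use any locally pulled model
    "prompt": "Why is the sky blue?",
    "stream": False,
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])

# Multi-turn conversation via the chat completion endpoint,
# which takes a list of messages instead of a single prompt.
chat = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "stream": False,
}
resp = requests.post("http://localhost:11434/api/chat", json=chat, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])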

The response and other information are received back and saved into variables and properties that you can use in the next steps of your flow.
This template scrapes the following data properties (the sketch after the list shows how they map onto the raw Ollama response):
Status Code
Model
Created at
Response
Done
Done Reason
Context
Total Duration
Load Duration
Prompt Evaluation Count
Prompt Evaluation Duration
Evaluation Count
Evaluation Duration
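These properties correspond to the HTTP status code plus the fields of a non-streaming Ollama response. As a rough sketch, assuming the same hypothetical llama3 model as above, they could be extracted in Python like this (all duration fields are reported in nanoseconds):

import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Hello", "stream": False},
    timeout=120,
)
data = resp.json()

properties = {
    "Status Code": resp.status_code,
    "Model": data["model"],
    "Created at": data["created_at"],
    "Response": data["response"],
    "Done": data["done"],
    "Done Reason": data.get("done_reason"),                    # e.g. "stop"
    "Context": data.get("context"),                            # token ids usable in follow-up calls
    "Total Duration": data.get("total_duration"),              # nanoseconds
    "Load Duration": data.get("load_duration"),                # nanoseconds
    "Prompt Evaluation Count": data.get("prompt_eval_count"),
    "Prompt Evaluation Duration": data.get("prompt_eval_duration"),
    "Evaluation Count": data.get("eval_count"),
    "Evaluation Duration": data.get("eval_duration"),
}
for name, value in properties.items():
    print(f"{name}: {value}")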
This template uses the following commands & functions:
Watch Video Demo: coming soon.
Note:
If you find this template useful, please do us a favor and share it with your community. Thanks!