
LLM plugin providing access to local Ollama models

Project description

llm-ollama


LLM plugin providing access to models running on a local Ollama server.

Installation

Install this plugin in the same environment as LLM.

llm install llm-ollama
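
To confirm that LLM picked up the plugin, you can ask it to list its installed plugins (llm plugins is part of the LLM CLI; the exact output format depends on your LLM version):

llm plugins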

Usage

First, ensure that your Ollama server is running and that you have pulled some models. You can use ollama list to check what is locally available.
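For example, assuming you want the llama2 model used in the examples below (the model name is only a placeholder; pull whichever models you need):

# pull a model into the local Ollama store
ollama pull llama2

# confirm which models are available locally
ollama list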

The plugin will query the Ollama server for the list of models. You can use llm ollama list-models to see the list; it should match the output of ollama list. All these models will be automatically registered with LLM and made available for prompting and chatting.
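
As a quick sanity check you can compare the two listings; both commands below talk to the same local server, and the grep filter is only a convenience to narrow llm's output down to Ollama-provided models:

# models as reported by the plugin
llm ollama list-models

# the same models, as registered with LLM
llm models | grep -i ollama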

Assuming you have llama2:latest available, you can run a prompt using:

llm -m llama2:latest 'How much is 2+2?'

The plugin automatically creates a short alias for models that have :latest in the name, so the previous command is equivalent to running:

llm -m llama2 'How much is 2+2?'
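
LLM's usual conversation flags apply here as well; assuming the plugin's chat support extends to the -c/--continue flag, you can follow up on the previous answer in the same conversation:

llm -m llama2 'How much is 2+2?'

# continue the most recent conversation with a follow-up question
llm -c 'And how much is that times ten?'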

To start an interactive chat session:

llm chat -m llama2
Chatting with llama2:latest
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
>

Model aliases

The same Ollama model may be referred to by several names with different tags. For example, in the following list, there is a single unique model with three different names:

ollama list
NAME                    ID              SIZE    MODIFIED
stable-code:3b          aa5ab8afb862    1.6 GB  9 hours ago
stable-code:code        aa5ab8afb862    1.6 GB  9 seconds ago
stable-code:latest      aa5ab8afb862    1.6 GB  14 seconds ago

In such cases, the plugin will register a single model and create additional aliases. Continuing the previous example, this is what llm models will show:

llm models
...

Ollama: stable-code:3b (aliases: stable-code:code, stable-code:latest, stable-code)
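
If the automatically created aliases are not enough, LLM itself lets you define additional ones. This is standard LLM functionality rather than something specific to this plugin, and the alias name below is only an example:

# register a shorter custom alias for an Ollama-backed model
llm aliases set sc stable-code:3b

# the alias can now be used anywhere a model name is accepted
llm -m sc 'Write a hello world in Python'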

Model options

All models accept the following options, using -o name value syntax:

  • -o temperature 0.8: The temperature of the model. Increasing the temperature makes the model answer more creatively (see the example below).
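
For example, to make the model answer more conservatively or more creatively, pass the option on the command line; the prompts and the 0.1/1.2 values are only illustrative:

# low temperature: more deterministic answers
llm -m llama2 -o temperature 0.1 'How much is 2+2?'

# high temperature: more varied, creative answers
llm -m llama2 -o temperature 1.2 'Invent a name for a coffee shop'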

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-ollama
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
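
pytest's standard selection flags work as usual if you only want to run part of the suite; the keyword below is just an example and may not match any test names in this repository:

# run only tests whose names contain "alias", with verbose output
pytest -v -k alias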

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm-ollama-0.2.0.tar.gz (8.4 kB)


Built Distribution

llm_ollama-0.2.0-py3-none-any.whl (8.3 kB)


File details

Details for the file llm-ollama-0.2.0.tar.gz.

File metadata

  • Download URL: llm-ollama-0.2.0.tar.gz
  • Upload date:
  • Size: 8.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for llm-ollama-0.2.0.tar.gz
  • SHA256: 4ab0895235699a53d76ec6aa1fd5c88517ba81bdb6e2191e95f38f2bff91e72c
  • MD5: 10b1a20cf154d1b98e4dbb434861c1e9
  • BLAKE2b-256: 116d4f934f884ea2f4183c4330a737130b2ab76cc50e4477265aeed6ff92dfcf

See more details on using hashes here.
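
If you download the sdist manually, you can check it against the SHA256 value above before installing; sha256sum is the GNU coreutils tool (use shasum -a 256 on macOS):

# compute the checksum of the downloaded archive and compare it with the value listed above
sha256sum llm-ollama-0.2.0.tar.gz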

File details

Details for the file llm_ollama-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: llm_ollama-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 8.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for llm_ollama-0.2.0-py3-none-any.whl
  • SHA256: 392567c862004df2a5539928d99632174252f866117453004cccac41374fe1fa
  • MD5: c84f84898407bb84a77ebdb0c633a327
  • BLAKE2b-256: fb000693d7ac70ced7e54315a708c6c5c8080ac895e89de72e87e29d07844493

See more details on using hashes here.
