
Embedding models using locally pulled Ollama models

Project description

llm-embed-ollama

PyPI Changelog Tests License

LLM plugin providing access to embedding models running on local Ollama server.

Installation

Install this plugin in the same environment as LLM.

llm install llm-embed-ollama

Background

Ollama provides several embedding models. This plugin makes those models available to LLM through a local Ollama server's embeddings API.

To utilize these models, you need to have an instance of the Ollama server running.

See also Embeddings: What they are and why they matter for background on embeddings and an explanation of the LLM embeddings tool.

See also Ollama Embeddings Models Blog
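In other words, the plugin forwards each string to the local Ollama server and gets a vector back. As a rough sketch of what that HTTP call looks like, assuming Ollama's default port and its `/api/embeddings` endpoint (the `build_payload` and `embed` names here are illustrative, not part of this plugin):

```python
import json
import urllib.request

# Assumption: Ollama is listening on its default local port.
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/embeddings endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt}).encode("utf-8")

def embed(model: str, prompt: str) -> list[float]:
    """POST the prompt to a running Ollama server and return the vector."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]
```

Against a running server, `embed("all-minilm", "Hello world")` would return the embedding vector directly.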

Usage

This plugin adds support for the following embedding models available in Ollama:

  • all-minilm
  • nomic-embed-text
  • mxbai-embed-large
  • bge-large: Embedding model from BAAI mapping texts to vectors.
  • bge-m3: BGE-M3 is a new model from BAAI distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.

Each model needs to be downloaded with `ollama pull` the first time you use it.

See the LLM documentation for everything you can do.

To get started embedding a single string, make sure you have pulled the appropriate Ollama model, then run the following:

ollama pull all-minilm
llm embed -m all-minilm -c 'Hello world'

This will output a JSON array of 384 floating point numbers to your terminal.
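Once you have vectors like these, the standard way to compare two of them is cosine similarity. A minimal, dependency-free sketch (plain Python, not part of the plugin itself):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Vectors pointing in the same direction score 1.0; orthogonal vectors score 0.0.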

To calculate and store embeddings for every README in the current directory (try this somewhere with a node_modules directory to get lots of READMEs) run this:

llm embed-multi ollama-readmes \
    -m all-minilm \
    --files . '**/README.md' --store
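The `--files . '**/README.md'` selector gathers every README under the current directory. A rough Python equivalent of that glob, for illustration only (this is not what llm itself runs):

```python
from pathlib import Path

def collect_readmes(root: str) -> list[str]:
    """Recursively find README.md files, mirroring the '**/README.md' glob."""
    return sorted(str(p) for p in Path(root).glob("**/README.md"))
```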

Then you can run searches against them like this:

llm similar ollama-readmes -c 'utility functions'

Add | jq to pipe it through jq for pretty-printed output, or | jq .id to just see the matching filenames.
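`llm similar` works by embedding the query string and ranking the stored vectors against it. Conceptually it does something like the following sketch (illustrative only, not the tool's actual implementation):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def most_similar(query_vec, stored, top_k=3):
    """stored: iterable of (id, vector) pairs. Returns ids ranked by similarity."""
    ranked = sorted(stored, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [item_id for item_id, _ in ranked[:top_k]]
```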

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-embed-ollama
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm_embed_ollama-0.1.2.tar.gz (7.5 kB)

Uploaded Source

Built Distribution

llm_embed_ollama-0.1.2-py3-none-any.whl (7.5 kB)

Uploaded Python 3

File details

Details for the file llm_embed_ollama-0.1.2.tar.gz.

File metadata

  • Download URL: llm_embed_ollama-0.1.2.tar.gz
  • Upload date:
  • Size: 7.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.0 CPython/3.12.5

File hashes

Hashes for llm_embed_ollama-0.1.2.tar.gz
Algorithm Hash digest
SHA256 43004830b7d50578a43354bda0ad1497d100494dab103c504e0568e7c7bbeb94
MD5 592e6559b32b8638c4485d2fc463a317
BLAKE2b-256 c12e863187e582a8931d490952215ec35ac4b19d044f2a74f252af0a3541bd90

See more details on using hashes here.

File details

Details for the file llm_embed_ollama-0.1.2-py3-none-any.whl.

File metadata

File hashes

Hashes for llm_embed_ollama-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 1607ec25e68c455ab7f148f9b2b2f858766acf206f58ec3bbfbcf819a5cad15b
MD5 dae6ad3e19332f3ac6127ac1c2716cbd
BLAKE2b-256 62576478bfd92fc1da238323ed562145e4315fb2c04f3aa375159e98bb05dc4c

