LlamaIndex Llms Integration: Ollama

Installation

To install the required package, run:

pip install llama-index-llms-ollama

Setup

  1. Follow the Ollama README to set up and run a local Ollama instance.
  2. When the Ollama app is running on your local machine, it will serve all of your local models on localhost:11434.
  3. Select your model when creating the Ollama instance by specifying model="<model family>:<version>".
  4. You can increase the default timeout (30 seconds) by setting Ollama(..., request_timeout=300.0).
  5. If you set llm = Ollama(..., model="<model family>") without a version, it will automatically look for the latest version, as in the sketch below.
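For example, a minimal sketch (the model names here are placeholders; use whatever you have pulled locally with ollama pull):

from llama_index.llms.ollama import Ollama

# Pin an exact "<model family>:<version>" tag for reproducibility
llm = Ollama(model="llama3.1:8b", request_timeout=300.0)

# Or give only the family; it resolves to the latest local version
llm = Ollama(model="llama3.1", request_timeout=300.0)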

Usage

Initialize Ollama

from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3.1:latest", request_timeout=120.0)

Generate Completions

To generate a text completion for a prompt, use the complete method:

resp = llm.complete("Who is Paul Graham?")
print(resp)
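The returned object is a CompletionResponse; if you need the raw string rather than the printable representation, it is available as resp.text:

print(resp.text)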

Chat Responses

To send a chat message and receive a response, create a list of ChatMessage instances and use the chat method:

from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="What is your name?"),
]
resp = llm.chat(messages)
print(resp)
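The chat method returns a ChatResponse; the assistant's reply text lives on resp.message.content:

print(resp.message.content)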

Streaming Responses

Stream Complete

To stream responses for a prompt, use the stream_complete method:

response = llm.stream_complete("Who is Paul Graham?")
for r in response:
    print(r.delta, end="")

Stream Chat

To stream chat responses, use the stream_chat method:

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="What is your name?"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
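Both streaming calls also have async counterparts on the LLM interface, astream_complete and astream_chat. A minimal sketch, to be run inside an async context:

resp = await llm.astream_chat(messages)
async for r in resp:
    print(r.delta, end="")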

JSON Mode

Ollama supports a JSON mode to ensure all responses are valid JSON, which is useful for tools that need to parse structured outputs:

llm = Ollama(model="llama3.1:latest", request_timeout=120.0, json_mode=True)
response = llm.complete(
    "Who is Paul Graham? Output as a structured JSON object."
)
print(str(response))
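Because the output is guaranteed to be valid JSON, it can go straight into a parser. A minimal sketch using the standard library (the keys depend on what the model chooses to emit):

import json

data = json.loads(response.text)  # plain dict parsed from the completion
print(data)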

Structured Outputs

You can attach a Pydantic class to the LLM to ensure structured outputs:

from llama_index.core.bridge.pydantic import BaseModel


class Song(BaseModel):
    """A song with name and artist."""

    name: str
    artist: str


llm = Ollama(model="llama3.1:latest", request_timeout=120.0)
sllm = llm.as_structured_llm(Song)

response = sllm.chat([ChatMessage(role="user", content="Name a random song!")])
print(
    response.message.content
)  # e.g., {"name": "Yesterday", "artist": "The Beatles"}
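The message content is a JSON string; in recent LlamaIndex versions the parsed Pydantic object should also be attached to the response as response.raw (treat this as version-dependent behavior):

song = response.raw  # assumed to be a Song instance
print(song.name, "-", song.artist)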

Asynchronous Chat

You can also chat asynchronously. This example reuses the structured LLM (sllm) from the previous section:

response = await sllm.achat(
    [ChatMessage(role="user", content="Name a random song!")]
)
print(response.message.content)
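Note that await is only valid inside a coroutine (or an async-aware REPL such as Jupyter). In a plain script, wrap the call with asyncio.run:

import asyncio


async def main():
    response = await sllm.achat(
        [ChatMessage(role="user", content="Name a random song!")]
    )
    print(response.message.content)


asyncio.run(main())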

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/ollama/
