
LlamaIndex Llms Integration: Ollama

Installation

To install the required package, run:

pip install llama-index-llms-ollama

Setup

  1. Follow the Ollama README to set up and run a local Ollama instance.
  2. When the Ollama app is running on your local machine, it serves all of your local models on localhost:11434 (you can verify this with the quick check below this list).
  3. Select your model when creating the Ollama instance by specifying model="<model family>:<version>".
  4. You can increase the default timeout (30 seconds) by setting Ollama(..., request_timeout=300.0).
  5. If you set llm = Ollama(..., model="<model family>") without a version, it will automatically look for the latest version.
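
Before initializing the client, it can help to confirm the server is reachable. This is a minimal sketch using only the standard library; it assumes the default port and relies on Ollama's root endpoint returning a short status message:

import urllib.request

# Quick reachability check against the local Ollama server.
with urllib.request.urlopen("http://localhost:11434") as resp:
    print(resp.read().decode())  # typically prints "Ollama is running"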

Usage

Initialize Ollama

from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3.1:latest", request_timeout=120.0)

Generate Completions

To generate a text completion for a prompt, use the complete method:

resp = llm.complete("Who is Paul Graham?")
print(resp)
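
Note that complete returns a CompletionResponse object; printing it renders the generated text. If you need the raw string (for example, to post-process it), the text attribute should expose it directly:

# The generated string itself, without the response wrapper.
print(resp.text)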

Chat Responses

To send a chat message and receive a response, create a list of ChatMessage instances and use the chat method:

from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="What is your name?"),
]
resp = llm.chat(messages)
print(resp)
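
The returned ChatResponse wraps a ChatMessage; the assistant's reply text is available via the nested message:

# The assistant's reply as a plain string.
print(resp.message.content)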

Streaming Responses

Stream Complete

To stream responses for a prompt, use the stream_complete method:

response = llm.stream_complete("Who is Paul Graham?")
for r in response:
    print(r.delta, end="")

Stream Chat

To stream chat responses, use the stream_chat method:

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="What is your name?"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")

JSON Mode

Ollama supports a JSON mode to ensure all responses are valid JSON, which is useful for tools that need to parse structured outputs:

llm = Ollama(model="llama3.1:latest", request_timeout=120.0, json_mode=True)
response = llm.complete(
    "Who is Paul Graham? Output as a structured JSON object."
)
print(str(response))
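
Because the response is guaranteed to be valid JSON, it can be parsed directly with the standard library (the exact keys depend on what the model chooses to emit):

import json

# Parse the model's JSON output into a Python dict.
data = json.loads(str(response))
print(list(data.keys()))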

Structured Outputs

You can attach a Pydantic class to the LLM to ensure structured outputs:

from llama_index.core.bridge.pydantic import BaseModel


class Song(BaseModel):
    """A song with name and artist."""

    name: str
    artist: str


llm = Ollama(model="llama3.1:latest", request_timeout=120.0)
sllm = llm.as_structured_llm(Song)

response = sllm.chat([ChatMessage(role="user", content="Name a random song!")])
print(
    response.message.content
)  # e.g., {"name": "Yesterday", "artist": "The Beatles"}
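
The message content is a JSON serialization of the attached schema, so it can be rehydrated into a Song instance. This sketch assumes the Pydantic v2 API re-exported by the bridge:

# Rebuild the validated Pydantic object from the response's JSON string.
song = Song.model_validate_json(response.message.content)
print(f"{song.name} by {song.artist}")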

Asynchronous Chat

You can also use asynchronous chat:

response = await sllm.achat(
    [ChatMessage(role="user", content="Name a random song!")]
)
print(response.message.content)
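
The bare await above assumes an already-running event loop (for example, a notebook). In a plain Python script, wrap the call in asyncio.run, roughly like this:

import asyncio

from llama_index.core.llms import ChatMessage


async def main():
    # achat is the asynchronous counterpart of chat.
    response = await sllm.achat(
        [ChatMessage(role="user", content="Name a random song!")]
    )
    print(response.message.content)


asyncio.run(main())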

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/ollama/
