
LlamaIndex Llms Integration: Ollama

Installation

To install the required package, run:

pip install llama-index-llms-ollama

Setup

  1. Follow the Ollama README to set up and run a local Ollama instance.
  2. When the Ollama app is running on your local machine, it will serve all of your local models on localhost:11434.
  3. Select your model when creating the Ollama instance by specifying model="<model family>:<version>" (see the sketch after this list).
  4. You can increase the default timeout (30 seconds) by setting Ollama(..., request_timeout=300.0).
  5. If you set llm = Ollama(..., model="<model family>") without a version, it will automatically look for the latest version.
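
For example, a minimal sketch combining steps 3-5 (llama3.1 is only a stand-in; use any model you have already pulled with ollama pull):

from llama_index.llms.ollama import Ollama

# "llama3.1:8b" pins an exact version; plain "llama3.1" resolves to the latest tag
llm = Ollama(model="llama3.1:8b", request_timeout=300.0)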

Usage

Initialize Ollama

from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3.1:latest", request_timeout=120.0)

Generate Completions

To generate a text completion for a prompt, use the complete method:

resp = llm.complete("Who is Paul Graham?")
print(resp)
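
complete returns a CompletionResponse; printing it shows the generated text, while the text attribute gives the raw string for programmatic use:

print(resp.text)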

Chat Responses

To send a chat message and receive a response, create a list of ChatMessage instances and use the chat method:

from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="What is your name?"),
]
resp = llm.chat(messages)
print(resp)
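
chat returns a ChatResponse whose printed form includes the role prefix; to get only the reply text, read message.content:

print(resp.message.content)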

Streaming Responses

Stream Complete

To stream responses for a prompt, use the stream_complete method:

response = llm.stream_complete("Who is Paul Graham?")
for r in response:
    print(r.delta, end="")

Stream Chat

To stream chat responses, use the stream_chat method:

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="What is your name?"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
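
Streaming also has async counterparts, astream_complete and astream_chat, which are awaited once and then iterated with async for; a minimal sketch using astream_complete:

import asyncio


async def main():
    # astream_complete returns an async generator of incremental responses
    gen = await llm.astream_complete("Who is Paul Graham?")
    async for r in gen:
        print(r.delta, end="")


asyncio.run(main())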

JSON Mode

Ollama supports a JSON mode to ensure all responses are valid JSON, which is useful for tools that need to parse structured outputs:

llm = Ollama(model="llama3.1:latest", request_timeout=120.0, json_mode=True)
response = llm.complete(
    "Who is Paul Graham? Output as a structured JSON object."
)
print(str(response))
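
Because JSON mode guarantees the completion text is valid JSON, you can feed it straight into the standard library parser; a minimal sketch (the exact keys in the object depend on the model):

import json

data = json.loads(response.text)  # response.text holds the raw completion string
print(data)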

Structured Outputs

You can attach a Pydantic class to the LLM to ensure structured outputs:

from llama_index.core.bridge.pydantic import BaseModel


class Song(BaseModel):
    """A song with name and artist."""

    name: str
    artist: str


llm = Ollama(model="llama3.1:latest", request_timeout=120.0)
sllm = llm.as_structured_llm(Song)

response = sllm.chat([ChatMessage(role="user", content="Name a random song!")])
print(
    response.message.content
)  # e.g., {"name": "Yesterday", "artist": "The Beatles"}
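
Besides the JSON string in response.message.content, recent LlamaIndex versions also expose the parsed Pydantic object on response.raw; a short sketch, assuming that attribute is present in your installed version:

song = response.raw  # parsed Song instance (assumption: .raw carries the object)
print(f"{song.name} by {song.artist}")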

Asynchronous Chat

You can also send chat messages asynchronously with the achat method:

response = await sllm.achat(
    [ChatMessage(role="user", content="Name a random song!")]
)
print(response.message.content)
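
The bare await above works in environments with a running event loop, such as a Jupyter notebook; in a plain script, wrap the call in a coroutine and run it with asyncio:

import asyncio


async def main():
    response = await sllm.achat(
        [ChatMessage(role="user", content="Name a random song!")]
    )
    print(response.message.content)


asyncio.run(main())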

LLM Implementation example

https://docs.llamaindex.ai/en/stable/examples/llm/ollama/
