# LlamaIndex Llms Integration: Ollama
## Installation
To install the required package, run:

```bash
pip install llama-index-llms-ollama
```
## Setup
- Follow the Ollama README to set up and run a local Ollama instance.
- When the Ollama app is running on your local machine, it serves all of your local models on `localhost:11434`.
- Select your model when creating the `Ollama` instance by specifying `model="<model family>:<version>"`.
- You can increase the default timeout (30 seconds) by setting `Ollama(..., request_timeout=300.0)`; see the sketch after this list.
- If you set `llm = Ollama(..., model="<model family>")` without a version, it will automatically look for the latest version.
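For example, a minimal sketch combining the options above: `base_url` and `request_timeout` are constructor arguments of the `Ollama` class, with `base_url` defaulting to the local address shown above.

```python
from llama_index.llms.ollama import Ollama

# Point the client at a specific Ollama server and raise the timeout.
# base_url defaults to http://localhost:11434, so it only needs to be set
# when the Ollama server runs somewhere else.
llm = Ollama(
    model="llama3.1:latest",
    base_url="http://localhost:11434",
    request_timeout=300.0,
)
```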
## Usage
### Initialize Ollama
```python
from llama_index.llms.ollama import Ollama

llm = Ollama(model="llama3.1:latest", request_timeout=120.0)
```
### Generate Completions
To generate a text completion for a prompt, use the `complete` method:

```python
resp = llm.complete("Who is Paul Graham?")
print(resp)
```
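`complete` returns a `CompletionResponse`; printing it shows the generated text, which is also available directly on its `text` attribute:

```python
resp = llm.complete("Who is Paul Graham?")
print(resp.text)  # the completion as a plain string
```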
### Chat Responses
To send a chat message and receive a response, create a list of `ChatMessage` instances and use the `chat` method:

```python
from llama_index.core.llms import ChatMessage

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="What is your name?"),
]
resp = llm.chat(messages)
print(resp)
```
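`chat` returns a `ChatResponse` whose `message` field holds the assistant's `ChatMessage`, so the reply can be read off directly:

```python
# The assistant's reply is a ChatMessage on the response object.
print(resp.message.role)     # the role, e.g. assistant
print(resp.message.content)  # the reply text
```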
## Streaming Responses
### Stream Complete
To stream responses for a prompt, use the `stream_complete` method:

```python
response = llm.stream_complete("Who is Paul Graham?")
for r in response:
    print(r.delta, end="")
```
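Each streamed chunk is a `CompletionResponse`: `delta` holds the newly generated text, while `text` holds everything accumulated so far, so you can keep the full output without concatenating deltas yourself:

```python
final_text = ""
for r in llm.stream_complete("Who is Paul Graham?"):
    final_text = r.text  # the accumulated completion so far
print(final_text)
```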
### Stream Chat
To stream chat responses, use the `stream_chat` method:

```python
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality."
    ),
    ChatMessage(role="user", content="What is your name?"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```
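The chat stream behaves the same way: each `ChatResponse` chunk carries the new text in `delta` and the accumulated reply in `message.content`:

```python
last = None
for r in llm.stream_chat(messages):
    print(r.delta, end="")
    last = r
print()
print(last.message.content)  # the full reply once the stream finishes
```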
## JSON Mode
Ollama supports a JSON mode to ensure all responses are valid JSON, which is useful for tools that need to parse structured outputs:
```python
llm = Ollama(model="llama3.1:latest", request_timeout=120.0, json_mode=True)

response = llm.complete(
    "Who is Paul Graham? Output as a structured JSON object."
)
print(str(response))
```
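Since JSON mode guarantees the body parses, the result can go straight into the standard-library parser (a small sketch; the keys depend on what the model chooses to emit):

```python
import json

data = json.loads(response.text)  # guaranteed to parse in JSON mode
print(data)
```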
## Structured Outputs
You can attach a Pydantic class to the LLM to ensure structured outputs:
```python
from llama_index.core.bridge.pydantic import BaseModel


class Song(BaseModel):
    """A song with name and artist."""

    name: str
    artist: str


llm = Ollama(model="llama3.1:latest", request_timeout=120.0)
sllm = llm.as_structured_llm(Song)

response = sllm.chat([ChatMessage(role="user", content="Name a random song!")])
print(
    response.message.content
)  # e.g., {"name": "Yesterday", "artist": "The Beatles"}
```
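The message content is the model's output serialized as JSON. At the time of writing, the structured LLM also attaches the parsed Pydantic instance to the response's `raw` field, so typed access is possible without re-parsing (a sketch, assuming that behavior):

```python
song = response.raw  # the parsed Song instance (assumed populated by the structured LLM)
print(f"{song.name} by {song.artist}")
```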
## Asynchronous Chat
You can also use asynchronous chat:
```python
response = await sllm.achat(
    [ChatMessage(role="user", content="Name a random song!")]
)
print(response.message.content)
```
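Note that `await` only works inside a coroutine (or an environment with a running event loop, such as a notebook); in a plain script, wrap the call with `asyncio.run`. The other methods have async counterparts as well (`acomplete`, `astream_chat`, `astream_complete`):

```python
import asyncio

from llama_index.core.llms import ChatMessage


async def main() -> None:
    response = await sllm.achat(
        [ChatMessage(role="user", content="Name a random song!")]
    )
    print(response.message.content)


asyncio.run(main())
```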
## LLM Implementation Example

A complete, runnable example notebook for this integration is available in the LlamaIndex documentation.