
The official Python client for Ollama.


Ollama Python Library

The Ollama Python library provides the easiest way to integrate Python 3.8+ projects with Ollama.

Prerequisites

  • Ollama should be installed and running
  • Pull a model to use with the library: ollama pull <model> e.g. ollama pull llama3.2
  • See Ollama.com for more information on available models.

Install

pip install ollama

Usage

from ollama import chat
from ollama import ChatResponse

response: ChatResponse = chat(model='llama3.2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])
# or access fields directly from the response object
print(response.message.content)

See _types.py for more information on the response types.

Streaming responses

Response streaming can be enabled by setting stream=True.

from ollama import chat

stream = chat(
    model='llama3.2',
    messages=[{'role': 'user', 'content': 'Why is the sky blue?'}],
    stream=True,
)

for chunk in stream:
  print(chunk['message']['content'], end='', flush=True)
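Each streamed chunk carries only a fragment of the reply, so callers often accumulate the fragments into the full message as they print. A minimal sketch of that pattern, using plain dicts shaped like the chunks above (the stubbed chunks here are illustrative, not real model output):

```python
def accumulate_content(chunks):
  """Join the per-chunk content fragments into one string."""
  return ''.join(chunk['message']['content'] for chunk in chunks)

# Stub chunks in the same shape the streaming iterator yields.
chunks = [
  {'message': {'role': 'assistant', 'content': 'Because of '}},
  {'message': {'role': 'assistant', 'content': 'Rayleigh scattering.'}},
]
print(accumulate_content(chunks))  # → Because of Rayleigh scattering.
```

In real code the same accumulation can happen inside the `for chunk in stream` loop by appending each fragment to a list and joining at the end.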

Custom client

A custom client can be created by instantiating Client or AsyncClient from ollama.

All extra keyword arguments are passed into the httpx.Client.

from ollama import Client
client = Client(
  host='http://localhost:11434',
  headers={'x-some-header': 'some-value'}
)
response = client.chat(model='llama3.2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])

Async client

The AsyncClient class is used to make asynchronous requests. It can be configured with the same fields as the Client class.

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama3.2', messages=[message])
  print(response.message.content)

asyncio.run(chat())

Setting stream=True causes the functions to return a Python asynchronous generator:

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama3.2', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())

API

The Ollama Python library's API is designed around the Ollama REST API.

Chat

ollama.chat(model='llama3.2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])

Generate

ollama.generate(model='llama3.2', prompt='Why is the sky blue?')

List

ollama.list()

Show

ollama.show('llama3.2')

Create

ollama.create(model='example', from_='llama3.2', system="You are Mario from Super Mario Bros.")

Copy

ollama.copy('llama3.2', 'user/llama3.2')

Delete

ollama.delete('llama3.2')

Pull

ollama.pull('llama3.2')

Push

ollama.push('user/llama3.2')

Embed

ollama.embed(model='llama3.2', input='The sky is blue because of Rayleigh scattering')

Embed (batch)

ollama.embed(model='llama3.2', input=['The sky is blue because of Rayleigh scattering', 'Grass is green because of chlorophyll'])
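A common next step after batch embedding is comparing the returned vectors, for example by cosine similarity. A self-contained sketch using only the standard library (the short vectors below are made up for illustration; real embedding vectors are much longer):

```python
import math

def cosine_similarity(a, b):
  """Cosine similarity between two equal-length vectors."""
  dot = sum(x * y for x, y in zip(a, b))
  norm_a = math.sqrt(sum(x * x for x in a))
  norm_b = math.sqrt(sum(x * x for x in b))
  return dot / (norm_a * norm_b)

# Toy vectors standing in for entries of the embed response's embeddings list.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
```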

Ps

ollama.ps()

Errors

Errors are raised if requests return an error status or if an error is detected while streaming.

model = 'does-not-yet-exist'

try:
  ollama.chat(model)
except ollama.ResponseError as e:
  print('Error:', e.error)
  if e.status_code == 404:
    ollama.pull(model)
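The pattern above, pulling the model on a 404 and retrying, can be factored into a small helper. A hedged sketch with the client calls injected as parameters so the control flow stands alone (`chat_with_autopull`, `chat_fn`, and `pull_fn` are illustrative names, not part of the library):

```python
def chat_with_autopull(chat_fn, pull_fn, model, error_type):
  """Call chat_fn(model); on a 404-style error, pull the model once and retry."""
  try:
    return chat_fn(model)
  except error_type as e:
    if getattr(e, 'status_code', None) != 404:
      raise  # not a missing-model error; propagate
    pull_fn(model)
    return chat_fn(model)
```

With the real library this could be invoked as `chat_with_autopull(ollama.chat, ollama.pull, model, ollama.ResponseError)`, though note that pulling a large model inline can take a long time.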



Download files

Download the file for your platform.

Source Distribution

ollama-0.4.8.tar.gz (13.0 kB)

Built Distribution

ollama-0.4.8-py3-none-any.whl (13.3 kB)

File details

Details for the file ollama-0.4.8.tar.gz.

File metadata

  • Download URL: ollama-0.4.8.tar.gz
  • Upload date:
  • Size: 13.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for ollama-0.4.8.tar.gz

  • SHA256: 1121439d49b96fa8339842965d0616eba5deb9f8c790786cdf4c0b3df4833802
  • MD5: 50c575803b5417179be93899a60180a1
  • BLAKE2b-256: e264709dc99030f8f46ec552f0a7da73bbdcc2da58666abfec4742ccdb2e800e


Provenance

The following attestation bundles were made for ollama-0.4.8.tar.gz:

Publisher: publish.yaml on ollama/ollama-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file ollama-0.4.8-py3-none-any.whl.

File metadata

  • Download URL: ollama-0.4.8-py3-none-any.whl
  • Upload date:
  • Size: 13.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

File hashes

Hashes for ollama-0.4.8-py3-none-any.whl

  • SHA256: 04312af2c5e72449aaebac4a2776f52ef010877c554103419d3f36066fe8af4c
  • MD5: 59205d24defe58d861caa2e0b84f1ce0
  • BLAKE2b-256: 333f164de150e983b3a16e8bf3d4355625e51a357e7b3b1deebe9cc1f7cb9af8


Provenance

The following attestation bundles were made for ollama-0.4.8-py3-none-any.whl:

Publisher: publish.yaml on ollama/ollama-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
