The official Python client for Ollama.

Ollama Python Library

The Ollama Python library provides the easiest way to integrate your Python 3 project with Ollama.

Getting Started

Requires Python 3.8 or higher.

pip install ollama

A global default client is provided for convenience and can be used in the same way as the synchronous client.

import ollama
response = ollama.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
print(response['message']['content'])

The global client also supports streaming:

import ollama
message = {'role': 'user', 'content': 'Why is the sky blue?'}
for part in ollama.chat(model='llama2', messages=[message], stream=True):
  print(part['message']['content'], end='', flush=True)

Using the Synchronous Client

from ollama import Client
message = {'role': 'user', 'content': 'Why is the sky blue?'}
response = Client().chat(model='llama2', messages=[message])
print(response['message']['content'])

Response streaming can be enabled by setting stream=True. This modifies the function to return a Python generator where each part is an object in the stream.

from ollama import Client
message = {'role': 'user', 'content': 'Why is the sky blue?'}
for part in Client().chat(model='llama2', messages=[message], stream=True):
  print(part['message']['content'], end='', flush=True)
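The parts yielded with stream=True can also be accumulated into the complete reply. A minimal sketch, using a hypothetical collect_stream helper and a hand-built list standing in for the generator (the part structure matches the example above):

```python
# Illustrative helper: join the streamed parts into one string.
# Each part has the shape {'message': {'content': ...}}, as in the
# streaming example above.
def collect_stream(parts):
  """Concatenate the 'content' of every streamed part."""
  return ''.join(part['message']['content'] for part in parts)

# Stand-in for the generator returned by chat(..., stream=True):
fake_stream = [
  {'message': {'content': 'The sky is blue '}},
  {'message': {'content': 'because of Rayleigh scattering.'}},
]
print(collect_stream(fake_stream))
# → The sky is blue because of Rayleigh scattering.
```

In real use, pass the generator returned by chat(..., stream=True) directly to the helper instead of the hand-built list.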

Using the Asynchronous Client

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama2', messages=[message])
  print(response['message']['content'])

asyncio.run(chat())

Similar to the synchronous client, setting stream=True modifies the function to return a Python asynchronous generator.

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama2', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())
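The same accumulation pattern works with the asynchronous generator. A minimal sketch, using a hypothetical collect_async helper and a hand-written async generator standing in for the one returned by AsyncClient().chat(..., stream=True):

```python
import asyncio

async def collect_async(parts):
  """Concatenate the 'content' of every part from an async generator."""
  chunks = []
  async for part in parts:
    chunks.append(part['message']['content'])
  return ''.join(chunks)

async def fake_stream():
  # Stand-in for the async generator returned by
  # AsyncClient().chat(..., stream=True).
  yield {'message': {'content': 'Hello, '}}
  yield {'message': {'content': 'world.'}}

print(asyncio.run(collect_async(fake_stream())))
# → Hello, world.
```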

Handling Errors

Errors are raised if requests return an error status or if an error is detected while streaming.

model = 'does-not-yet-exist'

try:
  ollama.chat(model)
except ollama.ResponseError as e:
  print('Error:', e.content)
  if e.status_code == 404:
    ollama.pull(model)
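The pull-on-404 pattern above can be wrapped into a retry helper. A sketch with a hypothetical chat_with_autopull function; the chat and pull callables are injected, and a minimal stand-in for ollama.ResponseError (mirroring the .content and .status_code attributes used above) lets the sketch run without a server:

```python
class ResponseError(Exception):
  # Minimal stand-in for ollama.ResponseError, for offline illustration.
  def __init__(self, content, status_code):
    super().__init__(content)
    self.content = content
    self.status_code = status_code

def chat_with_autopull(chat, pull, model, messages):
  """Retry a chat once after pulling the model on a 404."""
  try:
    return chat(model=model, messages=messages)
  except ResponseError as e:
    if e.status_code != 404:
      raise
    pull(model)
    return chat(model=model, messages=messages)

# Stub chat: fails with 404 on the first call, succeeds on the second.
calls = {'n': 0}
def fake_chat(model, messages):
  calls['n'] += 1
  if calls['n'] == 1:
    raise ResponseError('model not found', 404)
  return {'message': {'content': 'ok'}}

pulled = []
result = chat_with_autopull(fake_chat, pulled.append, 'does-not-yet-exist', [])
print(result['message']['content'])
# → ok
```

With the real library, pass ollama.chat and ollama.pull in place of the stubs and catch ollama.ResponseError instead of the stand-in class.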


Download files

Source Distribution

ollama-0.0.1.tar.gz (7.1 kB)

Built Distribution

ollama-0.0.1-py3-none-any.whl (7.4 kB)

File details

Details for the file ollama-0.0.1.tar.gz.

File metadata

  • Size: 7.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

  • SHA256: 3b01aad6fbbf46781676988459fbe1c77ae029d4f1c0c29b733692249c81ecbb
  • MD5: 99d4f3647228857ce7dc9c510c274482
  • BLAKE2b-256: c8288876a16479edffa6c15323b8fe1f7c7600fe8ed61446658a1518bb7603e0

File details

Details for the file ollama-0.0.1-py3-none-any.whl.

File metadata

  • Size: 7.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

  • SHA256: d2187efd788dc7c60dc88cf9eb1b30ab792c5c9d39f029cb35f8d62a69c3f13e
  • MD5: 653c653beef857b3ac22c32d7d411a62
  • BLAKE2b-256: 3cc4b248d42cc00d623f99b3b262cf901dddd4e5e8dfef99c8afd8d8c1146ee8
