
The official Python client for Ollama.


Ollama Python Library

The Ollama Python library provides the easiest way to integrate your Python 3 project with Ollama.

Getting Started

Requires Python 3.8 or higher and a running Ollama server (http://localhost:11434 by default).

pip install ollama

A global default client is provided for convenience and can be used in the same way as the synchronous client.

import ollama
response = ollama.chat(model='llama2', messages=[{'role': 'user', 'content': 'Why is the sky blue?'}])
print(response['message']['content'])

Streaming works the same way through the global client:

import ollama
message = {'role': 'user', 'content': 'Why is the sky blue?'}
for part in ollama.chat(model='llama2', messages=[message], stream=True):
  print(part['message']['content'], end='', flush=True)

Using the Synchronous Client

from ollama import Client
message = {'role': 'user', 'content': 'Why is the sky blue?'}
response = Client().chat(model='llama2', messages=[message])
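
The client can also be pointed at a non-default server. A minimal sketch, assuming Client accepts a host keyword argument (an assumption not confirmed by this page):

from ollama import Client

# Sketch: target an Ollama server at an explicit address.
# The host argument here is an assumption based on the client's defaults.
client = Client(host='http://localhost:11434')
message = {'role': 'user', 'content': 'Why is the sky blue?'}
response = client.chat(model='llama2', messages=[message])
print(response['message']['content'])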

Response streaming can be enabled by setting stream=True. This modifies the function to return a Python generator where each part is an object in the stream.

from ollama import Client
message = {'role': 'user', 'content': 'Why is the sky blue?'}
for part in Client().chat(model='llama2', messages=[message], stream=True):
  print(part['message']['content'], end='', flush=True)
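
Each streamed part carries a fragment of the reply, so the complete message can be reassembled by concatenating the fragments. A minimal sketch:

from ollama import Client

# Sketch: accumulate streamed fragments into the full reply.
message = {'role': 'user', 'content': 'Why is the sky blue?'}
content = ''
for part in Client().chat(model='llama2', messages=[message], stream=True):
  content += part['message']['content']
print(content)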

Using the Asynchronous Client

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  response = await AsyncClient().chat(model='llama2', messages=[message])
  print(response['message']['content'])

asyncio.run(chat())

Similar to the synchronous client, setting stream=True modifies the function to return a Python asynchronous generator.

import asyncio
from ollama import AsyncClient

async def chat():
  message = {'role': 'user', 'content': 'Why is the sky blue?'}
  async for part in await AsyncClient().chat(model='llama2', messages=[message], stream=True):
    print(part['message']['content'], end='', flush=True)

asyncio.run(chat())
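
Because each call is awaited, the asynchronous client also makes it easy to issue several requests concurrently. A minimal sketch using asyncio.gather, reusing the chat API shown above:

import asyncio
from ollama import AsyncClient

async def ask(question):
  # One chat request per question.
  message = {'role': 'user', 'content': question}
  response = await AsyncClient().chat(model='llama2', messages=[message])
  return response['message']['content']

async def main():
  # Both requests are in flight at the same time.
  answers = await asyncio.gather(
    ask('Why is the sky blue?'),
    ask('Why is the grass green?'),
  )
  for answer in answers:
    print(answer)

asyncio.run(main())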

Handling Errors

Errors are raised if requests return an error status or if an error is detected while streaming.

model = 'does-not-yet-exist'

try:
  ollama.chat(model)
except ollama.ResponseError as e:
  print('Error:', e.content)
  if e.status_code == 404:
    ollama.pull(model)
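
The handler above pulls the missing model but does not retry the request. A minimal sketch of a retry-once pattern, assuming ollama.pull blocks until the model is available (an assumption about its non-streaming behavior):

import ollama

model = 'does-not-yet-exist'
message = {'role': 'user', 'content': 'Why is the sky blue?'}

try:
  response = ollama.chat(model=model, messages=[message])
except ollama.ResponseError as e:
  if e.status_code == 404:
    ollama.pull(model)  # assumed to block until the pull completes
    response = ollama.chat(model=model, messages=[message])  # retry once
  else:
    raise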

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ollama-0.1.3.tar.gz (7.5 kB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

ollama-0.1.3-py3-none-any.whl (7.6 kB)

File details

Details for the file ollama-0.1.3.tar.gz.

File metadata

  • Download URL: ollama-0.1.3.tar.gz
  • Size: 7.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for ollama-0.1.3.tar.gz

  • SHA256: 47a261a23a03c568b978dd06373e02e4ce143d57f7b77ed1e4b7b9de4a3938fa
  • MD5: 7b0eb9cac2ff09bc50b03002f4f84360
  • BLAKE2b-256: d84b594abb74580a4182fa896ffeb0efb4a4b721700efe67bcb060d3a77981e3

See more details on using hashes here.
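
As an illustration of using these hashes, a downloaded archive can be checked against the published SHA256 digest with Python's standard hashlib; a minimal sketch using the digest listed above:

import hashlib

# Sketch: verify the downloaded sdist against the published SHA256 digest.
expected = '47a261a23a03c568b978dd06373e02e4ce143d57f7b77ed1e4b7b9de4a3938fa'
with open('ollama-0.1.3.tar.gz', 'rb') as f:
  digest = hashlib.sha256(f.read()).hexdigest()
assert digest == expected, 'hash mismatch: file may be corrupt or tampered with'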

File details

Details for the file ollama-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: ollama-0.1.3-py3-none-any.whl
  • Size: 7.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for ollama-0.1.3-py3-none-any.whl

  • SHA256: e7a778d5f16ba39355359a028343d373e02f281f7fe7f0611a6542e5e35dd1cc
  • MD5: 1847dab3913b13ff038512936dbafa92
  • BLAKE2b-256: 57cab6a05153d895f2bd2e559a34a05168c915f42f8b491722a502b75ab5173e

See more details on using hashes here.
