
Ollama-Think Library


A thin wrapper around the ollama-python library with the addition of caching, increased think model compatibility and a little syntax sugar.

Features

  • Caching: Automatically caches responses to significantly speed up repeated requests.
  • Thinking: Enables some officially unsupported models to use thinking mode. Why hack?
  • Streaming and Non-streaming: Separate `stream` and `call` methods expose the streaming and non-streaming interfaces with clean type hints.
  • Syntax Sugar: Less boiler-plate, so that you can maintain your flow.

Quickstart

Get up and running in less than a minute.

1. Install the library:

pip install ollama-think

2. Use:

from ollama_think import Client

# Initialize the client
client = Client(host="http://localhost:11434", cache_dir=".ollama_cache", clear_cache=False)

# unpack the response into thinking and content
thinking, content = client.call(
    model="qwen3",                 # or any other model
    prompt="Why is the sky blue?", # shortcut for messages=[{'role': 'user', 'content': 'Why is the sky blue?'}]
    think=True                     # True to enable thinking; 'low', 'medium' or 'high' for gpt-oss
)

print(f"Thinking: {thinking}, Content: {content}")

Detailed Usage

Non-streaming

The call method provides strongly typed access to the underlying chat method in non-streaming mode. It returns a ThinkResponse object, which is a subclass of ollama.ChatResponse with some added convenience properties. You can use prompt or messages as you prefer.

from ollama_think import Client, ThinkResponse
client = Client()

# Make a non-streaming call
response: ThinkResponse = client.call(
    model="qwen3",          # The model to use
    prompt="Hello, world!", # A single user message
    messages=None,          # or a list of messages
    tools=None,             # A list of tools available to the model
    think=True,             # Enable thinking mode
    format=None,            # The format to return a response in: None | 'json' | your_obj.model_json_schema()
    options=None,           # Additional model parameter dict, such as {'temperature': 0.1, 'num_ctx': 8192}
    keep_alive=None,        # Controls how long the model will stay loaded in memory following the request
    use_cache=True)         # If True, attempts to retrieve the response from cache

# The response object contains all the original data from the Ollama ChatResponse
print(response)
# ThinkResponse(
#     model='qwen3',
#     created_at='2025-07-03T14:16:05.8452406Z',
#     done=True,
#     done_reason='stop',
#     total_duration=2461619200,
#     load_duration=2111438400,
#     prompt_eval_count=20,
#     prompt_eval_duration=78409600,
#     eval_count=16,
#     eval_duration=271104600,
#     message=Message(role='assistant', content='Hello, world! How can I assist you today?', thinking='...',
#                     images=None, tool_calls=None))

# For convenience, you can access the content and thinking as properties
print(response.thinking)
# '...'
print(response.content)
# 'Hello, world! ...'

# The response object can be used as a string which will show just the 'content'
print(f"The model said: {response}")  # same as response.content
# The model said: Hello, world! ...

# or unpack the response into thinking and content for single line access
thinking, content = response
print(f"Thinking: {thinking}, Content: {content}")

Streaming

The stream method provides strongly typed access to the underlying chat method in streaming mode. It returns an iterator of ThinkResponse chunks.

from ollama_think import Client
client = Client()

stream = client.stream(model="qwen3", prompt="Tell me a short story about italian chimpanzees and bananas", think=True)
for thinking, content in stream:
    print(thinking, end="")
    print(content, end="")  # empty until thinking is finished for most models

Thinking Mode

The think parameter tells Ollama to enable thinking for models that officially support it. For models that use non-standard ways of enabling thinking, we do the necessary. See Why hack? for details, the default config in src/ollama_think/config.yaml, and results in model_capabilities.md.

Some models will think, even without 'enabling' thinking. This output is separated out of the content into thinking.
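
For example, gpt-oss takes an effort level rather than a boolean. A minimal sketch, assuming gpt-oss is pulled locally:

from ollama_think import Client
client = Client()

# think accepts True / False, or an effort level for models such as gpt-oss
thinking, content = client.call(model="gpt-oss", prompt="Why is the sky blue?", think="high")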

Note: Not all models support thinking, officially or unofficially. Those that don't will return a 400 error if you try to enable thinking.
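
A minimal sketch of guarding against this, assuming the wrapper propagates ollama.ResponseError unchanged:

from ollama import ResponseError
from ollama_think import Client

client = Client()
try:
    thinking, content = client.call(model="qwen3", prompt="Hello", think=True)
except ResponseError as e:
    if e.status_code == 400:  # the model rejected the think parameter
        # fall back to a plain call without thinking
        thinking, content = client.call(model="qwen3", prompt="Hello", think=False)
    else:
        raise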

Caching

The client automatically caches responses using the lightweight diskcache library to avoid re-generating them for the same request. You can disable this behavior by setting use_cache=False.

# This call will be cached
response1 = client.call(model="qwen3", prompt="Hello, world!") # 0.31 seconds

# This call will use the cached response
response2 = client.call(model="qwen3", prompt="Hello, world!") # 0.0001 seconds

# This call will not attempt to get from the cache and will not store the result
response3 = client.call(model="qwen3", prompt="Hello, world!", use_cache=False)

You can clear the cache by passing clear_cache=True when initializing the client:

client = Client(clear_cache=True)

Options

The options parameter of the underlying chat method can be used to change how the model responds. The most commonly used parameters are:

  • temperature: Low values keep the model deterministic; higher values give more creativity. Typically 0.1 -> 1.0.
  • num_ctx: Ollama has a default context length of 2048, which can be increased if you have enough VRAM. If you send in more than num_ctx tokens, Ollama will silently truncate your message, which can lead to lost instructions.

from ollama_think import Client
client = Client()

prompt="Describe the earth to an alien who has just arrived."
options={'num_ctx': 8192, 'temperature': 0.9}

print("Using prompt:", prompt)
print("Using options:", options)

thinking, content = client.call(model="qwen3", prompt=prompt, think=True, options=options)
print(f"Thinking: {thinking}, Content: {content}")

See examples/options_example.py for a full list of options

Tool Calling

Before, and underneath, the concept of MCP servers sit the humble tool_calls. By telling the model that you have a tool available, you allow it to reply in a special format indicating that it wants to call that tool. Typically, this call is intercepted, the tool is executed and the result is sent back to the model. The model's second response can then be shown to a user.
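
A minimal sketch of that intercept-execute-reply loop, assuming the wrapper forwards tools and messages to ollama.chat unchanged (add_two_numbers is a made-up example tool):

from ollama_think import Client

def add_two_numbers(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

client = Client()
messages = [{'role': 'user', 'content': 'What is 2 + 3?'}]

# first call: the model may reply with a tool call instead of text
response = client.call(model='qwen3', messages=messages, tools=[add_two_numbers], use_cache=False)

for tool_call in response.message.tool_calls or []:
    if tool_call.function.name == 'add_two_numbers':
        result = add_two_numbers(**tool_call.function.arguments)
        messages.append(response.message)  # keep the assistant's tool call in the history
        messages.append({'role': 'tool', 'content': str(result), 'name': tool_call.function.name})

# second call: the model phrases the tool result for the user
thinking, content = client.call(model='qwen3', messages=messages, use_cache=False)
print(content)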

See examples/tool_calling_example.py

Response Formats

Forcing JSON format can encourage some models to behave. It is usually a good idea to mention JSON in the prompt.

from ollama_think import Client
import json

client = Client()

text_json = client.call(
    model="qwen3",
    prompt="Design a json representation of a spiral galaxy",
    format="json",
).content

my_object = json.loads(text_json)  # might explode if invalid json was returned
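
If you need to guard the parse, a minimal sketch:

try:
    my_object = json.loads(text_json)
except json.JSONDecodeError:
    my_object = None  # retry, repair or fall back as appropriate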

You can use pydantic models to describe the format you want more exactly.

from ollama_think import Client
from pydantic import BaseModel, Field
client = Client()

class Heat(BaseModel):
    """A specially crafted response object to capture an interpretation of heat"""
    reasoning: str = Field(..., description="your reasoning for the response")
    average_temperature: float = Field(..., description="average temperature")

text_obj = client.call(model="qwen3", prompt="How hot is the world?",
        format=Heat.model_json_schema()).content

my_obj = Heat.model_validate_json(text_obj)  # might explode if the format is invalid

See examples/response_format_example.py

Access to the underlying ollama client

Since ollama_think.Client is a thin wrapper around ollama.Client, you can still access all the underlying ollama client methods.

from ollama_think import Client
from ollama import ChatResponse

client = Client()
response: ChatResponse = client.chat(model='llama3.2', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response['message']['content'])

Prompts and Messages

from ollama_think import Client

client = Client()
# the prompt parameter in `call` and `stream` is just a shortcut for
prompt = 'Why is the sky blue?'
message = {'role': 'user', 'content': prompt}
client.call(model='llama3.2', messages=[message])  # the full form
client.call(model='llama3.2', prompt=prompt)       # the shortcut, same thing


Contributing

Contributions are welcome! Please open an issue or submit a pull request.

Development Setup

This project uses uv for package management, but pip should work too; a rough pip equivalent is sketched after the setup steps.

  1. Clone the repository:

    git clone https://github.com/your-username/ollama-think.git
    cd ollama-think
    
  2. Create a virtual environment and install dependencies: This command creates a virtual environment in .venv and installs all dependencies, including development tools.

    uv sync --extra dev
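
If you prefer pip, a rough equivalent, assuming the development tools are declared as a dev extra in pyproject.toml:

    python -m venv .venv
    source .venv/bin/activate  # .venv\Scripts\activate on Windows
    pip install -e ".[dev]"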
    

Running Checks

  • Linting and Formatting: To automatically format and lint the code, run:

    uv run ruff format .
    uv run ruff check . --fix
    
  • Running Tests:

    • To run the default (fast) unit tests:
      uv run pytest
      
    • To run the full test suite, including slow integration tests that require a running Ollama instance:
      uv run pytest -m "slow or not slow"
      
    • To pass a custom host to the integration tests:
      uv run pytest -m "slow or not slow" --host http://localhost:11434
      
  • Testing new models:

    # edit src/ollama_think/config.yaml
    # check the output from non-streaming and streaming
    uv run ./tests/test_hacks.py --host http://localhost:11434 --model "model_name"
    
    # check that this makes a difference
    uv run pytest ./tests/test_model_capabilities.py --host http://localhost:11434 -m "slow" --model "model_name"
    
    # re-generate doc
    uv run tests/generate_model_capabilities_report.py
    
    # submit a PR
    

License

This project is licensed under the MIT License.
