ollama2a

A Python library for running Ollama agents with automated server management.

Features

  • 🚀 Automated Ollama Server Management: Automatically starts and manages Ollama servers
  • 🤖 Pydantic AI Integration: Seamless integration with pydantic-ai for agent creation
  • 🔧 Configurable: Easy configuration of host, port, models, and tools
  • 🧪 Well Tested: Comprehensive test suite with high coverage
  • 📦 Production Ready: Robust error handling and resource management

Installation

pip install ollama2a

Development Installation

pip install "ollama2a[dev]"

Quick Start

Basic Usage

from ollama2a.agent_executor import OllamaAgentExecutor

# Create an agent executor with default settings
executor = OllamaAgentExecutor(
    ollama_model="qwen3:0.6b",
    system_prompt="You are a helpful assistant."
)

# The server starts automatically and the agent is ready to use
result = executor.agent.run_sync(user_prompt="What is the capital of France?")
print(result)

With Custom Tools

from pydantic_ai import Tool, RunContext
from ollama2a.agent_executor import OllamaAgentExecutor

# Define a custom tool
async def my_tool(ctx: RunContext[int], x: int, y: int) -> str:
    return f"Result: {x + y}"

# Create executor with custom tools
executor = OllamaAgentExecutor(
    ollama_model="qwen3:0.6b",
    system_prompt="You are a math assistant.",
    tools=[Tool(my_tool)]
)
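
Once registered, the agent can call the tool during a run. A minimal follow-up sketch (note: if your agent is configured with a dependencies type, pass deps= to run_sync as well):

# The model may choose to call my_tool to answer this
result = executor.agent.run_sync(user_prompt="What is 17 + 25?")
print(result)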

FastAPI Integration

from pydantic_ai import Tool, RunContext
from ollama2a.agent_executor import OllamaAgentExecutor

# Define a custom tool
def my_tool(ctx: RunContext[int], x: int, y: int) -> str:
    return f"Result: {x + y}"

# Create the agent executor
executor = OllamaAgentExecutor(
    ollama_host="localhost",
    ollama_port=11434,
    ollama_model="qwen3:0.6b",
    system_prompt="You are a helpful assistant.",
    tools=[Tool(my_tool)]
)

# Get the FastAPI app
app = executor.app

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
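
A quick way to sanity-check the running server is to fetch the agent card that A2A servers conventionally publish. This is a sketch: the /.well-known/agent.json path is an assumption based on the A2A convention, not something ollama2a documents.

import httpx

# Fetch the agent card (path is an A2A convention, assumed here)
card = httpx.get("http://localhost:8000/.well-known/agent.json")
print(card.json())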

Configuration

OllamaAgentExecutor Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| ollama_host | str | "localhost" | Ollama server host |
| ollama_port | int | 11434 | Ollama server port |
| ollama_model | str | "qwen3:0.6b" | Model to use |
| system_prompt | str | "You are a helpful assistant." | System prompt for the agent |
| description | str | "An agent that uses the Ollama API to execute tasks." | Agent description |
| tools | List[Tool] | [] | Custom tools for the agent |
| a2a_port | int | 8000 | Port for the A2A server |
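
For reference, here is a constructor call with every parameter from the table written out explicitly, using the documented defaults:

executor = OllamaAgentExecutor(
    ollama_host="localhost",
    ollama_port=11434,
    ollama_model="qwen3:0.6b",
    system_prompt="You are a helpful assistant.",
    description="An agent that uses the Ollama API to execute tasks.",
    tools=[],
    a2a_port=8000,
)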

Server Management

The HybridOllamaManager automatically handles:

  • Server startup: Starts Ollama server if not running
  • Model downloading: Downloads models if not available locally
  • Health checks: Monitors server health
  • Graceful shutdown: Properly terminates processes
  • Error handling: Robust error handling and retries

Manual Server Management

from ollama2a.ollama_manager import HybridOllamaManager

manager = HybridOllamaManager(host="localhost", port=11434)
manager.ensure_server_running()

# Use the manager
response = manager.run_model("qwen3:0.6b", "Hello world!")
print(response)

# Cleanup when done
manager.cleanup()
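
Since cleanup() should run even if a call fails, wrapping manual usage in try/finally is a safe pattern. A minimal sketch using only the methods shown above:

from ollama2a.ollama_manager import HybridOllamaManager

manager = HybridOllamaManager(host="localhost", port=11434)
try:
    manager.ensure_server_running()
    print(manager.run_model("qwen3:0.6b", "Hello world!"))
finally:
    # Always terminate the managed server process
    manager.cleanup()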

Requirements

  • Python 3.9+
  • Ollama installed on your system

Installing Ollama

Follow the official Ollama installation guide for your operating system.
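
To confirm the binary is discoverable before using ollama2a, a minimal check with the standard library:

import shutil

# ollama2a manages a local Ollama server, so the executable should be on PATH
if shutil.which("ollama") is None:
    raise RuntimeError("Ollama not found; see https://ollama.com for installers")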

Development

Setup Development Environment

# Clone the repository
git clone https://github.com/yourusername/ollama2a.git
cd ollama2a

# Install in development mode
pip install -e ".[dev]"

Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=ollama2a --cov-report=html

# Run specific test file
pytest tests/test_ollama_manager.py -v

Code Quality

# Format code
black .

# Sort imports
isort .

# Lint code
flake8 .

# Type checking
mypy ollama2a/

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Changelog

0.1.0 (2025-01-XX)

  • Initial release
  • Basic Ollama server management
  • Pydantic AI integration
  • Starlette app generation
  • Comprehensive test suite
