
Lightweight Python abstractions and connectors for LLM providers (OpenAI, Claude, Gemini, Ollama).


modelito

Modelito is a compact, dependency-light Python library that provides provider-agnostic abstractions and connectors for large language models (LLMs). It offers lightweight shims for OpenAI, Claude, Gemini and local Ollama deployments, plus utilities for token counting and timeout estimation, and small helpers to manage Ollama servers when needed. The library is designed for easy integration into applications and CI pipelines.

Quick start

Install in editable mode for development (install optional extras as needed):

pip install -e .[dev]
pip install -r dev-requirements.txt

# Optional extras
pip install -e .[ollama,tokenization,openai,anthropic]

Run tests:

pytest -q

Build and install

To build a source distribution and wheel locally:

python -m pip install --upgrade build
python -m build

Install from the built wheel:

pip install dist/modelito-1.0.0-py3-none-any.whl

See the docs/ folder for more details on calibration and migration.

Providers

This package provides compatibility shims and small, dependency-light implementations for common provider interfaces. When optional extras are installed the package will attempt to use real SDK clients; otherwise the shims provide safe offline-friendly fallbacks suitable for testing.

Provided shims and utilities:

  • OllamaProvider — HTTP-aware provider that will call a local Ollama HTTP API when available. If the HTTP API is unavailable the provider will attempt to use the local Ollama CLI as a best-effort fallback before returning a deterministic stub useful for tests and examples.
  • GeminiProvider, GrokProvider — lightweight shims.
  • OpenAIProvider, ClaudeProvider — will use the official SDKs when installed, falling back to deterministic behavior otherwise.
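The OllamaProvider fallback order described above (HTTP API, then CLI, then deterministic stub) can be sketched roughly as follows. This is a simplified illustration, not the library's actual code: the function name `summarize_with_fallback` is hypothetical, and the request payload is elided.

```python
import shutil
import urllib.error
import urllib.request

def summarize_with_fallback(prompt: str, base_url: str = "http://localhost:11434") -> str:
    """Try the local Ollama HTTP API, then the CLI, then a deterministic stub."""
    try:
        req = urllib.request.Request(
            f"{base_url}/api/generate",
            data=b"{}",  # a real payload would carry the model name and prompt
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except (urllib.error.URLError, OSError):
        pass  # no HTTP server reachable; fall through
    if shutil.which("ollama"):
        # best-effort CLI path: a real implementation would shell out to
        # `ollama run <model>` here and return its output on success
        pass
    # deterministic stub: stable output suitable for tests and examples
    return f"[stub summary] {prompt[:40]}"
```

The stub branch is what makes the shims "offline-friendly": code under test always gets a string back, regardless of what is installed locally.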

License / AS IS

This software is provided "AS IS" and without warranties of any kind. See the included LICENSE file for the full MIT license text.

CI / Integration Tests

This repository includes a GitHub Actions workflow at .github/workflows/ci.yml. The workflow runs mypy and the unit test suite on push and pull requests.

Ollama integration tests are intentionally gated and will only run when you explicitly enable them. To run integration tests locally or in CI set the environment variable RUN_OLLAMA_INTEGRATION=1. Additional optional flags:

  • ALLOW_OLLAMA_INSTALL=1 — permit the integration tests to attempt installing Ollama when missing.
  • ALLOW_OLLAMA_DOWNLOAD=1 — permit downloading remote models during integration tests.
  • ALLOW_OLLAMA_UPDATE=1 — permit running update flows during integration tests.
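In test code, these gates presumably reduce to simple environment checks along these lines (helper names are hypothetical; only the variable names come from the list above):

```python
import os

def ollama_integration_enabled(env=os.environ) -> bool:
    # Integration tests run only when explicitly opted in.
    return env.get("RUN_OLLAMA_INTEGRATION") == "1"

def ollama_install_allowed(env=os.environ) -> bool:
    # Installing Ollama is a separate, additional opt-in.
    return env.get("ALLOW_OLLAMA_INSTALL") == "1"
```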

Example (local):

RUN_OLLAMA_INTEGRATION=1 pytest tests/test_ollama_integration.py -q

Provider interface

modelito exposes a minimal structural Provider Protocol that codifies the small runtime surface expected from provider implementations and third-party adapters. The Protocol is intentionally small to remain compatible with existing duck-typed providers — it requires only:

  • list_models() -> list[str]
  • summarize(messages, settings=None) -> str
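A structural Protocol with those two methods looks roughly like this (a sketch: parameter annotations beyond the method names are assumptions, and `StubProvider` is an invented example, not part of the package):

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class Provider(Protocol):
    """Structural contract: any object with these two methods qualifies."""

    def list_models(self) -> list[str]: ...

    def summarize(self, messages: list[Any], settings: Any = None) -> str: ...

class StubProvider:
    """Duck-typed provider; no subclassing needed to satisfy the Protocol."""

    def list_models(self) -> list[str]:
        return ["stub-model"]

    def summarize(self, messages, settings=None) -> str:
        return f"summary of {len(messages)} message(s)"

print(isinstance(StubProvider(), Provider))  # → True (method-presence check)
```

Because the Protocol is structural, `isinstance()` with a `@runtime_checkable` Protocol only verifies that the methods exist; it does not check their signatures or return types.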

All built-in providers shipped with the package (OpenAIProvider, ClaudeProvider, GeminiProvider, OllamaProvider, GrokProvider) now explicitly subclass Provider. The Provider Protocol is decorated with @runtime_checkable, so you can use isinstance() checks at runtime when you need to enforce the contract in application code.

Example usage:

from modelito import Provider, OllamaProvider

p: Provider = OllamaProvider()
if isinstance(p, Provider):
    resp = p.summarize([{"role": "user", "content": "hello"}])
    print(resp)

This release introduces typed Message/Response dataclasses and expands the provider surface into a small set of optional Protocols:

  • SyncProvider (alias: Provider) — existing synchronous summarize()/list_models() surface.
  • AsyncProvider — async acomplete() surface for providers that support awaitable calls.
  • StreamingProvider — streaming stream() generator surface.
  • EmbeddingProvider — embed() surface for vector embeddings.
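The async and streaming surfaces can be sketched the same way. The method names (`acomplete`, `stream`) come from the list above; the full signatures and the stub class are assumptions for illustration:

```python
import asyncio
from typing import Any, Iterator, Protocol, runtime_checkable

@runtime_checkable
class AsyncProvider(Protocol):
    async def acomplete(self, messages: list[Any], settings: Any = None) -> str: ...

@runtime_checkable
class StreamingProvider(Protocol):
    def stream(self, messages: list[Any], settings: Any = None) -> Iterator[str]: ...

class StubAsyncStreamer:
    """One object can satisfy several of the optional Protocols at once."""

    async def acomplete(self, messages, settings=None) -> str:
        return "done"

    def stream(self, messages, settings=None):
        yield from ("chunk-1", "chunk-2")

s = StubAsyncStreamer()
print(isinstance(s, AsyncProvider), isinstance(s, StreamingProvider))  # → True True
print(asyncio.run(s.acomplete([])))                                    # → done
print(list(s.stream([])))                                              # → ['chunk-1', 'chunk-2']
```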

modelito exposes Message and Response dataclasses; connectors now prefer these types while still accepting legacy dict-shaped messages for compatibility. Example usage with the new API:

from modelito import Provider, Message, OllamaProvider, OllamaConnector

p: Provider = OllamaProvider()
if isinstance(p, Provider):
    resp_text = p.summarize([Message(role="user", content="hello")])
    print(resp_text)

conn = OllamaConnector(provider=p)
res = conn.complete(conv_id="example", new_messages=[Message(role="user", content="hello")])
print(res.text)
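The shape of the typed dataclasses can be inferred from the examples above. This sketch limits the fields to the ones those examples exercise (`role`, `content`, `text`); the real dataclasses may carry more, and the `to_legacy_dict` bridge is a hypothetical helper, not part of the package:

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    content: str

@dataclass
class Response:
    text: str

def to_legacy_dict(msg: Message) -> dict:
    # Bridge for connectors that still accept legacy dict-shaped messages.
    return {"role": msg.role, "content": msg.content}

m = Message(role="user", content="hello")
print(to_legacy_dict(m))  # → {'role': 'user', 'content': 'hello'}
```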



Download files


Source Distribution

modelito-1.0.0.tar.gz (36.8 kB)


Built Distribution


modelito-1.0.0-py3-none-any.whl (36.0 kB)


File details

Details for the file modelito-1.0.0.tar.gz.

File metadata

  • Download URL: modelito-1.0.0.tar.gz
  • Size: 36.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.13

File hashes

Hashes for modelito-1.0.0.tar.gz:

  • SHA256: 8eb6e06e52c630a64c0d9d9ff5f0297f71bcb485db3968f5d1740457ff2f8833
  • MD5: 48143c1b9af54bfb6e1d95e71f0cd6d2
  • BLAKE2b-256: 1ef05817c8a4aad630689a34b41ccd1eede7cace7282fcf7cec6d7363fdfaeb8


File details

Details for the file modelito-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: modelito-1.0.0-py3-none-any.whl
  • Size: 36.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.13

File hashes

Hashes for modelito-1.0.0-py3-none-any.whl:

  • SHA256: 477144cf3163955f3f2f922b294cd9145f77ff61a3db0db7d4b76c9c12e1e0a1
  • MD5: 0b9111e0bcbf210c77f7f50af184f68c
  • BLAKE2b-256: 9b7eeb2e3826e5e9c388bb16c8c4ade68169d7708a4dee2dcb00f240b643b9dc

