
Pure-Python AI backbone for Mito – providers, models, prompts, and utilities with zero Jupyter Server dependencies.


mito-ai-core

The shared Python AI layer for Mito -- LLM providers, models, prompts, and utils. No Jupyter Server or Tornado dependency.

mito-ai (the JupyterLab extension) depends on this package for all its AI logic. If you're building something else on top of Mito's AI stack, this is the package you want.

Install

pip install mito-ai-core

Development

Working on mito-ai-core only

cd mito-ai-core
pip install -e ".[test]"

Working on both mito-ai-core and mito-ai at the same time

Install both packages in editable mode so changes in either package are picked up immediately without reinstalling:

# From the repo root
pip install -e ./mito-ai-core
pip install -e "./mito-ai[test]"

Order matters! Install mito-ai-core first since mito-ai depends on it. After this, any change you make to a .py file in either package takes effect on the next import (no rebuild needed).
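To sanity-check that the editable installs took effect, you can ask Python where the package resolves from. With `pip install -e`, the module's origin should point into your repo checkout rather than site-packages. A minimal check (nothing here is Mito-specific):

```python
import importlib.util

# With an editable install, the module's origin points into the source
# checkout (e.g. .../mito-ai-core/mito_ai_core/__init__.py), not site-packages.
spec = importlib.util.find_spec("mito_ai_core")
if spec is not None:
    print(spec.origin)
else:
    print("mito_ai_core is not importable in this environment")
```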

How it relates to mito-ai

mito-ai-core contains all the pure-Python AI logic -- providers, models, prompts, message history, utils.

mito-ai is the JupyterLab extension that adds WebSocket/REST handlers, UI integration, and Streamlit conversion on top of it.

If you're changing provider logic, prompt templates, or models, you're working here. If you're changing how the extension talks to the frontend or handles HTTP requests, you're working in mito-ai.

Usage

Everything goes through ProviderManager:

from mito_ai_core.provider_manager import ProviderManager
from mito_ai_core.completions.models import MessageType

pm = ProviderManager()
pm.set_selected_model("gpt-4.1")

# One-shot (these are coroutines -- await them inside an event loop, e.g. via asyncio.run)
response = await pm.request_completions(
    message_type=MessageType.CHAT,
    messages=[{"role": "user", "content": "Hello!"}],
)

# Streaming -- pass a callback that receives each chunk
response = await pm.stream_completions(
    message_type=MessageType.CHAT,
    messages=[{"role": "user", "content": "Hello!"}],
    message_id="msg-1",
    thread_id="thread-1",
    reply_fn=lambda chunk: print(chunk),
)

ProviderManager picks the right client (OpenAI, Anthropic, Gemini, Copilot, LiteLLM, Abacus) based on the selected model. It handles retries, telemetry, and token logging.

This is a library, not a server. The caller owns the transport (WebSocket, HTTP, CLI, whatever).
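Because the caller owns the transport, wiring the library into an application is ordinary asyncio. The sketch below shows the shape of that pattern with a stub standing in for ProviderManager, so it runs without mito-ai-core installed; against the real library, the calls are `request_completions` / `stream_completions` exactly as above.

```python
import asyncio

class StubProvider:
    """Stand-in for ProviderManager so the transport pattern runs anywhere."""
    async def request_completions(self, message_type, messages):
        # The real ProviderManager routes this to the selected LLM client.
        return f"echo: {messages[-1]['content']}"

async def main() -> str:
    pm = StubProvider()
    # The caller owns the event loop and the I/O channel (WebSocket, HTTP,
    # CLI); the library only turns messages into completions.
    return await pm.request_completions(
        message_type="chat",
        messages=[{"role": "user", "content": "Hello!"}],
    )

result = asyncio.run(main())
print(result)  # → echo: Hello!
```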

Key modules

  • provider_manager -- routes requests to the right LLM client
  • clients/ -- OpenAI, Anthropic, Gemini, Copilot wrappers
  • completions/models -- all the dataclasses and Pydantic models
  • completions/message_history -- thread-safe chat persistence
  • completions/prompt_builders/ -- prompt construction for chat, agent, debug, inline completion, charts
  • utils/ -- model resolution, token estimation, telemetry, rate limits

Download files

Source distribution

  • mito_ai_core-0.1.1.tar.gz (142.0 kB)

Built distribution

  • mito_ai_core-0.1.1-py3-none-any.whl (206.0 kB, Python 3)

File details

Details for the file mito_ai_core-0.1.1.tar.gz.

File metadata

  • Download URL: mito_ai_core-0.1.1.tar.gz
  • Size: 142.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

  Algorithm    Hash digest
  SHA256       873c78ab32dcaed0292e2664083d2780b42353e792b2a0408f3091b1d6bb49d7
  MD5          7d08a10bec0a51c2e39312f911236c08
  BLAKE2b-256  cbdbca08abee1e58d09319b37434ecb4209c6ce97a295e51ca115cef12d92827

File details

Details for the file mito_ai_core-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: mito_ai_core-0.1.1-py3-none-any.whl
  • Size: 206.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.15

File hashes

  Algorithm    Hash digest
  SHA256       5c3e21329516f371489363c58e1cc0e5ac14b5bbca8c0376c828c92cffc91afb
  MD5          1b0b2d76b9d5a11820bcc1172071f8e4
  BLAKE2b-256  f87bb15e8b321ce5abd0ea048889fe392365160d71928ce318c23014dcce4256
