kantan-llm 😺✨

Minimal LLM client getter for OpenAI Responses and OpenAI-compatible Chat Completions.

A tiny Python library that removes the boring boilerplate (keys/URLs/provider selection) so you can call LLMs with a single get_llm() 💨

Big idea: set env vars for the providers/models you use, then just do get_llm("model-name") and it “just connects” 😺✨

Supported providers (roughly) 🌍

  • OpenAI (Responses)
  • Anthropic (Claude via OpenAI-compatible SDK)
  • OpenRouter (OpenAI-compatible Chat)
  • Google (Gemini via OpenAI-compatible Chat)
  • LMStudio / Ollama / any OpenAI-compatible Chat

Install 📦

pip install kantan-llm

Quickstart 🚀

OpenAI (Responses API is the source of truth)

export OPENAI_API_KEY="sk-..."
from kantan_llm import get_llm

llm = get_llm("gpt-4.1-mini")
res = llm.responses.create(input="Say hi in one short line.")
print(res.output_text)

llm is OpenAI-SDK-compatible: unknown attributes delegate to the underlying client.
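That delegation is ordinary Python attribute forwarding. The sketch below shows the general pattern with made-up class names (`DelegatingClient`, `FakeSDKClient`); it is illustrative, not kantan-llm's actual implementation:

```python
class DelegatingClient:
    """Wraps an underlying SDK client; unknown attributes fall through to it."""

    def __init__(self, inner):
        self._inner = inner

    def __getattr__(self, name):
        # __getattr__ only fires when normal lookup fails, so the wrapper's
        # own attributes take precedence over the inner client's.
        return getattr(self._inner, name)


class FakeSDKClient:
    """Stand-in for a real SDK client."""
    api_version = "v1"

    def ping(self):
        return "pong"


llm = DelegatingClient(FakeSDKClient())
print(llm.ping())        # -> pong (forwarded to the inner client)
print(llm.api_version)   # -> v1
```

In this pattern, anything the wrapper does not define itself behaves exactly as it would on the raw SDK client.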

OpenAI-compatible (Chat Completions is the source of truth)

LMStudio (example: openai/gpt-oss-20b)

export LMSTUDIO_BASE_URL="http://192.168.11.16:1234"  # `/v1` is optional
from kantan_llm import get_llm

llm = get_llm("openai/gpt-oss-20b", provider="lmstudio")
cc = llm.chat.completions.create(messages=[{"role": "user", "content": "Return exactly: OK"}], max_tokens=16)
print(cc.choices[0].message.content)

Ollama (example)

export OLLAMA_BASE_URL="http://localhost:11434"  # `/v1` is optional
from kantan_llm import get_llm

llm = get_llm("llama3.2", provider="ollama")
cc = llm.chat.completions.create(messages=[{"role": "user", "content": "Return exactly: OK"}], max_tokens=16)
print(cc.choices[0].message.content)

Anthropic (Claude via OpenAI-compatible SDK)

export CLAUDE_API_KEY="sk-ant-..."
from kantan_llm import get_llm

llm = get_llm("claude-3-5-sonnet-latest")  # if `CLAUDE_API_KEY` exists -> provider=anthropic (inferred)
cc = llm.chat.completions.create(messages=[{"role": "user", "content": "Return exactly: OK"}], max_tokens=16)
print(cc.choices[0].message.content)

OpenRouter (includes Claude, etc.)

export OPENROUTER_API_KEY="..."
from kantan_llm import get_llm

llm = get_llm("anthropic/claude-3.5-sonnet", provider="openrouter")  # explicit provider recommended: claude-* models otherwise resolve to Anthropic first
cc = llm.chat.completions.create(messages=[{"role": "user", "content": "Return exactly: OK"}], max_tokens=16)
print(cc.choices[0].message.content)

Google (Gemini via an OpenAI-compatible endpoint)

export GOOGLE_API_KEY="..."
from kantan_llm import get_llm

llm = get_llm("gemini-2.0-flash")
cc = llm.chat.completions.create(messages=[{"role": "user", "content": "Return exactly: OK"}], max_tokens=16)
print(cc.choices[0].message.content)

Provider rules 🧭

  • gpt-* → openai
  • gemini-* → google
  • claude-* → anthropic (if CLAUDE_API_KEY is set) → openrouter (if OPENROUTER_API_KEY is set) → otherwise compat
  • If the model name is not recognized, the first provider whose env vars are set is picked, in this order: lmstudio → ollama → openrouter → anthropic → google
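The rules above can be sketched as a small resolver. This is an illustrative reimplementation of the documented behavior (not kantan-llm's source), with `env` standing in for `os.environ`:

```python
def infer_provider(model: str, env: dict) -> str:
    """Pick a provider from a model name and available env vars (illustrative)."""
    if model.startswith("gpt-"):
        return "openai"
    if model.startswith("gemini-"):
        return "google"
    if model.startswith("claude-"):
        if "CLAUDE_API_KEY" in env:
            return "anthropic"
        if "OPENROUTER_API_KEY" in env:
            return "openrouter"
        return "compat"
    # Unrecognized model name: first provider whose env vars are present.
    order = [
        ("lmstudio", "LMSTUDIO_BASE_URL"),
        ("ollama", "OLLAMA_BASE_URL"),
        ("openrouter", "OPENROUTER_API_KEY"),
        ("anthropic", "CLAUDE_API_KEY"),
        ("google", "GOOGLE_API_KEY"),
    ]
    for provider, var in order:
        if var in env:
            return provider
    return "compat"


print(infer_provider("gpt-4.1-mini", {}))                              # -> openai
print(infer_provider("claude-3-5-sonnet-latest",
                     {"OPENROUTER_API_KEY": "..."}))                   # -> openrouter
print(infer_provider("openai/gpt-oss-20b", {"LMSTUDIO_BASE_URL": "x"}))  # -> lmstudio
```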

Explicit provider 🎯

from kantan_llm import get_llm

llm = get_llm("gpt-4.1-mini", provider="openai")

Fallback (order = priority) 🧯

from kantan_llm import get_llm

llm = get_llm("gpt-4.1-mini", providers=["openai", "lmstudio", "openrouter"])
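Conceptually, fallback just tries each provider in priority order until one connects. A hypothetical sketch with a stub connector (the `connect` callback and its failure behavior are illustrative, not kantan-llm's API):

```python
def get_llm_with_fallback(model, providers, connect):
    """Return the first client a provider yields, trying in priority order.

    `connect(model, provider)` stands in for real provider setup and is
    expected to raise on failure (illustrative, not kantan-llm's API).
    """
    errors = {}
    for provider in providers:
        try:
            return connect(model, provider)
        except Exception as exc:
            errors[provider] = exc
    raise RuntimeError(f"all providers failed: {errors}")


def connect(model, provider):
    # Pretend only LMStudio is reachable in this environment.
    if provider == "lmstudio":
        return f"client({provider}:{model})"
    raise ConnectionError(provider)


print(get_llm_with_fallback("gpt-4.1-mini", ["openai", "lmstudio"], connect))
# -> client(lmstudio:gpt-4.1-mini)
```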

Tracing / Tracer 🧵

By default, get_llm() enables a simple tracer that prints input/output (colorized) for each LLM call.

from kantan_llm import get_llm
from kantan_llm.tracing import trace

llm = get_llm("gpt-4.1-mini")
with trace("workflow"):
    llm.responses.create(input="Say hi.")

More: docs/tracing.md
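The tracer idea can be illustrated with a plain context manager that brackets a block of work and prints timing. This `toy_trace` is a toy analogue for intuition only, not the actual kantan_llm.tracing implementation:

```python
import time
from contextlib import contextmanager


@contextmanager
def toy_trace(name):
    """Print when a traced block starts and ends, and how long it took."""
    start = time.perf_counter()
    print(f"[trace] {name}: start")
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"[trace] {name}: done in {elapsed:.3f}s")


with toy_trace("workflow"):
    result = sum(range(1000))  # stands in for an LLM call
```

A real tracer would additionally record structured spans (inputs, outputs, model names) rather than just printing.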

Search (SQLite) 🔎

Use SQLiteTracer as a lightweight search backend for traces/spans.

from kantan_llm.tracing import SpanQuery, TraceQuery
from kantan_llm.tracing.processors import SQLiteTracer

tracer = SQLiteTracer("traces.sqlite3")
traces = tracer.search_traces(query=TraceQuery(keywords=["hello"], limit=10))
spans = tracer.search_spans(query=SpanQuery(keywords=["hello"], limit=10))

More: docs/search.md. Tutorial: docs/tutorial_trace_analysis.md
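Keyword search over stored spans is, at its core, a pattern match against persisted text. A self-contained sketch using Python's built-in sqlite3 (the `spans` table schema and `search_spans` helper here are made up for illustration, not SQLiteTracer's actual schema or API):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spans (id INTEGER PRIMARY KEY, content TEXT)")
conn.executemany(
    "INSERT INTO spans (content) VALUES (?)",
    [("hello world",), ("goodbye",), ("hello again",)],
)


def search_spans(conn, keyword, limit=10):
    """Return span rows whose content contains the keyword (substring match)."""
    cur = conn.execute(
        "SELECT id, content FROM spans WHERE content LIKE ? LIMIT ?",
        (f"%{keyword}%", limit),
    )
    return cur.fetchall()


print(search_spans(conn, "hello"))  # two matching rows
```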

Examples 📚

  • examples/tracing_basic.py
  • examples/search_sqlite.py

Environment variables 🔐

  • OpenAI
    • OPENAI_API_KEY (required)
    • OPENAI_BASE_URL (optional)
  • Generic compatible (compat)
    • KANTAN_LLM_BASE_URL (required)
    • KANTAN_LLM_API_KEY (optional; falls back to a dummy value)
  • LMStudio
    • LMSTUDIO_BASE_URL (required)
  • Ollama
    • OLLAMA_BASE_URL (required)
  • OpenRouter
    • OPENROUTER_API_KEY (required)
  • Anthropic
    • CLAUDE_API_KEY (required)
    • CLAUDE_BASE_URL (optional)
  • Google
    • GOOGLE_API_KEY (required)
    • GOOGLE_BASE_URL (optional)
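The "`/v1` is optional" notes on LMSTUDIO_BASE_URL and OLLAMA_BASE_URL suggest base URLs are normalized before use. A plausible sketch of such normalization (illustrative only, not the library's actual code):

```python
def normalize_base_url(url: str) -> str:
    """Ensure an OpenAI-compatible base URL ends with /v1 (illustrative)."""
    url = url.rstrip("/")
    if not url.endswith("/v1"):
        url += "/v1"
    return url


print(normalize_base_url("http://localhost:11434"))        # -> http://localhost:11434/v1
print(normalize_base_url("http://192.168.11.16:1234/v1"))  # -> http://192.168.11.16:1234/v1
```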

Error example 💥

  • Missing OpenAI key: running python -c 'from kantan_llm import get_llm; get_llm("gpt-4.1-mini")' without OPENAI_API_KEY set fails with:
    [kantan-llm][E2] Missing OPENAI_API_KEY for provider: openai

Tests 🧪

Live integration tests (real APIs) are opt-in:

KANTAN_LLM_RUN_LIVE_TESTS=1 pytest -q -m integration
