Unified interface to all LLM providers with essential infrastructure for tool calling, streaming, and model management
AbstractCore
Unified LLM Interface
Write once, run everywhere
AbstractCore is an offline-capable, open-source-first LLM infrastructure layer
for Python applications. It gives you one create_llm(...) API across local
runtimes, self-hosted servers, cloud APIs, and OpenAI-compatible gateways.
Use it in-process from Python, or run it as a universal /v1 endpoint for apps
that already speak the OpenAI API. The same application can run fully offline
once local model assets are installed, stay private on your own inference
server, or route to hosted providers when you want managed capacity.
The goal is simple: put LLM capability at your fingertips without tying your product to a vendor, network connection, or model family. AbstractCore keeps application code portable while the model underneath moves between OpenAI, Anthropic, Ollama, LM Studio, MLX, HuggingFace/GGUF, vLLM, OpenRouter, Portkey, or any OpenAI-compatible backend.
The default install is intentionally lightweight; add providers and optional subsystems via explicit install extras. For local runtimes, AbstractCore is cache-first and offline-first: it will not silently download model weights; you pull or prefetch the models you want, then run without internet when your chosen provider and tools are local.
First-class support for:
- offline-capable local operation with explicit model setup (no silent downloads)
- local/open-weight model backends (Ollama, LM Studio, MLX, HuggingFace/GGUF, vLLM)
- cloud, hosted gateway, and generic OpenAI-compatible providers
- sync + async
- streaming + non-streaming
- universal tool calling (native + prompted tool syntax)
- structured output (Pydantic)
- unified generation parameters, capability detection, and provider quirks
- session memory, prompt caching, events, tracing, and retry-aware reliability hooks
- media input (images/audio/video + documents) with explicit, policy-driven fallbacks (*)
- optional capability plugins (core.voice / core.audio / core.vision) for deterministic TTS/STT and generative vision (via abstractvoice / abstractvision)
- glyph visual-text compression for long documents (**)
- optional OpenAI-compatible /v1 gateway server (multi-provider) and single-model endpoint
(*) Media input is policy-driven (no silent semantic changes). If a model doesn’t support images, AbstractCore can use a configured vision model to generate short visual observations and inject them into your text-only request (vision fallback). Audio/video attachments are also policy-driven (audio_policy, video_policy) and may require capability plugins for fallbacks. See Media Handling and Centralized Config.
(**) Optional visual-text compression: render long text/PDFs into images and process them with a vision model to reduce token usage. See Glyph Visual-Text Compression (install pip install "abstractcore[compression]"; for PDFs also install pip install "abstractcore[media]").
Docs: Getting Started · FAQ · Docs Index · https://lpalbou.github.io/AbstractCore
Why AbstractCore
Many libraries can call an LLM. AbstractCore is for the messy middle of real applications, where you need the same product code to survive different model families, local inference servers, API dialects, offline deployments, and capability gaps.
Open-source and self-hosted models are first-class, not a demo path. AbstractCore handles the things that often break when you move beyond a single hosted API: prompted vs native tools, schema-following differences, structured-output retry, reasoning text, media support, token budget vocabulary, local server discovery, and prompt/cache behavior.
That makes it a practical foundation for privacy-sensitive assistants, local developer tools, document workflows, research machines, edge deployments, and cloud-backed production services. You can build remote-first products, fully local products, or hybrid products that move between the two as cost, privacy, latency, and hardware constraints change.
Use AbstractCore when you want a focused provider layer that stays close to your application code. Use the wider AbstractFramework stack when you also need durable runtime execution, agents, flows, gateways, agentic CLI surfaces, memory, or assistant applications such as AbstractAssistant.
AbstractFramework ecosystem
AbstractCore is part of the AbstractFramework ecosystem:
- AbstractFramework (umbrella): https://github.com/lpalbou/AbstractFramework
- AbstractCore (this package): provider-agnostic LLM I/O + reliability primitives
- AbstractRuntime: durable tool/effect execution, workflows, and state persistence (recommended host runtime) — https://github.com/lpalbou/abstractruntime
- Wider stack: agents, flows, gateway control, agentic CLI integrations, memory, semantics, coding tools, and digital assistant surfaces built on the same foundation
By default, AbstractCore is pass-through for tools (execute_tools=False): it returns structured tool calls in response.tool_calls, and your runtime decides whether/how to execute them (policy, sandboxing, retries, persistence). See Tool Calling and Architecture.
graph LR
APP["Your app"] --> AC["AbstractCore"]
AF["AbstractFramework optional"] --> AC
AF --> RT["AbstractRuntime / Agent / Flow / Gateway"]
AC --> P["Provider adapter"]
P --> LLM["LLM backend"]
AC -.->|tool calls| RT
RT -.->|tool results| AC
Install
Choose the smallest install that matches where your models run. Extras compose,
so you can start with abstractcore[remote] and add media, tools, server,
or local runtime extras as your app grows.
# Core: local HTTP servers and gateways that need no SDK
# Includes Ollama, LM Studio, OpenRouter, Portkey, and OpenAI-compatible /v1 endpoints
pip install abstractcore
# Hosted API SDKs (OpenAI + Anthropic). OpenRouter/Portkey still work from core.
pip install "abstractcore[remote]"
# Individual provider SDKs / local runtimes
pip install "abstractcore[openai]" # OpenAI SDK
pip install "abstractcore[anthropic]" # Anthropic SDK
pip install "abstractcore[huggingface]" # Transformers / torch (heavy)
pip install "abstractcore[mlx]" # Apple Silicon local inference (heavy)
pip install "abstractcore[vllm]" # NVIDIA CUDA / ROCm (heavy)
# Optional application features
pip install "abstractcore[tools]" # built-in web tools (web_search, skim_websearch, skim_url, fetch_url)
pip install "abstractcore[media]" # images, PDFs, Office docs
pip install "abstractcore[compression]" # glyph visual-text compression (Pillow-only)
pip install "abstractcore[embeddings]" # EmbeddingManager + local embedding models
pip install "abstractcore[tokens]" # precise token counting (tiktoken)
pip install "abstractcore[server]" # OpenAI-compatible HTTP gateway
# Combine extras (zsh: keep quotes)
pip install "abstractcore[remote,media,tools]"
# Turnkey local-runtime installs
pip install "abstractcore[all-apple]" # Apple Silicon: remote SDKs + HF/GGUF + MLX + features + server
pip install "abstractcore[all-gpu]" # NVIDIA GPU: remote SDKs + HF/GGUF + vLLM + features + server
Quickstart
Local/offline example (requires a running Ollama server with the model already pulled: ollama pull qwen3:4b):
from abstractcore import create_llm
llm = create_llm("ollama", model="qwen3:4b")
response = llm.generate("Draft a privacy-preserving onboarding checklist.")
print(response.content)
Remote API example (requires pip install "abstractcore[openai]"):
from abstractcore import create_llm
llm = create_llm("openai", model="gpt-4o-mini")
response = llm.generate("What is the capital of France?")
print(response.content)
Conversation state (BasicSession)
from abstractcore import create_llm, BasicSession
session = BasicSession(create_llm("anthropic", model="claude-haiku-4-5"))
print(session.generate("Give me 3 bakery name ideas.").content)
print(session.generate("Pick the best one and explain why.").content)
Streaming
from abstractcore import create_llm
llm = create_llm("ollama", model="qwen3:4b")
for chunk in llm.generate("Write a short poem about distributed systems.", stream=True):
    print(chunk.content or "", end="", flush=True)
Async
import asyncio
from abstractcore import create_llm
async def main():
    llm = create_llm("openai", model="gpt-4o-mini")
    resp = await llm.agenerate("Give me 5 bullet points about HTTP caching.")
    print(resp.content)

asyncio.run(main())
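Because agenerate is an ordinary coroutine, independent prompts can be fanned out with asyncio.gather. A minimal sketch; the concurrency pattern is standard asyncio rather than an AbstractCore-specific API, and if your provider client is not safe to share across tasks, create one per task:

import asyncio
from abstractcore import create_llm

async def main():
    llm = create_llm("openai", model="gpt-4o-mini")
    prompts = [
        "One-line definition of HTTP caching.",
        "One-line definition of connection pooling.",
        "One-line definition of backpressure.",
    ]
    # Fan out independent requests concurrently; gather preserves
    # result order relative to the prompts.
    responses = await asyncio.gather(*(llm.agenerate(p) for p in prompts))
    for resp in responses:
        print(resp.content)

asyncio.run(main())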
Token budgets (unified)
from abstractcore import create_llm
llm = create_llm(
    "openai",
    model="gpt-4o-mini",
    max_tokens=8000,          # total budget (input + output)
    max_output_tokens=1200,   # output cap
)
Providers (common)
Open-source-first: local providers (Ollama, LM Studio, vLLM, openai-compatible, HuggingFace, MLX) are first-class. Cloud and gateway providers are optional.
- openai: OPENAI_API_KEY, optional OPENAI_BASE_URL
- anthropic: ANTHROPIC_API_KEY, optional ANTHROPIC_BASE_URL
- openrouter: OPENROUTER_API_KEY, optional OPENROUTER_BASE_URL (default: https://openrouter.ai/api/v1)
- portkey: PORTKEY_API_KEY, PORTKEY_CONFIG (config id), optional PORTKEY_BASE_URL (default: https://api.portkey.ai/v1)
- ollama: local server at OLLAMA_BASE_URL (or legacy OLLAMA_HOST)
- lmstudio: OpenAI-compatible local server at LMSTUDIO_BASE_URL (default: http://localhost:1234/v1)
- vllm: OpenAI-compatible server at VLLM_BASE_URL (default: http://localhost:8000/v1)
- openai-compatible: generic OpenAI-compatible endpoints via OPENAI_COMPATIBLE_BASE_URL (default: http://localhost:1234/v1)
- huggingface: local models via Transformers (optional HUGGINGFACE_TOKEN for gated downloads)
- mlx: Apple Silicon local models (optional HUGGINGFACE_TOKEN for gated downloads)
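For a self-hosted OpenAI-compatible endpoint, you can point the generic provider at your server directly instead of relying on the environment variable. A sketch assuming create_llm accepts a base_url keyword (the chat CLI exposes --base-url, so this mirrors that; check the provider docs for the exact parameter):

from abstractcore import create_llm

# Target a local /v1 endpoint explicitly rather than via
# OPENAI_COMPATIBLE_BASE_URL (base_url keyword assumed).
llm = create_llm(
    "openai-compatible",
    model="qwen3-4b",
    base_url="http://localhost:1234/v1",
)
print(llm.generate("Reply with the single word: ready").content)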
You can also persist settings (including API keys) via the config CLI:
abstractcore --status
abstractcore --configure          (alias: --config)
abstractcore --set-api-key openai sk-...
abstractcore --set-server-api-key acore-server-secret
What’s inside (quick tour)
- Tools: universal tool calling across providers → Tool Calling
- Built-in tools (optional): web + filesystem helpers (skim_websearch, skim_url, fetch_url, read_file, …) → Tool Calling
- Tool syntax rewriting: tool_call_tags (Python) and agent_format (server) → Tool Syntax Rewriting
- Structured output: Pydantic-first with provider-aware strategies → Structured Output
- Media input: images/audio/video + documents (policies + fallbacks) → Media Handling and Vision Capabilities
- Capability plugins (optional): deterministic llm.voice / llm.audio / llm.vision surfaces → Capabilities
- Glyph visual-text compression: scale long-context document analysis via VLMs → Glyph Visual-Text Compression
- Embeddings and semantic search → Embeddings
- Observability: global event bus + interaction traces → Architecture, API Reference (Events), Interaction Tracing
- MCP (Model Context Protocol): discover tools from MCP servers (HTTP/stdio) → MCP
- OpenAI-compatible server: one /v1 gateway for chat + optional /v1/images/* and /v1/audio/* endpoints → Server
Tool calling (passthrough by default)
By default (execute_tools=False), AbstractCore:
- returns clean assistant text in response.content
- returns structured tool calls in response.tool_calls (host/runtime executes them)
from abstractcore import create_llm, tool
@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"
llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("What's the weather in Paris? Use the tool.", tools=[get_weather])
print(resp.content)
print(resp.tool_calls)
If you need tool-call markup preserved/re-written in content for downstream parsers, pass
tool_call_tags=... (e.g. "qwen3", "llama3", "xml"). See Tool Syntax Rewriting.
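In pass-through mode, executing the returned calls is the host's job. A minimal sketch of that loop; the name/arguments fields on each tool call are assumptions about the response shape (see Tool Calling for the exact schema), and a real runtime would add policy checks, sandboxing, and retries:

from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"

# Map tool names to callables so the host decides what may run
# (assumes the @tool wrapper stays directly callable).
registry = {"get_weather": get_weather}

llm = create_llm("openai", model="gpt-4o-mini")
resp = llm.generate("What's the weather in Paris? Use the tool.", tools=[get_weather])

for call in resp.tool_calls or []:
    fn = registry.get(call.name)       # field name assumed
    if fn is None:
        continue                       # unknown tool: refuse by default
    result = fn(**call.arguments)      # arguments assumed to be a dict
    print(f"{call.name} -> {result}")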
Structured output
from pydantic import BaseModel
from abstractcore import create_llm
class Answer(BaseModel):
    title: str
    bullets: list[str]
llm = create_llm("openai", model="gpt-4o-mini")
answer = llm.generate("Summarize HTTP/3 in 3 bullets.", response_model=Answer)
print(answer.bullets)
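Nested models follow the same pattern. A sketch with a local model, where the provider-aware strategy (including the schema-violation retries described in the Structured Output docs) does the enforcement:

from pydantic import BaseModel
from abstractcore import create_llm

class Step(BaseModel):
    action: str
    rationale: str

class Plan(BaseModel):
    goal: str
    steps: list[Step]

llm = create_llm("ollama", model="qwen3:4b")
# response_model validates the output against the nested schema.
plan = llm.generate("Plan a 3-step database migration.", response_model=Plan)
for step in plan.steps:
    print(f"- {step.action}: {step.rationale}")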
Media input (images/audio/video)
Requires pip install "abstractcore[media]".
from abstractcore import create_llm
llm = create_llm("anthropic", model="claude-haiku-4-5")
resp = llm.generate("Describe the image.", media=["./image.png"])
print(resp.content)
Notes:
- Images: use a vision-capable model, or configure vision fallback for text-only models (abstractcore --config; abstractcore --set-vision-provider PROVIDER MODEL).
- Video: video_policy="auto" (default) uses native video when supported; otherwise it samples frames (requires ffmpeg/ffprobe) and routes them through image/vision handling, so you still need a vision-capable model or vision fallback configured.
- Audio: use an audio-capable model, or set audio_policy="auto"/"speech_to_text" and install abstractvoice for speech-to-text.
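Assuming these policies are also accepted as per-call keyword arguments (an assumption; the centralized config below is the documented path, see Media Handling), a mixed media request might look like:

from abstractcore import create_llm

llm = create_llm("lmstudio", model="qwen/qwen3-vl-4b")
resp = llm.generate(
    "Summarize the report and describe what happens in the clip.",
    media=["./report.pdf", "./clip.mp4"],
    video_policy="auto",  # assumed per-call keyword: native video or frame sampling
)
print(resp.content)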
Configure defaults (optional):
abstractcore --status
abstractcore --set-vision-provider lmstudio qwen/qwen3-vl-4b
abstractcore --set-audio-strategy auto
abstractcore --set-video-strategy auto
See Media Handling and Vision Capabilities.
HTTP server (OpenAI-compatible gateway)
pip install "abstractcore[server]"
python -m abstractcore.server.app
Use any OpenAI-compatible client, and route to any provider/model via model="provider/model":
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
resp = client.chat.completions.create(
    model="ollama/qwen3:4b",
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(resp.choices[0].message.content)
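Streaming through the gateway uses the standard OpenAI client API. A sketch assuming the server honors stream=True for chat completions (likely, given its OpenAI compatibility, but verify in the Server docs):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
stream = client.chat.completions.create(
    model="ollama/qwen3:4b",
    messages=[{"role": "user", "content": "Stream a haiku about gateways."}],
    stream=True,
)
# Print deltas as they arrive, exactly as with the hosted OpenAI API.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()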
See Server.
Single-model /v1 endpoint (one provider/model per worker): see Endpoint (abstractcore-endpoint).
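A hypothetical invocation, with flag names assumed to mirror abstractcore-chat (check the Endpoint docs for the real interface):

# Hypothetical flags; --provider/--model mirror abstractcore-chat, --port is assumed
abstractcore-endpoint --provider ollama --model qwen3:4b --port 8001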
CLI (optional)
Interactive chat:
abstractcore-chat --provider openai --model gpt-4o-mini
abstractcore-chat --provider lmstudio --model qwen/qwen3-4b-2507 --base-url http://localhost:1234/v1
abstractcore-chat --provider openrouter --model openai/gpt-4o-mini
Token limits:
- startup: abstractcore-chat --max-tokens 8192 --max-output-tokens 1024 ...
- in-REPL: /max-tokens 8192 and /max-output-tokens 1024
Built-in CLI apps
AbstractCore also ships with ready-to-use CLI apps:
summarizer, extractor, judge, intent, deepsearch (see docs/apps/)
Documentation map
Start here:
- Docs Index — navigation for all docs
- Prerequisites — provider setup (keys, local servers, hardware notes)
- Getting Started — first call + core concepts
- FAQ — common questions and setup gotchas
- Examples — end-to-end patterns and recipes
- Framework Comparison — where AbstractCore and AbstractFramework fit next to LiteLLM, LangChain, LangGraph, and LlamaIndex
- Troubleshooting — common failures and fixes
Core features:
- Tool Calling — universal tools across providers (native + prompted)
- Tool Syntax Rewriting — rewrite tool-call syntax for different runtimes/clients
- Structured Output — schema enforcement + retry strategies
- Media Handling — images/audio/video + documents (policies + fallbacks)
- Vision Capabilities — image/video input, vision fallback, and how this differs from generative vision
- Glyph Visual-Text Compression — compress long documents into images for VLMs
- Generation Parameters — unified parameter vocabulary and provider quirks
- Session Management — conversation history, persistence, and compaction
- Embeddings — embeddings API and RAG building blocks
- Async Guide — async patterns, concurrency, best practices
- Centralized Config — ~/.abstractcore/config/abstractcore.json + CLI config commands
- Capabilities — supported features and current limitations
- Interaction Tracing — inspect prompts/responses/usage for observability
- MCP — consume MCP tool servers (HTTP/stdio) as tool sources
Reference and internals:
- Architecture — system overview + event system
- API (Python) — how to use the public API
- API Reference — Python API (including events)
- Server — OpenAI-compatible gateway with tool/media support
- CLI Guide — interactive abstractcore-chat walkthrough
Project:
- Changelog — version history and upgrade notes
- Contributing — dev setup and contribution guidelines
- Security — responsible vulnerability reporting
- Acknowledgements — upstream projects and communities
License
MIT