
Low-Level Language Model framework for building LLM agentic systems.

Project description

LLLM Logo

Low-Level Language Model (LLLM)

Lightweight framework for building complex agentic systems

Docs · Examples · PyPI · GitHub · License · Discord

LLLM is a lightweight framework for developing advanced agentic systems. It lets users build a complex agentic system in under 100 lines of code. Prioritizing minimalism, modularity, and reliability, it is particularly suited to complex and frontier agentic systems beyond everyday chat. These systems demand deep architectural customization and serve highly diverse needs, yet developers and researchers often shoulder the burden of low-level concerns such as exception handling, output parsing, and API error management. LLLM bridges this gap by offering abstractions that balance high-level encapsulation with the simplicity required for flexible experimentation. It also keeps the code plain, compact, and easy to understand, with minimal unnecessary indirection, so it is easy to customize for different projects' needs and lets researchers and developers focus on their core research questions. See https://lllm.one for detailed documentation.

Key design ideas: agentic system as a program (agents + prompts + tactics), dialog as each agent's internal mental state, configuration as declaration. See the Architecture Overview for the full design philosophy.

Installation

pip install lllm-core

Quick Start

No configuration needed. Set your API key and run:

pip install lllm-core
export OPENAI_API_KEY=sk-...   # or ANTHROPIC_API_KEY, etc.

from lllm import Tactic

# One-line chat
response = Tactic.quick("What is the capital of France?")
print(response.content)

# Get the agent and chat
response, agent = Tactic.quick("What is the capital of France?", return_agent=True)
print(response.content)
print(agent.name)

# Get the agent only
agent = Tactic.quick() # by default the system prompt is "You are a helpful assistant."
print(agent.name)

# Create an agent with a custom system prompt and chat turn by turn
agent = Tactic.quick(system_prompt="You are a helpful assistant.", model="gpt-4o")
agent.open("chat")
agent.receive("What is the capital of France?")
print(agent.respond().content)

# Chat with a custom system prompt 
response = Tactic.quick("What is the capital of France?", system_prompt="You are a helpful assistant.")
print(response.content)

# Chat with a custom system prompt and get the agent
response, agent = Tactic.quick("What is the capital of France?", system_prompt="You are a helpful assistant.", return_agent=True)
print(response.content)
print(agent.name)

That's it — no lllm.toml, no folder structure, no subclassing.

Supported providers (via LiteLLM):

  • OpenAI: model="gpt-4o" + OPENAI_API_KEY
  • Anthropic: model="claude-opus-4-6" + ANTHROPIC_API_KEY
  • Any other LiteLLM-supported provider

Growing your project

As your project grows, you can gradually introduce structure:

  1. Add a config file — copy lllm.toml.example to lllm.toml and point it at your prompt/proxy folders
  2. Move prompts to files — put .md files under a prompts/ folder; they auto-register via discovery
  3. Define agents in YAML — use AgentSpec configs for multi-agent tactics
  4. Subclass Tactic — implement call() to orchestrate multiple agents

See examples/ for concrete patterns at each stage.
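Step 3's YAML configs support base inheritance, which conceptually reduces to a recursive deep merge of a child config over its base (nested mappings merge; scalar values override). A minimal sketch of that idea, with hypothetical config shapes that are not LLLM's actual internals:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` onto `base`: nested dicts merge, other values replace."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# A base agent config and a child that only overrides what it needs.
base = {"model": "gpt-4o", "proxy": {"timeout": 60.0, "max_output_chars": 5000}}
child = {"name": "analyst", "proxy": {"timeout": 120.0}}
resolved = deep_merge(base, child)
# `timeout` is overridden; `model` and `max_output_chars` are inherited from base.
```

This is why a child spec can stay tiny: anything it does not mention falls through to its base.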

Examples

See examples/README.md for the full index. A quick map:

Standalone scripts — one API key, no extra setup:

  • basic_chat.py – Tactic.quick(), zero-config single-agent chat
  • multi_turn_chat.py – multi-turn history, dialog fork()
  • tool_use.py – @tool decorator, function calling, diagnostics
  • structured_output.py – Prompt(format=MyModel), Pydantic structured output

Advanced scripts (in examples/advanced/) — auto-detect provider from env:

  • multi_agent_tactic.py – custom Tactic subclass, two-agent pipeline
  • session_logging.py – SQLite LogStore, session querying
  • batch_processing.py – bcall() / ccall() concurrent execution
  • proxy_interpreter.py – proxy config, run_python tool, state-persistent AgentInterpreter

Full package example – examples/code_review_service/:

A self-contained LLLM package with lllm.toml, prompt files, tactic files, and YAML configs with inheritance — wrapped as a FastAPI HTTP service. See code_review_service/README.md for full documentation.

cd examples/code_review_service
export OPENAI_API_KEY=sk-...
python service.py --demo            # CLI demo, no web server
python service.py                   # FastAPI on :8080  (pip install fastapi uvicorn)
LLLM_CONFIG_PROFILE=pro python service.py --demo  # production config

Proxies & Tools

Built-in proxies (financial data, search, etc.) register automatically when their modules are imported. If you plan to call Proxy() directly, either:

  1. Set up an lllm.toml with a [proxies] section so discovery imports your proxy folders on startup, or
  2. Call load_builtin_proxies() to import the packaged modules, or manually import the proxies you care about (e.g., from lllm.proxies.builtin import exa_proxy).

This mirrors how prompts are auto-registered via [prompts] in lllm.toml.

Once proxies are loaded, you can check what is available by calling Proxy().available().
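The "register automatically when their modules are imported" pattern is a module-level registry populated by import-time side effects. A conceptual sketch (this is not LLLM's actual registry code; the names here are illustrative):

```python
# A module-level registry: importing a proxy module runs its registration side effect.
PROXY_REGISTRY: dict = {}

def register_proxy(name: str):
    """Decorator that records a proxy class under `name` at import time."""
    def wrap(cls):
        PROXY_REGISTRY[name] = cls
        return cls
    return wrap

# In a proxy module (this body executes the moment the module is imported):
@register_proxy("sample")
class SampleProxy:
    def call(self, endpoint: str, params: dict) -> dict:
        return {"endpoint": endpoint, "params": params}

def available() -> list:
    """Analogue of Proxy().available(): whatever has been imported so far."""
    return sorted(PROXY_REGISTRY)
```

This is also why both loading options above work: discovery and load_builtin_proxies() are just two ways of ensuring the proxy modules get imported.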

Agent-level proxy tool injection — add a proxy: block to an agent's config and LLLM automatically injects run_python and query_api_doc tools plus an API directory block into the system prompt:

agent_configs:
  - name: analyst
    proxy:
      activate_proxies: [fmp]
      exec_env: interpreter   # "interpreter" (default) | "jupyter" | null
      max_output_chars: 5000
      timeout: 60.0

The agent then calls run_python(code) with Python that uses CALL_API(endpoint, params). Variables persist across calls within the same session. See advanced/proxy_interpreter.py for a runnable example and Proxies & Sandbox for the full reference.
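Conceptually, run_python amounts to exec-ing the model's code in a per-session namespace with CALL_API pre-bound, which is what makes variables persist between calls. A toy sketch of that behavior (not the actual AgentInterpreter; the class and the stubbed CALL_API are illustrative):

```python
class ToyInterpreter:
    """State-persistent exec sandbox: variables survive across run() calls."""
    def __init__(self, call_api):
        # CALL_API is injected into the namespace so generated code can use it.
        self.namespace = {"CALL_API": call_api}

    def run(self, code: str):
        exec(code, self.namespace)  # illustration only; a real sandbox needs isolation

# Stub standing in for a real proxy endpoint call.
fake_api = lambda endpoint, params: {"price": 42.0}

interp = ToyInterpreter(fake_api)
interp.run("quote = CALL_API('quote', {'symbol': 'AAPL'})")
interp.run("price = quote['price'] * 2")  # `quote` persisted from the previous call
```

Because the same namespace dict is reused, each run_python call sees everything earlier calls defined.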

Auto-Discovery Config

A starter lllm.toml.example lives in the repo root. Copy it next to your project entry point and edit the folder paths:

cp lllm.toml.example lllm.toml

The sample configuration points to examples/autodiscovery/prompts/ and examples/autodiscovery/proxies/, giving you a working prompt (examples/hello_world) and proxy (examples/sample) to experiment with immediately.
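Prompt discovery boils down to walking the configured folder and registering each .md file under a name derived from its relative path, which is how examples/hello_world.md becomes the examples/hello_world prompt. A self-contained sketch of that idea (the naming scheme is assumed, not LLLM's exact one):

```python
import pathlib
import tempfile

def discover_prompts(root: pathlib.Path) -> dict:
    """Map 'sub/name'-style keys to prompt text for every .md file under root."""
    registry = {}
    for path in sorted(root.rglob("*.md")):
        key = path.relative_to(root).with_suffix("").as_posix()
        registry[key] = path.read_text()
    return registry

# Demo against a throwaway prompts folder.
with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "examples").mkdir()
    (root / "examples" / "hello_world.md").write_text("You are a helpful assistant.")
    prompts = discover_prompts(root)
```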

Testing

Run tests with pytest:

pytest tests/

Experimental Features

  • Computer Use Agent (CUA) – lllm.tools.cua offers browser automation via Playwright and the OpenAI Computer Use API. It is still evolving and may change without notice.
  • Responses API Routing – opt into OpenAI’s Responses API by setting api_type = "response" per agent. This enables native web search/computer-use tools but currently targets OpenAI only.
  • Skills – for defining more complex base agents.

Roadmap

v0.1.0 Refactoring

  • Refactor providers system: LiteLLM invoker (invokers/)
  • Refactor registry to runtime (runtime.py), and discovery system (discovery.py)
  • Refactor prompt model and prompt management (prompt.py)
    • Prompt composition and inheritance
    • More graceful tool (link_function)
    • Cleaning up ad-hoc designs
    • Better parsing system, more intuitive argument passing
    • Better handling of errors, exceptions, and interrupts
  • Refactor message and dialog model/state management, better arg passing (dialog.py)
  • Refactor agent model, agent call (agent.py)
  • Refactor tactics (tactic.py)
  • Refactor config and package system (config.py, lllm.toml, etc.)
    • Package system with lllm.toml — namespaced resource URLs (pkg.section:resource)
    • Dependency tree with recursive loading and cycle detection
    • Alias support (as for packages, under for virtual folder prefixes)
    • Unified ResourceNode-based registry with lazy loading
    • Named runtimes (load_runtime, get_runtime) for parallel experiments
    • Auto-initialization from project root lllm.toml
    • Agent config YAML: global defaults, agent_configs list, base inheritance with deep merge
    • AgentSpec with inline system_prompt or system_prompt_path resolution
    • resolve_config() for recursive config inheritance
    • Convenience loaders: load_prompt, load_tactic, load_proxy, load_config, load_resource
  • Logger (cli logging), replayable logging system, and printing system (log.py, utils.py)
    • LogStore with pluggable backends (LocalFileBackend, SQLiteBackend, NoOpBackend)
    • Tag-based indexing and filtering, cost aggregation, export helpers
    • Stable pkg::name tactic identity independent of file layout and aliases
    • ColoredFormatter and setup_logging for terminal output
    • Convenience factories: local_store, sqlite_store, noop_store
  • Fast mode: build a simple system in five lines of code with no configuration.
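The tag-indexed logging described above (filter records by tag, aggregate cost per tag) can be modeled in miniature; this is an in-memory illustration of the concept, not the LogStore API:

```python
from collections import defaultdict

class ToyLogStore:
    """Minimal in-memory analogue of a tag-indexed log store with cost aggregation."""
    def __init__(self):
        self.records = []

    def log(self, tags: set, cost: float, payload: str):
        self.records.append({"tags": tags, "cost": cost, "payload": payload})

    def filter(self, tag: str) -> list:
        """All records carrying the given tag."""
        return [r for r in self.records if tag in r["tags"]]

    def cost_by_tag(self) -> dict:
        """Total cost attributed to each tag."""
        totals = defaultdict(float)
        for record in self.records:
            for tag in record["tags"]:
                totals[tag] += record["cost"]
        return dict(totals)

store = ToyLogStore()
store.log({"agent:analyst", "session:1"}, 0.002, "...")
store.log({"agent:reviewer", "session:1"}, 0.005, "...")
```

Pluggable backends (local file, SQLite, no-op) would then differ only in where these records are persisted.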

V0.1.1

  • Proxy-based tool-calling, mini in-dialog interpreter (proxies/)
  • Default context manager for pruning over-size dialogs
  • Support skills in agent config, see https://agentskills.io
  • Package sharing and management system for tactics, prompts, proxies, configs, etc.

TODOs

  • Add-on: analysis tools built on the logging system, e.g., cost analysis and dialog analysis. Essentially a GUI for the logging DB, exported as an app with default dashboards using a framework such as Streamlit, Dash, or Panel.

Future Roadmap

  • Add-on: advanced sandboxes, e.g., a browser sandbox or code sandbox (sandbox/), possibly built on existing sandbox wheels such as OpenSandbox. Alternatively, leave this out of the core package, since it can work externally.
  • Maybe use Mintlify for documentation
  • Gradient mode for tuning/training

Project details


Download files

Download the file for your platform.

Source Distribution

lllm_core-0.1.1.tar.gz (260.5 kB)

Uploaded Source

Built Distribution


lllm_core-0.1.1-py3-none-any.whl (270.8 kB)

Uploaded Python 3

File details

Details for the file lllm_core-0.1.1.tar.gz.

File metadata

  • Download URL: lllm_core-0.1.1.tar.gz
  • Upload date:
  • Size: 260.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for lllm_core-0.1.1.tar.gz
Algorithm Hash digest
SHA256 ee5d6e6709cb3d167d55cc4aeb210a6ca75a7ff1794f8cb304ebc2ef4430c2ed
MD5 a0b21b72456487a16a54e5ed5bd0f88b
BLAKE2b-256 bfe6fb8c249e33fc26688f6f816b3e9db0f553897a0cc4c896f63792d0161da7


File details

Details for the file lllm_core-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: lllm_core-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 270.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Hashes for lllm_core-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 6f1f700a85f0ea8547f01535ff403d00a66b7f74d5e81e52d76428769decc6de
MD5 dbff0c761baa05fba259a083a405ea06
BLAKE2b-256 6b7f8ce0bfc4716c76e1b797409fb47af5835b97ffb8eade2b00bb822b8e85b2

