
CaveAgent is a tool-augmented agent framework that enables function-calling through LLM code generation and provides runtime state management. Unlike traditional JSON-schema approaches, it natively handles complex Python objects like DataFrames and ndarrays within a persistent runtime, enabling lossless data flow across multi-turn interactions.


CaveAgent: Transforming LLMs into Stateful Runtime Operators

arXiv Paper License: MIT Python 3.11+ PyPI version

"From text-in-text-out to (text&object)-in-(text&object)-out"


Most LLM agents operate under a text-in-text-out paradigm, with tool interactions constrained to JSON primitives. CaveAgent breaks this with Stateful Runtime Management—a persistent Python runtime with direct variable injection and retrieval:

  • Inject any Python object into the runtime—DataFrames, models, database connections, custom class instances—as first-class variables the LLM can manipulate
  • Persist state across turns without serialization; objects live in the runtime, not in the context window
  • Retrieve manipulated objects back as native Python types for downstream use
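The inject → persist → retrieve loop above can be sketched with a plain Python namespace. This is a conceptual illustration of the mechanism, not CaveAgent's actual implementation:

```python
class MiniRuntime:
    """Conceptual sketch: objects live in a namespace dict, LLM-generated
    code runs against it, and results come back as native Python objects."""

    def __init__(self):
        self.namespace = {}

    def inject(self, name, obj):
        # Any Python object becomes a first-class variable
        self.namespace[name] = obj

    def execute(self, code):
        # Stand-in for running LLM-generated code in the persistent runtime
        exec(code, self.namespace)

    def retrieve(self, name):
        # Objects come back as-is: no JSON round-trip, no serialization
        return self.namespace[name]

runtime = MiniRuntime()
runtime.inject("tasks", [])
runtime.execute("tasks.append({'name': 'buy groceries', 'done': False})")
print(runtime.retrieve("tasks"))  # [{'name': 'buy groceries', 'done': False}]
```

Because state lives in the namespace rather than in message text, a second `execute` call in a later turn sees the same `tasks` list with no re-serialization.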


Quick Start

pip install 'cave-agent[all]'

Choose your installation:

# OpenAI support
pip install 'cave-agent[openai]'

# 100+ LLM providers via LiteLLM 
pip install 'cave-agent[litellm]'

Examples

Function Calling

# Inject functions and variables into runtime
runtime = PythonRuntime(
    variables=[Variable("tasks", [], "User's task list")],
    functions=[Function(add_task), Function(complete_task)],
)
agent = CaveAgent(model, runtime=runtime)

await agent.run("Add 'buy groceries' to my tasks")
print(runtime.retrieve("tasks"))  # [{'name': 'buy groceries', 'done': False}]

See examples/basic_usage.py for a complete example.

Stateful Object Interactions

# Inject objects with methods - LLM can call them directly
runtime = PythonRuntime(
    types=[Type(Light), Type(Thermostat)],
    variables=[
        Variable("light", Light("Living Room"), "Smart light"),
        Variable("thermostat", Thermostat(), "Home thermostat"),
    ],
)
agent = CaveAgent(model, runtime=runtime)

await agent.run("Dim the light to 20% and set thermostat to 22°C")
light = runtime.retrieve("light")  # Object with updated state

See examples/object_methods.py for a complete example.

Multi-Agent Coordination

# Sub-agents with their own runtimes
cleaner_agent = CaveAgent(model, runtime=PythonRuntime(variables=[
    Variable("data", [], "Input"), Variable("cleaned_data", [], "Output"),
]))

analyzer_agent = CaveAgent(model, runtime=PythonRuntime(variables=[
    Variable("data", [], "Input"), Variable("insights", {}, "Output"),
]))

# Orchestrator controls sub-agents as first-class objects
orchestrator = CaveAgent(model, runtime=PythonRuntime(variables=[
    Variable("raw_data", raw_data, "Raw dataset"),
    Variable("cleaner", cleaner_agent, "Cleaner agent"),
    Variable("analyzer", analyzer_agent, "Analyzer agent"),
]))

# Inject → trigger → retrieve
await orchestrator.run("Clean raw_data using cleaner, then analyze using analyzer")
insights = analyzer_agent.runtime.retrieve("insights")

See examples/multi_agent.py for a complete example.

Real-time Streaming

async for event in agent.stream_events("Analyze this data"):
    if event.type.value == 'code':
        print(f"Executing: {event.content}")
    elif event.type.value == 'execution_output':
        print(f"Result: {event.content}")

See examples/stream.py for a complete example.

Security Rules

# Block dangerous operations with AST-based validation
rules = [
    ImportRule({"os", "subprocess", "sys"}),
    FunctionRule({"eval", "exec", "open"}),
    AttributeRule({"__globals__", "__builtins__"}),
]
runtime = PythonRuntime(security_checker=SecurityChecker(rules))
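The spirit of AST-based validation can be shown with a minimal import check: parse the code, walk the tree, and reject blocked modules before anything executes. This sketch is ours; CaveAgent's actual rule classes may differ in detail:

```python
import ast

# Hypothetical block list mirroring the ImportRule example above
BLOCKED_IMPORTS = {"os", "subprocess", "sys"}

def check_imports(code: str) -> list[str]:
    """Return the blocked module names that `code` tries to import."""
    violations = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            names = [alias.name.split(".")[0] for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [(node.module or "").split(".")[0]]
        else:
            continue
        violations += [name for name in names if name in BLOCKED_IMPORTS]
    return violations

print(check_imports("import os, json"))           # ['os']
print(check_imports("from pathlib import Path"))  # []
```

Checking the AST rather than the raw string means tricks like `import os  # harmless` or aliased imports are still caught, since the rule sees the parsed structure, not the text.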

More Examples

Agent Skills

CaveAgent implements the Agent Skills open standard—a portable format for packaging instructions that agents can discover and use. Originally developed by Anthropic and now supported across the AI ecosystem (Claude, Gemini CLI, Cursor, VS Code, and more), Skills enable agents to acquire domain expertise on-demand.

Creating a Skill

A Skill is a directory containing a SKILL.md file with YAML frontmatter:

my-skill/
├── SKILL.md           # Required: Skill definition and instructions
└── injection.py       # Optional: Functions/variables/types to inject (CaveAgent extension)

SKILL.md structure:

---
name: data-processor
description: Process and analyze datasets with statistical methods. Use when working with data analysis tasks.
---

# Data Processing Instructions

## Quick Start
Use the injected functions to analyze datasets...

## Workflows
1. Activate the skill to load injected functions
2. Apply statistical analysis using the provided functions
3. Return structured results

Required fields: name (max 64 chars, lowercase with hyphens) and description (max 1024 chars)

Optional fields: license, compatibility, metadata
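The field constraints above can be checked mechanically. A hedged sketch of such a validator, with the exact rules as we read them from this document (consult the Agent Skills standard for the authoritative definition):

```python
import re

# lowercase words separated by single hyphens, e.g. "data-processor"
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def validate_skill_fields(name: str, description: str) -> list[str]:
    """Return a list of violations of the SKILL.md frontmatter constraints."""
    errors = []
    if len(name) > 64 or not NAME_RE.fullmatch(name):
        errors.append("name: max 64 chars, lowercase with hyphens")
    if not description or len(description) > 1024:
        errors.append("description: required, max 1024 chars")
    return errors

print(validate_skill_fields("data-processor", "Process datasets."))  # []
```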

How Skills Load (Progressive Disclosure)

Skills use progressive disclosure to minimize context usage:

Level          When Loaded      Content
Metadata       At startup       name and description from YAML frontmatter (~100 tokens)
Instructions   When activated   SKILL.md body with guidance (loaded on-demand)
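The two levels can be sketched as a frontmatter/body split: only the metadata is parsed at startup, and the body string is not touched until activation. The hand-rolled parsing here is illustrative; a real loader would likely use a YAML library:

```python
SKILL_MD = """---
name: data-processor
description: Process and analyze datasets.
---

# Data Processing Instructions
Use the injected functions...
"""

def split_skill(text: str):
    """Separate cheap frontmatter metadata from the deferred instruction body."""
    _, frontmatter, body = text.split("---", 2)
    pairs = (line.split(":", 1) for line in frontmatter.strip().splitlines())
    metadata = {key.strip(): value.strip() for key, value in pairs}
    return metadata, body.strip()

metadata, body = split_skill(SKILL_MD)
print(metadata["name"])  # data-processor -- loaded at startup
# `body` is only consulted when the skill is activated
```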

Using Skills

from cave_agent import CaveAgent
from cave_agent.skills import SkillDiscovery

# Load skills from directory
agent = CaveAgent(model=model, skills_dir="./skills")

# Or load specific skills
skill = SkillDiscovery.from_file("./my-skill/SKILL.md")
agent = CaveAgent(model=model, skills=[skill])

When skills are loaded, the agent gains an activate_skill(skill_name) runtime function that loads a skill's instructions on demand.

Injection Module (CaveAgent Extension)

CaveAgent extends the Agent Skills standard with injection.py—allowing skills to inject functions, variables, and types directly into the runtime when activated:

from cave_agent.runtime import Function, Variable, Type
from dataclasses import dataclass

def analyze_data(data: list) -> dict:
    """Analyze data and return statistics."""
    return {"mean": sum(data) / len(data), "count": len(data)}

@dataclass
class AnalysisResult:
    mean: float
    count: int
    status: str

CONFIG = {"threshold": 0.5, "max_items": 1000}

__exports__ = [
    Function(analyze_data, description="Analyze data statistically"),
    Variable("CONFIG", value=CONFIG, description="Analysis configuration"),
    Type(AnalysisResult, description="Result structure"),
]

When activate_skill() is called, these exports are automatically injected into the runtime namespace.
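The injection step amounts to binding each export into the runtime namespace under its name. The wrapper classes below are minimal stand-ins for CaveAgent's Function/Variable/Type, shown only to illustrate the mechanism:

```python
# Stand-in wrappers: each knows the name to bind and the object to bind it to.
class Function:
    def __init__(self, fn, description=""):
        self.name, self.obj = fn.__name__, fn

class Variable:
    def __init__(self, name, value, description=""):
        self.name, self.obj = name, value

def inject_exports(namespace: dict, exports: list) -> None:
    """Bind every export into the runtime namespace under its name."""
    for export in exports:
        namespace[export.name] = export.obj

def analyze_data(data):
    return {"mean": sum(data) / len(data), "count": len(data)}

ns = {}
inject_exports(ns, [
    Function(analyze_data),
    Variable("CONFIG", {"threshold": 0.5}),
])
print(ns["analyze_data"]([1, 2, 3]))  # {'mean': 2.0, 'count': 3}
```

After injection, LLM-generated code running in that namespace can call `analyze_data` and read `CONFIG` as ordinary names.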

See examples/skill_data_processor.py for a complete example.

Features

  • Code-Based Function Calling: Leverages the LLM's natural coding abilities instead of rigid JSON schemas
  • Secure Runtime Environment:
    • Inject Python objects, variables, and functions as tools
    • Rule-based security validation prevents dangerous code execution
    • Flexible security rules: ImportRule, FunctionRule, AttributeRule, RegexRule
    • Customizable security policies for different use cases
    • Access execution results and maintain state across interactions
  • Agent Skills: Implements the open Agent Skills standard for modular, portable instruction packages. CaveAgent extends the standard with runtime injection (injection.py).
  • Multi-Agent Coordination: Control sub-agents programmatically through runtime injection and retrieval. Shared runtimes enable instant state synchronization.
  • Streaming & Async: Real-time event streaming and full async/await support for optimal performance
  • Execution Control: Configurable step limits and error handling to prevent infinite loops
  • Flexible LLM Support: Works with any LLM provider via OpenAI-compatible APIs or LiteLLM
  • Type Injection: Expose class schemas for type-aware LLM code generation

Awesome Blogs

We thank the community for these posts about our work.

Configuration

Parameter                     Type            Default    Description
model                         Model           required   LLM model instance (OpenAIServerModel or LiteLLMModel)
runtime                       PythonRuntime   None       Python runtime with variables, functions, and types
max_steps                     int             5          Maximum execution steps per run
max_history                   int             10         Maximum conversation history length
max_execution_result_length   int             3000       Max characters in execution output
skills                        List[Skill]     None       List of skill objects to load
skills_dir                    str | Path      None       Directory to discover and load skills from
agent_identity                str             default    Agent's role/identity description
instructions                  str             default    Task instructions for the agent
additional_context            str             default    Extra context information
system_prompt_template        str             default    Custom system prompt template
python_block_identifier       str             "python"   Code block language identifier
messages                      List[Message]   None       Initial message history
log_level                     LogLevel        DEBUG      Logging verbosity level

LLM Provider Support

CaveAgent supports multiple LLM providers:

OpenAI-Compatible Models

from cave_agent.models import OpenAIServerModel

model = OpenAIServerModel(
    model_id="gpt-4",
    api_key="your-api-key",
    base_url="https://api.openai.com/v1"  # or your custom endpoint
)

LiteLLM Models (Recommended)

LiteLLM provides unified access to hundreds of LLM providers:

from cave_agent.models import LiteLLMModel

# OpenAI
model = LiteLLMModel(
    model_id="gpt-4",
    api_key="your-api-key",
    custom_llm_provider='openai'
)

# Anthropic Claude
model = LiteLLMModel(
    model_id="claude-3-sonnet-20240229",
    api_key="your-api-key",
    custom_llm_provider='anthropic' 
)

# Google Gemini
model = LiteLLMModel(
    model_id="gemini/gemini-pro",
    api_key="your-api-key"
)

Contributing

Contributions are welcome! Please feel free to submit a PR. For more details, see CONTRIBUTING.md.

Citation

If you use CaveAgent in your research, please cite:

@article{ran2026caveagent,
  title={CaveAgent: Transforming LLMs into Stateful Runtime Operators},
  author={Ran, Maohao and Wan, Zhenglin and Lin, Cooper and Zhang, Yanting and others},
  journal={arXiv preprint arXiv:2601.01569},
  year={2026}
}

License

MIT License
