SlowBurn 🐢🔥 - Cost-Sustainable Concurrent Execution for Long-Horizon LLM Agents

Authors: Abhishek Divekar

PyPI version Python 3.10+ License: MIT


Watch Demo Video



Overview

Long-horizon LLM agents (autonomous coding assistants, deep research pipelines, multi-agent simulations) issue dozens to hundreds of API calls per task. Existing tools either passively monitor spending or hard-terminate the agent when a budget cap is reached, discarding accumulated context.

SlowBurn takes a different approach: when the budget is exhausted, the agent pauses rather than crashes. Budget exhaustion becomes a flow-control signal (backpressure), not a fatal error. The agent sleeps until the rate-limit window refills, then resumes exactly where it left off with no context loss.
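The core idea can be sketched in a few lines of plain Python, independent of SlowBurn's actual internals (the class and method names below are illustrative, not SlowBurn's API): a sliding-window dollar budget whose spend method blocks until enough of the window has expired, rather than raising an error.

```python
import time
from collections import deque


class BlockingCostLimit:
    """Illustrative sliding-window dollar budget: spend() blocks instead of raising."""

    def __init__(self, budget_usd: float, window_s: float):
        self.budget_usd = budget_usd
        self.window_s = window_s
        self.events = deque()  # (timestamp, cost) pairs still inside the window

    def _used(self, now: float) -> float:
        # Expire spend that has aged out of the window, then sum the rest.
        while self.events and now - self.events[0][0] >= self.window_s:
            self.events.popleft()
        return sum(cost for _, cost in self.events)

    def spend(self, cost_usd: float) -> None:
        # Backpressure: wait for the window to refill instead of terminating.
        while self._used(time.monotonic()) + cost_usd > self.budget_usd:
            time.sleep(0.01)
        self.events.append((time.monotonic(), cost_usd))
```

A call that would exceed the budget simply waits; once older spend ages out of the window, it proceeds, which is exactly the pause-and-resume behavior described above.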

What SlowBurn provides:

  • CostLimit: a dollar-denominated rate limit that composes with token and request rate limits, and blocks rather than terminates when exhausted
  • SlowBurnLLM: an asyncio LLM worker with automatic per-call cost tracking, multi-turn conversations, tool calling, and 100+ models via litellm (text and vision)
  • Framework integrations: drop-in hooks for CrewAI, AutoGen (AG2), LangGraph, and LangChain that share a unified budget
  • CostReporter: per-call, per-model cost attribution with JSON, Markdown, and LaTeX export
  • Global config: all defaults centralized in slowburn_config, overridable at runtime via temp_config()

Quick Start

Create a cost-controlled LLM worker with a daily dollar budget, make calls, and inspect the cost report:

from slowburn import create_llm

# Create a cost-controlled LLM worker: $5 daily budget, asyncio execution
llm = create_llm(model="gpt-4o-mini", budget_usd=5.0, window="daily")

# Make LLM calls (concurrent on the asyncio event loop)
result = llm.call_llm(prompt="Summarize this paper...").result()

# Check costs
reporter = llm.get_reporter().result()
print(f"Cost: ${reporter.total_cost():.4f}")
print(reporter.to_markdown())

llm.stop()

Vision-Language Agents

Pass local files, URLs, or data-URLs as images for multimodal (VLM) calls:

from pathlib import Path

result = llm.call_llm(
    prompt="Describe this image in detail.",
    images=[Path("photo.jpg")],       # local files, URLs, or data-URLs
    image_detail="high",
).result()

Batch calls (concurrent)

Send multiple prompts in one call; they execute concurrently on the asyncio event loop under the same budget:

results = llm.call_llm_batch(
    prompts=["Capital of France?", "Capital of Japan?", "Capital of Brazil?"],
).result()
# All 3 execute concurrently on the event loop

Multi-turn conversations

Pass history= to maintain conversation state across turns. When history is provided, call_llm returns the full messages list (with the assistant response appended) instead of a plain string. The messages list is the conversation state; you control it, and pass it back on the next call.

In a loop (the common pattern):

llm = create_llm(model="gpt-4o-mini", budget_usd=1.0)

tasks = [
    "My name is Zephyr. I'm researching fusion energy.",
    "What are the main approaches to achieving net energy gain?",
    "Which approach is closest to commercialization?",
]

messages = []  # empty list enables multi-turn mode from the first call
for task in tasks:
    messages = llm.call_llm(
        task,
        system_prompt="You are a helpful research assistant.",
        history=messages,
    ).result()
    print(f"User:      {task}")
    print(f"Assistant: {messages[-1]['content']}\n")

llm.stop()

system_prompt is only prepended on the first call (when history has no system message yet). On subsequent calls it's a no-op, so passing it every time is safe.

With build_messages (for processing inputs before the LLM call):

build_messages constructs the messages list without calling the LLM. Pass its output directly to call_llm via prompt= (when prompt is a list of dicts, call_llm sends it as-is and returns a messages list):

messages = []
for task in tasks:
    # Build the messages list (sync, no LLM call)
    input_messages = llm.build_messages(
        prompt=task,
        system_prompt="You are a helpful assistant.",
        history=messages,
    ).result()

    # Log/inspect before sending
    print(f"Sending {len(input_messages)} messages, last 3:")
    for message in input_messages[-3:]:
        role = message["role"]
        content = str(message.get("content", ""))[:80]
        print(f"  {role}: {content}")
    save_to_disk(input_messages)

    # Send the pre-built messages to the LLM (no re-building)
    messages = llm.call_llm(prompt=input_messages).result()

Return-type auto-detection: if history= is provided, or prompt is a list of message dicts, call_llm returns a messages list; a plain string prompt with no history returns a plain string (backward compatible). Override explicitly with return_messages=True or return_messages=False.
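The documented dispatch rule can be mirrored in a small standalone predicate (illustrative only; this is not SlowBurn's source):

```python
def should_return_messages(prompt, history=None, return_messages=None) -> bool:
    """Mirror of the documented return-type auto-detection rule."""
    if return_messages is not None:
        return return_messages          # explicit override wins
    if history is not None:
        return True                     # history= provided (even []) -> messages list
    if isinstance(prompt, list):
        return True                     # pre-built message dicts -> messages list
    return False                        # plain string, no history -> plain string
```

Note that an empty history list still triggers messages-list mode, which is why the multi-turn examples above start with `messages = []`.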

Tool calling (ReAct agents)

create_llm accepts tools and tool_choice as first-class parameters. Combined with history=, this enables the standard tool-calling loop. The inner while loop handles tool execution; the outer loop drives multiple tasks:

llm = create_llm(
    model="gpt-4o-mini",
    budget_usd=1.0,
    tools=[{
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web for information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }],
    tool_choice="auto",
)

tasks = ["Population of Tokyo?", "GDP of Germany?"]
messages = []

for task in tasks:
    # Send the user's task
    messages = llm.call_llm(
        prompt=task,
        system_prompt="Use tools to find real data.",
        history=messages,
    ).result()

    # Tool-calling loop: execute tools until the LLM produces a text response
    while messages[-1].get("tool_calls"):
        for tc in messages[-1]["tool_calls"]:
            result = my_tool_executor(tc["function"]["name"], tc["function"]["arguments"])
            messages.append({
                "role": "tool",
                "tool_call_id": tc["id"],
                "content": result,
            })
        # Re-submit with tool results (empty prompt = no new user message)
        messages = llm.call_llm(prompt="", history=messages).result()

    print(f"Q: {task}")
    print(f"A: {messages[-1]['content']}\n")

llm.stop()

Structured output with validators

Attach a validator function to parse and type-check the response; ValueError triggers an automatic retry:

import re

def extract_number(text: str) -> int:
    match = re.search(r"\d+", text)
    if match is None:
        raise ValueError(f"No number found: {text!r}")  # triggers retry
    return int(match.group())

answer = llm.call_llm(
    prompt="What is 17 * 3? Reply with just the number.",
    validator=extract_number,    # retries automatically on ValueError
).result()
# answer = 51 (int, not str)
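The retry mechanic can be sketched as a generic wrapper (illustrative; the function name and max_retries default here are assumptions, not SlowBurn's API):

```python
def call_with_validator(call, validator, max_retries: int = 2):
    """Illustrative retry loop: re-invoke `call` while the validator raises ValueError."""
    for attempt in range(max_retries + 1):
        raw = call()
        try:
            return validator(raw)  # success: return the parsed, typed value
        except ValueError:
            if attempt == max_retries:
                raise  # out of retries: surface the validation error
```

Only ValueError triggers a retry; any other exception propagates immediately, so validators can distinguish "bad LLM output, try again" from genuine bugs.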

Global configuration

Override defaults (temperature, budget, timeouts) for a specific run using a context manager that restores on exit:

from slowburn import slowburn_config, temp_config

# Inspect defaults
print(slowburn_config.defaults.temperature)    # 0.7
print(slowburn_config.defaults.budget_usd)     # inf

# Override for a specific run (restores on exit)
with temp_config(temperature=0.0, budget_usd=0.10):
    llm = create_llm(model="gpt-4o-mini")
    # temperature=0.0, budget_usd=$0.10

Framework Integrations

SlowBurn provides drop-in hooks that add backpressure-based budget enforcement to existing agent frameworks. Each hook intercepts LLM calls at the framework's extension point and routes them through a shared limit set.
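The shared-limit pattern can be sketched with plain Python (illustrative class and function names; this is not SlowBurn's integration code): every hook, regardless of framework, debits the same budget object before forwarding the call.

```python
class SharedBudget:
    """Illustrative shared limit: every framework hook debits the same pool."""

    def __init__(self, budget_usd: float):
        self.remaining_usd = budget_usd

    def debit(self, cost_usd: float) -> bool:
        if cost_usd > self.remaining_usd:
            return False  # caller should pause (backpressure), not crash
        self.remaining_usd -= cost_usd
        return True


budget = SharedBudget(5.0)

# Two hypothetical hooks for different frameworks, sharing one pool:
def crewai_hook(cost_usd: float) -> bool:
    return budget.debit(cost_usd)

def langchain_hook(cost_usd: float) -> bool:
    return budget.debit(cost_usd)
```

Because both hooks hold a reference to the same object, spending in one framework reduces the budget visible to the other, which is what "unified budget" means in the feature list above.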

AutoGen (AG2)

from slowburn.integrations.autogen import SlowBurnModelClient

assistant.register_model_client(
    model_client_cls=SlowBurnModelClient,
    limit_set=limit_set,
    reporter=reporter,
)

CrewAI

from slowburn.integrations.crewai import SlowBurnCrewAI

sb = SlowBurnCrewAI(budget_usd=5.0, max_tokens=1000)
sb.install()
crew.kickoff()
print(sb.reporter.to_markdown())

LangGraph

from slowburn.integrations.langgraph import SlowBurnMiddleware

budget = SlowBurnMiddleware(budget_usd=5.0)
agent = create_agent(model="openai:gpt-4o-mini", middleware=[budget])

LangChain

from slowburn.integrations.langchain import SlowBurnCallbackHandler

handler = SlowBurnCallbackHandler(budget_usd=5.0)
llm = ChatOpenAI(model="gpt-4o-mini", callbacks=[handler])

Case Study: Autonomous Code Improvement Agent

We deployed a ReAct agent that reads Python code, searches the web for best practices, writes improved code, and iterates three times; every LLM call is routed through SlowBurn under a $0.02-per-30-second budget window.

Iteration            Calls   Input Tokens   Output Tokens   Cost
1: Best practices        9            25K              3K   $0.02
2: Type hints           15            68K              9K   $0.04
3: Edge cases           15            62K              7K   $0.03
Total                   39           155K             19K   $0.09

Between iterations, backpressure paused the agent for ~18 seconds until the budget window refilled. Execution resumed with no loss of context.

Comparison with Alternatives

Feature                SlowBurn           AgentBudget   LiteLLM      Langfuse   Prompto
Budget exhaustion      Pauses             Terminates    Terminates   ---        ---
Concurrent execution   Asyncio            ---           ---          ---        Async
Cost tracking          Per-call           Session       Per-key      Trace      ---
Dollar rate limit      Yes                ---           ---          ---        ---
Framework hooks        4                  2             Proxy        Many       ---
Infrastructure         Zero               Zero          Proxy        Server     Zero
Paper-ready export     Markdown + LaTeX   ---           ---          ---        ---

Project Structure

slowburn/
โ”œโ”€โ”€ src/slowburn/
โ”‚   โ”œโ”€โ”€ __init__.py                 # create_llm() entry point
โ”‚   โ”œโ”€โ”€ config.py                   # SlowBurnConfig, temp_config(), _NO_ARG sentinel
โ”‚   โ”œโ”€โ”€ constants.py                # Literal type aliases (ImageDetailLevel, ToolChoiceOption, etc.)
โ”‚   โ”œโ”€โ”€ llm_worker.py               # SlowBurnLLM asyncio worker (text, vision, multi-turn, tools)
โ”‚   โ”œโ”€โ”€ cost_accounting.py          # estimate_input_tokens(), cost_controlled_call()
โ”‚   โ”œโ”€โ”€ limits.py                   # CostLimit (dollar-denominated rate limit)
โ”‚   โ”œโ”€โ”€ pricing.py                  # PricingCache (litellm + OpenRouter pricing)
โ”‚   โ”œโ”€โ”€ reporter.py                 # CostReporter (JSON, Markdown, LaTeX export)
โ”‚   โ””โ”€โ”€ integrations/
โ”‚       โ”œโ”€โ”€ autogen.py              # AutoGen (AG2) ModelClient
โ”‚       โ”œโ”€โ”€ crewai.py               # CrewAI event bus / hooks middleware
โ”‚       โ”œโ”€โ”€ langchain.py            # LangChain callback handler
โ”‚       โ””โ”€โ”€ langgraph.py            # LangGraph agent middleware
โ”œโ”€โ”€ demos/
โ”‚   โ”œโ”€โ”€ Demo.ipynb                      # Interactive demo notebook
โ”‚   โ”œโ”€โ”€ demo_native_research_agent.py   # Research agent with web search
โ”‚   โ”œโ”€โ”€ demo_native_code_agent.py       # Code improvement agent
โ”‚   โ”œโ”€โ”€ demo_crewai_research_team.py    # CrewAI multi-agent demo
โ”‚   โ”œโ”€โ”€ demo_autogen_debate.py          # AutoGen debate demo
โ”‚   โ”œโ”€โ”€ demo_langchain_reflection.py    # LangChain chain demo
โ”‚   โ””โ”€โ”€ demo_langgraph_plan_execute.py  # LangGraph agent demo
โ””โ”€โ”€ README.md

Installation

pip install slowburn

With framework integrations:

pip install "slowburn[crewai]"       # CrewAI
pip install "slowburn[autogen]"      # AutoGen (AG2)
pip install "slowburn[langgraph]"    # LangGraph
pip install "slowburn[langchain]"    # LangChain

Everything:

pip install "slowburn[all]"         

From source (development)

git clone https://github.com/adivekar-utexas/slowburn.git
cd slowburn
pip install -e ".[dev]"

Setting up your API key

cp .env.example .env

Open .env in a text editor and fill in your API key:

OPENROUTER_API_KEY=sk-or-v1-your-key-here

SlowBurn works with any LiteLLM-compatible provider. OpenRouter is recommended because it provides unified access to 100+ models with automatic provider failover.

To run the demo: an OpenRouter API key with $0.01 pre-loaded credit is available in the supplementary-materials Google Drive folder, in the file named SlowBurn-Demo-OpenRouter-key.txt.

Note that this credit is not spendable on paid models: use the key only with free models, marked "(free)" on openrouter.ai, which have a daily limit of 1,000 requests. We recommend z-ai/glm-4.5-air:free for the demo.

If you cannot access the key, please contact the repository owner.

Running tests

# Unit tests (mocked, no API key needed)
pytest tests/ --ignore=tests/test_e2e_real_llm.py --ignore=tests/test_e2e_vision.py -v

# Full suite including real LLM calls (requires API key in .env)
pytest tests/ -v --timeout=120

Running demos

# Interactive notebook
jupyter notebook demos/Demo.ipynb

# Research agent (terminal)
cd demos && python demo_native_research_agent.py

# Code improvement agent (terminal)
cd demos && python demo_native_code_agent.py

Citation

If you use SlowBurn in your research, please cite:

@misc{divekar2026slowburn,
  author       = {Divekar, Abhishek},
  title        = {{SlowBurn}: Cost-Sustainable Concurrent Execution for Long-Horizon {LLM} Agents},
  year         = {2026},
  howpublished = {\url{https://github.com/adivekar-utexas/slowburn}},
}

License

MIT
