
AgentLoop

Python 3.11+ · License: MIT · OpenAI · Code style: black

An Autonomous Agent System demonstrating LLM-based decision-making in a closed-loop control architecture.

AgentLoop is not a chatbot or a prompt chain; it's a closed-loop decision system where an LLM repeatedly decides what action to take next based on evolving state until the goal is satisfied.

🎬 Quick Demo

# Install from PyPI
pip install autonomous-agentloop

# Run
agentloop "Calculate first 10 Fibonacci numbers and save to file"

Or try the Live Web Demo on Streamlit Cloud

🎯 Core Objective

This project demonstrates how an LLM can be used as a decision-making controller inside a software system, rather than as a text generator. The agent autonomously:

  • ✅ Decides what to do next
  • ✅ Chooses which action to invoke
  • ✅ Observes results
  • ✅ Recovers from failures
  • ✅ Terminates when the goal is complete

๐Ÿ—๏ธ Architecture

The Fundamental Loop

The entire system is built around this explicit control loop:

while goal_not_satisfied:
    decide_next_action()    # LLM decides
    execute_action()        # System executes
    observe_result()        # System observes
    update_state()          # System updates
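The loop above can be fleshed out into a runnable skeleton. This is a minimal sketch where `decide`, `execute`, and `State` are illustrative stand-ins, not the package's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    goal: str
    history: list = field(default_factory=list)
    is_complete: bool = False

def run_loop(state, decide, execute, max_steps=50):
    """Closed control loop: decide -> execute -> observe -> update."""
    for _ in range(max_steps):
        action = decide(state)                  # LLM decides
        result = execute(action)                # system executes
        state.history.append((action, result))  # system observes + updates
        if action == "finish":
            state.is_complete = True
            break
    return state

# Toy controller: finish once two observations are recorded.
decide = lambda s: "finish" if len(s.history) >= 2 else "search_web"
final = run_loop(State(goal="demo"), decide, execute=lambda a: "ok")
```

The `max_steps` bound is what turns an open-ended LLM loop into a system with a guaranteed termination property.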

System Design

┌─────────────────────────────────────────┐
│         User Submits Goal               │
└─────────────────┬───────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────┐
│      Decision Engine (LLM)              │
│  - Receives: Goal, State, History       │
│  - Outputs: Structured Action Decision  │
└─────────────────┬───────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────┐
│      Action Executor (System)           │
│  - search_web                           │
│  - run_code                             │
│  - write_file                           │
│  - finish                               │
└─────────────────┬───────────────────────┘
                  │
                  ▼
┌─────────────────────────────────────────┐
│         State Management                │
│  - History                              │
│  - Results                              │
│  - Errors                               │
└─────────────────────────────────────────┘

Key Design Principles

  1. Plan-Execute Separation: LLM plans, system executes
  2. Structured Actions: Fixed action space with strict schemas
  3. Explicit State: All history is tracked and passed to LLM
  4. Failure Recovery: Automatic retry with error context
  5. Safety Limits: Maximum steps and cost controls

🚀 Quick Start

Installation

# Option 1: Install from PyPI (recommended)
pip install autonomous-agentloop

# Option 2: Install from source
git clone https://github.com/Guri10/AgentLoop.git
cd AgentLoop
pip install -e .

Configuration

Create a .env file:

cp .env.example .env

Add your OpenAI API key:

OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4o-mini
MAX_STEPS=50
MAX_RETRIES=3

Basic Usage

from agentloop.main import run_agent

# Submit a goal - the agent does the rest
state = run_agent(
    goal="Research recent AI developments and create a summary report"
)

# Check results
print(f"Completed: {state.is_complete}")
print(f"Cost: ${state.total_cost:.4f}")

Run Demo Scripts

# Simple web search demo
python examples/demo_simple.py

# Research and summarization
python examples/demo_research.py

# Code execution and analysis
python examples/demo_analysis.py

🎮 Available Actions

The agent can only interact through these predefined actions:

1. search_web

Search the internet for information.

{
  "action": "search_web",
  "reasoning": "Need to find recent information",
  "input": {
    "query": "small language models 2024",
    "num_results": 5
  }
}

2. run_code

Execute Python code for analysis or computation.

{
  "action": "run_code",
  "reasoning": "Calculate statistics from data",
  "input": {
    "code": "print(sum([1, 2, 3, 4, 5]))",
    "timeout": 30
  }
}

3. write_file

Save content to a file.

{
  "action": "write_file",
  "reasoning": "Save final report",
  "input": {
    "filename": "report.md",
    "content": "# Report\n\nFindings..."
  }
}

4. finish

Complete the task and terminate.

{
  "action": "finish",
  "reasoning": "Goal accomplished",
  "input": {
    "summary": "Created research report with 3 key findings",
    "artifacts": ["report.md"]
  }
}
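A fixed action space like this is typically enforced with a dispatch table that refuses anything outside the four actions. A minimal sketch (the handler implementations here are stubs, not the package's executor):

```python
def search_web(query, num_results=5):
    # Stub: a real handler would call a search API.
    return [f"result {i} for {query}" for i in range(num_results)]

def write_file_stub(filename, content):
    # Stub: a real handler would write to ./output/.
    return f"wrote {len(content)} bytes to {filename}"

HANDLERS = {
    "search_web": lambda inp: search_web(**inp),
    "write_file": lambda inp: write_file_stub(**inp),
    "finish": lambda inp: inp.get("summary", ""),
}

def execute(decision: dict):
    """Reject any action outside the fixed action space."""
    action = decision["action"]
    if action not in HANDLERS:
        raise ValueError(f"unknown action: {action}")
    return HANDLERS[action](decision["input"])

out = execute({"action": "search_web", "input": {"query": "x", "num_results": 2}})
```

Because the LLM can only name a key in the table, it can never cause the system to run an operation the table doesn't define.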

📊 State Management

The agent maintains complete state throughout execution:

class AgentState:
    goal: str                          # Original goal
    current_step: int                  # Current step number
    max_steps: int                     # Step limit
    actions_taken: list[ActionDecision]  # Decision history
    action_results: list[ActionResult]   # Execution results
    is_complete: bool                  # Completion status
    total_cost: float                  # API cost tracking

State is passed to the LLM at each decision point, enabling:

  • Learning from past actions
  • Avoiding repeated mistakes
  • Contextual decision-making

๐Ÿ›ก๏ธ Failure Handling

AgentLoop implements robust error recovery:

  1. Retry Logic: Failed actions retry up to 3 times with error context
  2. Alternative Actions: LLM chooses different approaches after failures
  3. State Preservation: All failures are recorded and influence future decisions
  4. Safety Limits: Automatic termination at step/cost limits
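The retry-with-error-context pattern can be sketched as follows. This is a simplified stand-in for illustration, not the library's internals:

```python
def execute_with_retry(action_fn, decide_fn, max_retries=3):
    """Retry a failing action, feeding the error back into the next decision."""
    error_context = None
    for attempt in range(max_retries):
        try:
            return action_fn(decide_fn(error_context))
        except Exception as exc:
            # Record the failure so the next decision can adapt to it.
            error_context = f"attempt {attempt + 1} failed: {exc}"
    raise RuntimeError(f"gave up after {max_retries} attempts: {error_context}")

# Toy action that succeeds only once the decision incorporates the error.
def flaky(arg):
    if arg != "fixed":
        raise ValueError("bad input")
    return "ok"

result = execute_with_retry(flaky, lambda err: "fixed" if err else "bad")
```

The key detail is that the error string is passed back into the decision function, so a retry is an informed second attempt rather than a blind repeat.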

💰 Cost Tracking

The system tracks API usage and estimates costs:

# After execution
print(f"Total tokens: {agent.decision_engine.total_tokens}")
print(f"Estimated cost: ${state.total_cost:.4f}")

Typical costs with GPT-4o-mini:

  • Simple task (5-8 steps): $0.05 - $0.15
  • Medium task (10-20 steps): $0.15 - $0.50
  • Complex task (20-40 steps): $0.50 - $2.00
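Estimates like these come straight from per-token arithmetic. A back-of-envelope sketch, where the rates and per-step token counts are illustrative assumptions (check current provider pricing), not figures from the project:

```python
# Illustrative gpt-4o-mini-class rates in USD per token (assumed, verify current pricing).
INPUT_RATE = 0.15 / 1_000_000   # $0.15 per 1M prompt tokens
OUTPUT_RATE = 0.60 / 1_000_000  # $0.60 per 1M completion tokens

def estimate_cost(prompt_tokens, completion_tokens):
    """Linear cost model: tokens in each direction times their rate."""
    return prompt_tokens * INPUT_RATE + completion_tokens * OUTPUT_RATE

# A hypothetical 6-step task at ~3k prompt / ~300 completion tokens per step:
cost = 6 * estimate_cost(3_000, 300)
```

Prompt tokens dominate in a loop like this, because the full goal-plus-history context is resent at every decision point.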

🧪 Example: End-to-End Execution

$ python -m agentloop.main "Find recent Python web frameworks and create a comparison"

============================================================
🎯 GOAL: Find recent Python web frameworks and create a comparison
============================================================

--- Step 1/50 ---
🤔 Decision: search_web
💭 Reasoning: Need to find current information about Python web frameworks
✅ Success: 5 items

--- Step 2/50 ---
🤔 Decision: search_web
💭 Reasoning: Get more details on specific frameworks
✅ Success: 5 items

--- Step 3/50 ---
🤔 Decision: write_file
💭 Reasoning: Compile findings into comparison document
✅ Success: File written successfully: ./output/framework_comparison.md

--- Step 4/50 ---
🤔 Decision: finish
💭 Reasoning: Goal accomplished - comparison created
✅ Success: {'summary': 'Created comparison...', 'artifacts': [...]}

🎉 Task completed!

============================================================
📊 EXECUTION SUMMARY
============================================================
Goal: Find recent Python web frameworks and create a comparison
Status: ✅ Complete
Steps taken: 4/50
Estimated cost: $0.0234
Success rate: 4/4 actions
============================================================

๐Ÿ—๏ธ Project Structure

AgentLoop/
├── src/agentloop/
│   ├── core/
│   │   ├── agent.py          # Main decision loop
│   │   └── schemas.py        # Action/state schemas
│   ├── actions/
│   │   └── executor.py       # Action implementations
│   ├── llm/
│   │   └── decision_engine.py # LLM interface
│   └── main.py               # Entry point
├── examples/
│   ├── demo_simple.py
│   ├── demo_research.py
│   └── demo_analysis.py
├── tests/
├── output/                   # Generated artifacts
├── pyproject.toml
└── README.md

🔬 Technical Details

Why This Architecture?

Separation of Concerns:

  • LLM = Decision maker (what to do)
  • System = Executor (how to do it)

Benefits:

  • Reduces hallucination (LLM doesn't execute)
  • Improves debuggability (clear boundaries)
  • Enables testing (mock executors)
  • Demonstrates disciplined software engineering
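The testability benefit follows directly from the plan-execute split: the loop can be driven with a mock executor and a scripted decision function. A sketch, assuming a loop that takes both as parameters (which may differ from the package's actual wiring):

```python
calls = []

def mock_execute(action):
    """Record actions instead of touching the network or filesystem."""
    calls.append(action)
    return {"success": True, "output": "stubbed"}

def scripted_decide(state):
    # Deterministic stand-in for the LLM: search once, then finish.
    return "finish" if state else "search_web"

# Drive the loop with both components mocked.
state = []
for _ in range(10):
    action = scripted_decide(state)
    state.append(mock_execute(action))
    if action == "finish":
        break
```

With the LLM and the side effects both replaced by deterministic fakes, the control flow itself becomes an ordinary unit-testable function.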

Action Schema Enforcement

All LLM outputs must match strict Pydantic schemas:

from typing import Any, Dict
from pydantic import BaseModel

class ActionDecision(BaseModel):
    action: ActionType  # ActionType is a str Enum of the four action names
    reasoning: str
    input: Dict[str, Any]

Invalid outputs are rejected and retried.
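The reject-and-retry shape can be sketched with stdlib validation standing in for Pydantic (function and variable names here are illustrative):

```python
import json

ALLOWED_ACTIONS = {"search_web", "run_code", "write_file", "finish"}

def parse_decision(raw: str) -> dict:
    """Validate an LLM output against the action schema; raise on any violation."""
    decision = json.loads(raw)
    if decision.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {decision.get('action')}")
    if not isinstance(decision.get("reasoning"), str):
        raise ValueError("reasoning must be a string")
    if not isinstance(decision.get("input"), dict):
        raise ValueError("input must be an object")
    return decision

def decide_with_retry(ask_llm, max_retries=3):
    for _ in range(max_retries):
        try:
            return parse_decision(ask_llm())
        except ValueError:  # json.JSONDecodeError is a ValueError subclass
            continue
    raise RuntimeError("no valid decision after retries")

# Toy LLM that emits garbage once, then a valid decision.
outputs = iter(['not json', '{"action": "finish", "reasoning": "done", "input": {}}'])
decision = decide_with_retry(lambda: next(outputs))
```

Pydantic gives the same guarantee more declaratively: `ActionDecision.model_validate_json(raw)` raises a `ValidationError` that the retry loop can catch.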

State-Driven Decisions

The LLM receives:

  • Original goal
  • Recent history (the last 5 actions)
  • Current step count
  • Previous failures

This enables learning and adaptation.
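Passing only the last few actions keeps the prompt bounded as tasks grow long. A minimal sketch of that context assembly (the function name and prompt layout are illustrative; the window of 5 mirrors the text):

```python
def build_context(goal, actions_taken, results, step, max_history=5):
    """Assemble the decision prompt from goal, recent history, and failures."""
    recent = list(zip(actions_taken, results))[-max_history:]  # trim to window
    lines = [f"GOAL: {goal}", f"STEP: {step}"]
    for action, result in recent:
        status = "ok" if result["success"] else f"FAILED: {result['error']}"
        lines.append(f"- {action}: {status}")
    return "\n".join(lines)

ctx = build_context(
    goal="demo",
    actions_taken=[f"a{i}" for i in range(8)],
    results=[{"success": i != 7, "error": "boom"} for i in range(8)],
    step=9,
)
```

Note that failures are rendered into the prompt alongside successes; that is what lets the next decision route around a repeated mistake.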

🎓 What This Project Demonstrates

For Hiring Managers:

  • ✅ Systems architecture and design
  • ✅ LLM integration as a system component
  • ✅ Error handling and recovery patterns
  • ✅ State management
  • ✅ Clean code organization
  • ✅ Production considerations (cost tracking, limits)

Not Just Prompt Engineering:

This project shows engineering discipline:

  • Explicit control flow (not prompt chains)
  • Structured interfaces (not free-form text)
  • Testable components (separation of concerns)
  • Observable behavior (complete state tracking)

📈 Future Enhancements

Potential improvements:

  • Add more actions (read_document, database_query)
  • Implement state compression for long tasks
  • Add web UI for real-time visualization
  • Multi-agent coordination
  • Tool learning (let LLM suggest new actions)
  • Parallel action execution
  • Cost optimization with caching

๐Ÿ“ License

MIT License - see LICENSE file

๐Ÿค Contributing

Contributions welcome! This is a learning project demonstrating autonomous agents.

📧 Contact

Built as a demonstration of LLM-based control systems.


Key Insight: This is not about making the smartest LLM; it's about building a system where an LLM can make reliable decisions within a controlled environment.
