A minimalistic approach to building AI agents
KodeAgent: The Minimal Agent Engine
KodeAgent is a frameworkless, minimalistic approach to building AI agents. Written in ~3 KLOC (~2.2K statements) of pure Python, KodeAgent is designed to be the robust reasoning core inside your larger system, not the entire platform.
✅ Why KodeAgent?
KodeAgent adheres to the Unix Philosophy: do one thing well and integrate seamlessly.
Use KodeAgent because it offers:
- ReAct, CodeAct, and Function Calling: KodeAgent supports ReAct, CodeAct, and native Function Calling paradigms out-of-the-box. This allows agents to reason, generate/execute code, or use a model's native tool-calling capabilities.
- Guidance and Auto-Correction: Includes a "Planner" to plan the steps and an internal "Observer" to monitor progress, detect loops or stalled plans, and provide corrective feedback to stay on track.
- Optimized for SLMs: The FunctionCallingAgent is specifically designed for Small Language Models (SLMs) and models with efficient function-calling support.
- Scalable: With only a few dependencies, KodeAgent integrates cleanly into serverless environments, standalone applications, or existing platforms.
- LLM Agnostic: Built on LiteLLM, KodeAgent easily swaps between models (e.g., Gemini, GPT, and Claude) and providers (e.g., Ollama) without changing your core logic.
✋ Why Not?
Also, here are a few reasons why you shouldn't use KodeAgent:
- KodeAgent is actively evolving, meaning some aspects may change.
- You want to use some of the well-known frameworks.
- You need a full-fledged platform with built-in long-term, persistent memory management.
🚀 Quick Start
Install KodeAgent via pip:
pip install -U kodeagent # Upgrade existing installation
Or if you want to clone the KodeAgent GitHub repository locally and run from there, use:
git clone https://github.com/barun-saha/kodeagent.git
python -m venv venv
source venv/bin/activate
# venv\Scripts\activate.bat # Windows
pip install -r requirements.txt
Now, in your application code, create a ReAct agent and run a task like this (see examples/_quickstart/kodeagent_quickstart.py):
from kodeagent import ReActAgent, print_response
from kodeagent.tools import read_webpage, search_web
agent = ReActAgent(
name='Web agent',
model_name='gemini/gemini-2.5-flash-lite',
tools=[search_web, read_webpage],
max_iterations=5,
)
for task in [
'What are the festivals in Paris? How do they differ from those in Kolkata?',
]:
print(f'User: {task}')
async for response in agent.run(task):
print_response(response, only_final=True)
That's it! Your agent should start solving the task, streaming updates as it goes. After the loop completes, you can access the final result of the task via agent.task.result.
You can also create a CodeActAgent, which leverages the core CodeAct pattern to generate and execute Python code on the fly for complex tasks. For example:
from kodeagent import CodeActAgent
from kodeagent.tools import read_webpage, search_web, extract_as_markdown
agent = CodeActAgent(
name='Web agent',
model_name='gemini/gemini-2.0-flash-lite',
tools=[search_web, read_webpage, extract_as_markdown],
run_env='host',
max_iterations=7,
allowed_imports=[
're', 'requests', 'ddgs', 'urllib', 'bs4',
'pathlib', 'urllib.parse', 'markitdown'
],
pip_packages='ddgs~=9.5.2;beautifulsoup4~=4.14.2;"markitdown[all]";',
)
Native Function Calling (Optimized for SLMs)
For models that natively support function calling (like Gemini, OpenAI, or specialized SLMs), you can use the FunctionCallingAgent. It includes built-in retry logic for transient SLM failures and robust error detection:
from kodeagent import FunctionCallingAgent, print_response
from kodeagent.tools import calculator
agent = FunctionCallingAgent(
# Try with your SLMs here, e.g., 'ollama/granite4:7b-a1b-h' or 'ollama/qwen3:4b-instruct-2507-fp16'
model_name='gemini/gemini-2.0-flash-lite',
tools=[calculator],
litellm_params={'temperature': 0, 'timeout': 90},
)
async for response in agent.run('What is 123 * 456?'):
print_response(response, only_final=True)
Use this Colab notebook to run the function-calling agent with several SLMs (uses a T4 GPU).
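The retry behavior mentioned above can be sketched conceptually. The toy helper below (a hypothetical with_retries function, not the agent's actual implementation, which may add backoff and error classification) simply retries a callable when it raises:

```python
import time


def with_retries(fn, max_attempts: int = 3, delay: float = 0.0):
    """Call fn(), retrying on exceptions up to max_attempts times.

    Toy sketch of retry-on-transient-failure for SLM calls; the real
    FunctionCallingAgent logic may differ (backoff, error detection).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # Exhausted all attempts; surface the error
            time.sleep(delay)  # No backoff in this sketch
```

The same idea generalizes to any flaky call: cap the attempts, re-raise on the last failure, and optionally wait between tries.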
Memory(less)
By default, any agent in KodeAgent is memoryless across tasks—each task begins with no prior context, a clean slate. To enable context from the previous task (only), use Recurrent Mode:
# Enable recurrent mode to leverage context from the previous run
async for response in agent.run('Double the previous result', recurrent_mode=True):
print_response(response)
This copies the previous task's description and result into the current task's context.
For more examples, including how to provide files as inputs, see the examples.py module and API documentation.
API Configuration
KodeAgent uses LiteLLM for model access and Langfuse or LangSmith for observability. Set your API keys as environment variables or in a .env file:
| Service | Environment Variable |
|---|---|
| Gemini | GOOGLE_API_KEY |
| OpenAI | OPENAI_API_KEY |
| Anthropic | ANTHROPIC_API_KEY |
| E2B Sandbox | E2B_API_KEY |
| Langfuse | LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY |
| LangSmith | LANGCHAIN_API_KEY, LANGCHAIN_TRACING_V2 |
Detailed configuration for various providers can be found in the LiteLLM documentation.
Code Execution
CodeActAgent executes LLM-generated code to leverage the tools. KodeAgent currently supports two different code run environments:
- host: The Python code is run on the system where you created the agent, i.e., where your application is running.
- e2b: The Python code is run in an E2B sandbox. You will need to set the E2B_API_KEY environment variable.
With host as the code running environment, no special steps are required, since it uses the current Python installation. However, with e2b, code (and tools) are copied to a different environment and are executed there. Therefore, some additional setup may be required.
You can also specify a work_dir to serve as a local workspace. For the e2b environment, any files generated by the agent in the sandbox are automatically downloaded to this local work_dir. If specified, work_dir may be a relative or an absolute path, but the directory must already exist; otherwise, a temporary directory is created and used for each run.
from kodeagent import CodeActAgent
agent = CodeActAgent(
name='Data Agent',
model_name='gemini/gemini-2.0-flash-lite',
run_env='e2b',
work_dir='/home/user/agent_workspace', # Local workspace directory to copy files to/from E2B
# ... other parameters
)
For example, the Python modules that generated code is allowed to use should be explicitly specified via allowed_imports. In addition, any Python packages that need to be installed should be listed via pip_packages, as shown in the CodeActAgent example above.
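Conceptually, an import whitelist like allowed_imports can be enforced by scanning the generated code's AST before execution. A minimal sketch (check_allowed_imports is a hypothetical helper, not KodeAgent's actual check):

```python
import ast


def check_allowed_imports(code: str, allowed: set[str]) -> list[str]:
    """Return the names of imported modules not in the allowed set.

    Illustrative sketch of import whitelisting; KodeAgent's actual
    enforcement may differ.
    """
    violations = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module] if node.module else []
        else:
            continue
        for name in names:
            # Allow 'urllib.parse' when either the full dotted path
            # or its top-level package is whitelisted
            if name not in allowed and name.split('.')[0] not in allowed:
                violations.append(name)
    return violations
```

Checking the AST (rather than, say, regex-matching import lines) catches aliased and from-imports uniformly before any code runs.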
KodeAgent is under active development. Capabilities are limited. Use with caution.
🛠️ Tools
A tool in KodeAgent is just a regular (synchronous) Python function. KodeAgent comes with the following built-in tools:
- calculator: A simple calculator tool to perform basic arithmetic operations. It imports the ast, operator, and re Python libraries.
- download_file: A tool to download a file from a given URL. It imports the requests, re, tempfile, pathlib, and urllib.parse Python libraries.
- extract_as_markdown: A tool to read file contents and return them as Markdown using MarkItDown. It imports the re, pathlib, urllib.parse, and markitdown Python libraries.
- generate_image: A tool to generate an image from a text prompt using the specified model. The (LiteLLM) model name to be used must be mentioned in the task or the system prompt. It imports the os, base64, and litellm Python libraries.
- read_webpage: A tool to read a webpage using BeautifulSoup. It imports the re, requests, urllib.parse, and bs4 Python libraries.
- search_arxiv: A tool to search arXiv for research papers and return summaries and links. It imports the arxiv library.
- search_web: A web search tool using DuckDuckGo to fetch top search results. It imports the datetime, random, and time Python libraries.
- search_wikipedia: A tool to search Wikipedia and return summaries and links. It imports the wikipedia library.
- transcribe_audio: A tool to transcribe audio files using OpenAI's Whisper via the Fireworks API. Requires the FIREWORKS_API_KEY environment variable to be set. It imports the os and requests Python libraries.
- transcribe_youtube: A tool to fetch YouTube video transcripts. It imports the youtube_transcript_api library.
Check out the docstrings of these tools in the tools.py module for more details.
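For illustration, a safe arithmetic evaluator in the spirit of the calculator tool might look like this (a sketch only; the actual tool code in tools.py may differ). It parses the expression with ast and only evaluates a whitelist of operators, so arbitrary code is rejected:

```python
def safe_calculate(expression: str) -> float:
    """Safely evaluate a basic arithmetic expression.

    Illustrative sketch only; not the actual calculator tool.
    """
    import ast
    import operator

    # Whitelisted AST operator nodes mapped to their implementations
    ops = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
        ast.USub: operator.neg,
    }

    def eval_node(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](eval_node(node.left), eval_node(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in ops:
            return ops[type(node.op)](eval_node(node.operand))
        # Anything else (names, calls, attributes) is rejected
        raise ValueError(f'Unsupported expression: {expression!r}')

    return eval_node(ast.parse(expression, mode='eval').body)
```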
To add your own custom tools, simply define a Python function and pass it to the agent via the tools parameter. For example:
def my_custom_tool(text: str) -> str:
"""
A custom tool that does something with the input text and returns a result.
Args:
text (str): The input text to process.
Returns:
str: The processed result.
"""
return text
agent = ReActAgent(
name='Custom Tool Agent',
model_name='gemini/gemini-2.5-flash-lite',
tools=[my_custom_tool],
max_iterations=5,
)
All module imports and variables should be defined inside the tool function body. If you're using CodeActAgent, KodeAgent executes the tool function in isolation, so the function must be self-contained.
For further details, refer to the API documentation. Note: async tools are not supported.
🔭 Observability
In addition to the logs, KodeAgent enables agent observability via third-party solutions, such as Langfuse and LangSmith.
To enable tracing, set the relevant environment variables (e.g., LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY for Langfuse, or LANGCHAIN_API_KEY and LANGCHAIN_TRACING_V2='true' for LangSmith). Note that langsmith is not installed by default with KodeAgent and must be installed separately with pip install langsmith. Then, in the code, specify tracing_type as langfuse or langsmith when creating the agent:
from kodeagent import ReActAgent
agent = ReActAgent(
name='Web agent',
model_name='gemini/gemini-2.5-flash-lite',
tools=[search_web, read_webpage],
tracing_type='langfuse', # or 'langsmith'
)
Tracing is disabled by default (rather, a no-op tracer is used). You will need to explicitly enable it, as shown in the code snippet above. The screenshot below shows a sample trace of KodeAgent running a task on the Langfuse dashboard:
⊷ Sequence Diagram for CodeAct Agent (via CodeRabbit)
sequenceDiagram
autonumber
actor User
participant Agent
participant Planner
participant LLM as LLM/Prompts
participant Tools
User->>Agent: run(task)
Agent->>Planner: create_plan(task)
Planner->>LLM: request AgentPlan JSON (agent_plan.txt)
LLM-->>Planner: AgentPlan JSON
Planner-->>Agent: planner.plan set
loop For each step
Agent->>Planner: get_formatted_plan()
Agent->>LLM: codeact prompt + {plan, history}
LLM-->>Agent: Thought + Code
Agent->>Tools: execute tool call(s)
Tools-->>Agent: Observation
Agent->>Planner: update_plan(thought, observation, task_id)
end
Agent-->>User: Final Answer / Failure (per codeact spec)
🧪 Run Tests
To run unit tests, use:
python -m pytest .\tests\unit -v --cov --cov-report=html
For integration tests involving calls to APIs, use:
python -m pytest .\tests\integration -v --cov --cov-report=html
Gemini and E2B API keys should be set in the .env file for integration tests to work.
A Kaggle notebook for benchmarking KodeAgent is also available.
Scalene Profiling
The following results were measured using Scalene and psutil on a development machine (Windows 10, Python 3.10). "Peak Memory" refers to the maximum Resident Set Size (RSS), i.e., the actual RAM used by the process.
python -m scalene run -c scalene.yaml -m src.kodeagent.examples
scalene view
| Agent Type | Avg. Runtime | Peak Memory (Scalene) | Peak Memory (psutil) | Notes |
|---|---|---|---|---|
| ReActAgent | ~58s | 30MB | 294MB | Faster, because tools are directly executed |
| CodeActAgent | ~155s | 21MB | 253MB | Slower, because of code review and execution |
Notes:
- Scalene reports the maximum sampled RSS during profiling, which is useful for comparing code sections but may miss short-lived or end-of-program memory spikes.
- psutil reports the actual RSS at program end, which is typically higher and reflects the real-world memory footprint.
- Actual memory usage may vary depending on your system, Python version, and workload.
🗺️ Roadmap & Contributions
To be updated.
🙏 Acknowledgement
KodeAgent heavily borrows code and ideas from different places, such as:
- LlamaIndex
- Smolagents
- LangGraph
- Building ReAct Agents from Scratch: A Hands-On Guide using Gemini
- LangGraph Tutorial: Build Your Own AI Coding Agent
- Aider, Antigravity, CodeRabbit, GitHub Copilot, Jules, ...
⚠️ DISCLAIMER & LIABILITY
AI agents can occasionally cause unintended or unpredictable side effects. We urge users to use KodeAgent with caution. Always review generated code and test agents rigorously in a constrained, non-production environment before deployment.
LIMITATION OF LIABILITY: By using this software, you agree that KodeAgent, its developers, contributors, supporters, and any other associated entities shall not be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software.
File details
Details for the file kodeagent-0.11.0.tar.gz.
File metadata
- Download URL: kodeagent-0.11.0.tar.gz
- Size: 95.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5878fb527a78facf3a82fd3efc685c2e3d9b97994db88bcbbf19abda397a4ca6 |
| MD5 | 0868b3621046c4512295142065f87610 |
| BLAKE2b-256 | bd92d69532d9b33d792efedc8becba526da15f95128ec269eb46c7ebe28db985 |
Provenance
The following attestation bundle was made for kodeagent-0.11.0.tar.gz:
- Publisher: publish-to-pypi.yml on barun-saha/kodeagent
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: kodeagent-0.11.0.tar.gz
- Subject digest: 5878fb527a78facf3a82fd3efc685c2e3d9b97994db88bcbbf19abda397a4ca6
- Sigstore transparency entry: 1075684225
- Permalink: barun-saha/kodeagent@47f4f7e50a92fb59e3f6c8ba57bbbb662d5a79c5
- Branch / Tag: refs/tags/v0.11.0
- Owner: https://github.com/barun-saha
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-to-pypi.yml@47f4f7e50a92fb59e3f6c8ba57bbbb662d5a79c5
- Trigger Event: push
File details
Details for the file kodeagent-0.11.0-py3-none-any.whl.
File metadata
- Download URL: kodeagent-0.11.0-py3-none-any.whl
- Size: 98.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 391b842e584cf2d7576b386404274d5640f66bf57714b52aa21a4e81aba794ce |
| MD5 | 51f00152f749843a5616ccb23f58da86 |
| BLAKE2b-256 | 86a569d9e8455b14d0ef505d5f771bc535efadf7eea63f6ba8238261f210b7da |
Provenance
The following attestation bundle was made for kodeagent-0.11.0-py3-none-any.whl:
- Publisher: publish-to-pypi.yml on barun-saha/kodeagent
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: kodeagent-0.11.0-py3-none-any.whl
- Subject digest: 391b842e584cf2d7576b386404274d5640f66bf57714b52aa21a4e81aba794ce
- Sigstore transparency entry: 1075684263
- Permalink: barun-saha/kodeagent@47f4f7e50a92fb59e3f6c8ba57bbbb662d5a79c5
- Branch / Tag: refs/tags/v0.11.0
- Owner: https://github.com/barun-saha
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish-to-pypi.yml@47f4f7e50a92fb59e3f6c8ba57bbbb662d5a79c5
- Trigger Event: push