simagents
Lightweight multi-agent orchestration with prompt-planning workflows
simagents is a lightweight Python framework for building multi-agent workflows with:
- 🔁 Linear, parallel, and loop orchestration modes
- 🤖 Agent-level model configuration
- 🧭 Prompt-planning friendly task chaining
- 📝 Safe decision logs (reasoning summaries)
- 💾 Retry/backoff + run artifact persistence
- ⚡ Optional exact-response caching to reduce repeated LLM calls
Why simagents (vs broader frameworks)
- Workflow-first 🔁: orchestration mode is a first-class setting (linear, parallel, loop)
- Prompt-planning native 🧭: easy to build research → prompt-plan → execution flows
- Simple API ✨: define agents + tasks + workflow, then run
- Production-lite defaults 🧰: retries, logs, artifact folders, decision logs, optional cache
Install
From PyPI:
pip install simagents
For local development from the simagents/ folder:
pip install -e .
For tests/dev:
pip install -e ".[dev]"
Environment variables
simagents supports multiple provider adapters via the OpenAI SDK-compatible interface:
- OpenAIProvider
- OllamaProvider
- OllamaCloudProvider
- GroqProvider
- TogetherProvider
- OpenRouterProvider
- AnthropicProvider (Claude)
- OpenAICompatibleProvider (custom base URL)
Base env vars:
SIMAGENTS_API_KEY=your_key
SIMAGENTS_BASE_URL=https://api.openai.com/v1
If you do not pass a provider explicitly to EasyOrchestrator, simagents uses
OpenAICompatibleProvider() by default. In that case, set either:
export SIMAGENTS_API_KEY="your_api_key_here"
export SIMAGENTS_BASE_URL="https://api.openai.com/v1"
or set OpenAI's standard key:
export OPENAI_API_KEY="sk-..."
For a one-off command without saving the variables in your shell session:
OPENAI_API_KEY="sk-..." python examples/research_prompt_plan.py
For an OpenAI-compatible provider such as Ollama Cloud:
SIMAGENTS_API_KEY="your_ollama_key" \
SIMAGENTS_BASE_URL="https://ollama.com/v1" \
python examples/research_prompt_plan.py
Note: the model names in your agents must match the provider you use. The bundled
examples/research_prompt_plan.py uses gpt-4o-mini, which is an OpenAI model. If you run it against Ollama Cloud, change the example agent models to an Ollama Cloud model such as gpt-oss:120b-cloud.
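For example, an illustrative agent definition pointed at Ollama Cloud (reusing the AgentSpec fields from the Quickstart below):
from simagents import AgentSpec

# Swap the OpenAI model for an Ollama Cloud one when using that endpoint.
researcher = AgentSpec(
    name="researcher",
    role="Research specialist",
    model="gpt-oss:120b-cloud",  # instead of "gpt-4o-mini"
)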
Fallback key env var:
OPENAI_API_KEY
Provider-specific common keys:
- OLLAMA_API_KEY, OLLAMA_BASE_URL
- OLLAMA_CLOUD_API_KEY, OLLAMA_CLOUD_BASE_URL (defaults to https://ollama.com/v1)
- GROQ_API_KEY
- TOGETHER_API_KEY
- OPENROUTER_API_KEY
- ANTHROPIC_API_KEY
Web search provider keys:
- TAVILY_API_KEY
- GOOGLE_API_KEY
- GOOGLE_CSE_ID
Claude model examples:
- claude-4-6-sonnet-latest
- claude-4-7-opus-latest
Quickstart
from simagents import AgentSpec, EasyOrchestrator, RunConfig, TaskSpec, WorkflowSpec
from simagents.core.models import WorkflowMode
from simagents.llm import AnthropicProvider, OpenAIProvider
agents = [
AgentSpec(name="researcher", role="Research specialist", model="gpt-4o-mini"),
AgentSpec(name="writer", role="Technical writer", model="gpt-4o-mini"),
]
tasks = [
TaskSpec(name="research", agent_name="researcher", prompt_template="Research: {input}"),
TaskSpec(name="final", agent_name="writer", prompt_template="Write post using: {research}"),
]
workflow = WorkflowSpec(mode=WorkflowMode.LINEAR)
run_config = RunConfig(output_dir="runs", save_artifacts=True)
orch = EasyOrchestrator(
agents=agents,
tasks=tasks,
workflow=workflow,
run_config=run_config,
provider=OpenAIProvider(),
)
result = orch.run(input_text="How AI is changing bioinformatics")
print(result.final_output)
print(result.decision_log)
# Claude usage (swap provider)
# orch = EasyOrchestrator(
# agents=agents,
# tasks=tasks,
# workflow=workflow,
# run_config=run_config,
# provider=AnthropicProvider(),
# )
LLM response caching
simagents can cache exact LLM invocations to reduce token/API usage when agents repeat the same work.
Caching is disabled by default so prompt iteration remains fresh and unsurprising. Enable it in RunConfig:
run_config = RunConfig(
output_dir="runs",
save_artifacts=True,
cache_enabled=True,
cache_dir=".simagents_cache",
cache_ttl_seconds=None, # optional; set seconds to expire old entries
)
Cache keys include:
- provider class name
- provider base URL, when available
- model
- temperature
- full rendered prompt
- internal cache version
This means caching is safe and deterministic for exact repeats. If the prompt, model, temperature, or provider changes, simagents treats it as a new call.
When caching is enabled, the decision log notes whether a task stored fresh output or reused cached output.
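As a quick sketch of the behavior (reusing the Quickstart agents, tasks, and workflow; the exact decision-log wording may differ by version):
run_config = RunConfig(
    output_dir="runs",
    save_artifacts=True,
    cache_enabled=True,
    cache_dir=".simagents_cache",
)
orch = EasyOrchestrator(agents=agents, tasks=tasks, workflow=workflow, run_config=run_config)

first = orch.run(input_text="How AI is changing bioinformatics")   # calls the LLM
second = orch.run(input_text="How AI is changing bioinformatics")  # exact repeat, served from cache
print(second.decision_log)  # should note which tasks reused cached output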
Orchestration modes
- WorkflowMode.LINEAR: run tasks one by one
- WorkflowMode.PARALLEL: run tasks concurrently
- WorkflowMode.LOOP: rerun the full task chain until the stop keyword appears or max iterations are reached
In PARALLEL mode, TaskSpec.depends_on is respected as a dependency graph.
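A minimal sketch of a parallel fan-out/fan-in (the depends_on shape and the {task_name} placeholders follow the Quickstart's conventions; treat both as assumptions):
from simagents import TaskSpec, WorkflowSpec
from simagents.core.models import WorkflowMode

# "papers" and "tools" run concurrently; "final" waits for both.
tasks = [
    TaskSpec(name="papers", agent_name="researcher", prompt_template="Find key papers on: {input}"),
    TaskSpec(name="tools", agent_name="researcher", prompt_template="Survey tooling for: {input}"),
    TaskSpec(
        name="final",
        agent_name="writer",
        prompt_template="Summarize the papers ({papers}) and tools ({tools}).",
        depends_on=["papers", "tools"],  # assumed: a list of upstream task names
    ),
]
workflow = WorkflowSpec(mode=WorkflowMode.PARALLEL)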
Loop controls:
- WorkflowSpec.max_iterations
- WorkflowSpec.stop_condition_keyword
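A hedged sketch of a loop workflow, assuming both controls are plain keyword arguments on WorkflowSpec:
from simagents import WorkflowSpec
from simagents.core.models import WorkflowMode

# Rerun the full task chain until an output contains "DONE",
# or stop after 5 iterations, whichever comes first.
workflow = WorkflowSpec(
    mode=WorkflowMode.LOOP,
    max_iterations=5,
    stop_condition_keyword="DONE",
)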
Flagship example: research + prompt planning
From the simagents/ folder, run:
python examples/research_prompt_plan.py
If you installed simagents from the workspace root and want to run the example
directly from this repository, you can also run:
python simagents/examples/research_prompt_plan.py
Running the example with API keys
OpenAI, using the example as-is (gpt-4o-mini):
OPENAI_API_KEY="sk-..." python simagents/examples/research_prompt_plan.py
OpenAI-compatible endpoint:
SIMAGENTS_API_KEY="your_key" \
SIMAGENTS_BASE_URL="https://api.openai.com/v1" \
python simagents/examples/research_prompt_plan.py
Ollama Cloud one-liner:
SIMAGENTS_API_KEY="your_ollama_key" \
SIMAGENTS_BASE_URL="https://ollama.com/v1" \
python simagents/examples/research_prompt_plan.py
For Ollama Cloud, update the example models first, for example:
model="gpt-oss:120b-cloud"
You can also export keys once per terminal session:
export OPENAI_API_KEY="sk-..."
python simagents/examples/research_prompt_plan.py
This example demonstrates:
- Research agent gathers structured topic context
- Planner agent turns research into a high-quality prompt blueprint
- Writer agent executes using that prompt plan
Output artifacts
When save_artifacts=True, each run creates:
- runs/run-<timestamp>/decision_log.md
- runs/run-<timestamp>/final_output.md
- one markdown file per task name
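For instance, you can inspect the newest run's artifacts with the standard library (only the runs/run-<timestamp> layout above is assumed):
from pathlib import Path

# Pick the most recent run folder and list its artifacts.
latest = sorted(Path("runs").glob("run-*"))[-1]
for artifact in sorted(latest.iterdir()):
    print(artifact.name)

# Preview the final output.
print((latest / "final_output.md").read_text()[:200])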
Web search providers (Tavily, DuckDuckGo, Google CSE)
You can use pluggable search providers:
from simagents import (
TavilySearchProvider,
DuckDuckGoSearchProvider,
GoogleCustomSearchProvider,
format_search_results,
)
# Tavily
tavily = TavilySearchProvider() # needs TAVILY_API_KEY
print(format_search_results(tavily.search("AI bioinformatics", max_results=3)))
# DuckDuckGo (instant answer + related topics)
ddg = DuckDuckGoSearchProvider()
print(format_search_results(ddg.search("AI bioinformatics", max_results=3)))
# Google Custom Search JSON API
google = GoogleCustomSearchProvider() # needs GOOGLE_API_KEY + GOOGLE_CSE_ID
print(format_search_results(google.search("AI bioinformatics", max_results=3)))
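One natural pattern (not a built-in feature, just a sketch reusing the Quickstart orchestrator) is to ground a workflow's {input} in fresh search results:
# Feed formatted web results into the run input.
results = tavily.search("AI bioinformatics", max_results=5)
context = format_search_results(results)
result = orch.run(input_text=f"Topic: AI bioinformatics\n\nWeb context:\n{context}")
print(result.final_output)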
Lifecycle hooks
You can attach optional hooks for observability/instrumentation:
- on_step_start(step_name)
- on_step_end(step_name, output)
- on_error(step_name, exception)
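Only the hook signatures above come from simagents; how the hooks attach is version-specific, so the keyword arguments below are an assumption. Check your installed version's API for the actual wiring:
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("simagents.hooks")

def on_step_start(step_name):
    log.info("step started: %s", step_name)

def on_step_end(step_name, output):
    log.info("step finished: %s (%d chars)", step_name, len(str(output)))

def on_error(step_name, exception):
    log.error("step failed: %s: %s", step_name, exception)

# ASSUMED wiring: the exact parameter names may differ in your version.
orch = EasyOrchestrator(
    agents=agents,
    tasks=tasks,
    workflow=workflow,
    run_config=run_config,
    on_step_start=on_step_start,
    on_step_end=on_step_end,
    on_error=on_error,
)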
Testing
pytest -q
License
MIT License. See LICENSE for details.
Built with <3 on a cloudy Sunday.