# litecrew

Multi-agent orchestration in ~100 lines. No magic. No vendor lock-in.
```python
from litecrew import Agent, crew

researcher = Agent("researcher", model="gpt-4o-mini")
writer = Agent("writer", model="claude-3-5-sonnet-20241022")

@crew(researcher, writer)
def write_article(topic: str) -> str:
    research = researcher(f"Research {topic}, return key facts")
    return writer(f"Write article using: {research}")

article = write_article("quantum computing")
```
That's it. That's the library.
## 🔑 BYOK — Bring Your Own Keys

litecrew never touches your API keys. We don't proxy, store, or even see them.

```bash
# Set your keys as environment variables (standard practice)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```

The official `openai` and `anthropic` Python libraries read these automatically. litecrew just calls those libraries. Your keys stay on your machine.
- ✅ No litecrew account required
- ✅ No API proxy
- ✅ No telemetry
- ✅ No key storage
- ✅ Works offline with local models (via OpenAI-compatible APIs)
## 🎯 What litecrew IS

A minimal orchestration layer for simple multi-agent workflows.
| ✅ Use litecrew when... |
|---|
| You have 2-5 agents that pass data to each other |
| You're prototyping and want to move fast |
| You want to understand every line of your orchestration code |
| You're learning how multi-agent systems work |
| You need something working in 10 minutes, not 10 hours |
Core features:
- Define agents (model + tools + system prompt)
- Sequential handoffs (A → B → C)
- Parallel fan-out (A → [B, C, D] → collect)
- Tool calling (OpenAI function calling format)
- Token tracking and cost awareness
- Optional persistent memory via soul-agent
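Since the pitch is that you can understand every line, here's what sequential handoff boils down to: a fold over callables. This is an illustrative sketch with stub functions standing in for LLM-backed agents, not litecrew's actual source:

```python
from functools import reduce

def sequential(*agents):
    # Chain agents so each one receives the previous agent's output: A -> B -> C.
    def pipeline(task: str) -> str:
        return reduce(lambda output, agent: agent(output), agents, task)
    return pipeline

# Stub callables standing in for LLM-backed agents.
research = lambda topic: f"key facts about {topic}"
write = lambda facts: f"article built from {facts}"

pipeline = sequential(research, write)
print(pipeline("AI safety"))
# article built from key facts about AI safety
```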
## 🚫 What litecrew is NOT

We're honest about scope. If you need any of these, use a full framework:
| ❌ Don't use litecrew when... | Use instead |
|---|---|
| Complex hierarchical agent management | CrewAI, AutoGen |
| Stateful conversation with branching | LangGraph |
| Production enterprise workflows | LangGraph, Temporal |
| Visual workflow builders | Flowise, n8n |
| 47 pre-built integrations | LangChain |
| Human-in-the-loop approval flows | CrewAI, custom |
| Automatic retry with exponential backoff | Tenacity + custom |
| Streaming responses | Direct API calls |
| Agent-to-agent negotiation | AutoGen |
**The deal:** We do 20% of what CrewAI does in 1% of the code. That's a tradeoff. If you need the other 80%, you've outgrown us — and that's fine.
## 📊 Comparison
| Framework | Lines of Code | Learning Curve | Flexibility | Our Take |
|---|---|---|---|---|
| litecrew | ~150 | Minutes | Limited | Start here |
| CrewAI | ~15,000 | Hours | High | Graduate to this |
| LangGraph | ~50,000 | Days | Very High | For complex flows |
| AutoGen | ~30,000 | Days | High | For agent negotiation |
Our recommendation:

1. **Start with litecrew** — get your prototype working.
2. **Hit a limitation** — you need something we don't do.
3. **Graduate to CrewAI + crewai-soul** — keep your memory layer.
## Installation

```bash
pip install litecrew
```

With providers (quotes keep the extras syntax safe in shells like zsh):

```bash
pip install "litecrew[openai]"     # OpenAI support
pip install "litecrew[anthropic]"  # Anthropic support
pip install "litecrew[all]"        # Everything including memory
```
## Usage

### Basic Agent

```python
from litecrew import Agent

agent = Agent(
    name="assistant",
    model="gpt-4o-mini",  # or "claude-3-5-sonnet-20241022"
    system="You are a helpful assistant."
)

response = agent("What is the capital of France?")
print(response)
print(agent.tokens)  # {"in": 23, "out": 15}
```
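Those token counts make cost estimation a one-liner. A sketch with illustrative per-million-token prices (check your provider's current rates; the dict shape matches `agent.tokens` above):

```python
# Illustrative per-million-token prices; check your provider's current rates.
PRICE_PER_MTOK = {"gpt-4o-mini": {"in": 0.15, "out": 0.60}}

def estimate_cost(tokens: dict, model: str) -> float:
    # `tokens` uses the {"in": ..., "out": ...} shape from agent.tokens.
    price = PRICE_PER_MTOK[model]
    return (tokens["in"] * price["in"] + tokens["out"] * price["out"]) / 1_000_000

print(f"${estimate_cost({'in': 23, 'out': 15}, 'gpt-4o-mini'):.8f}")
# $0.00001245
```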
### Sequential Handoff

```python
from litecrew import Agent, sequential

researcher = Agent("researcher", model="gpt-4o-mini")
writer = Agent("writer", model="gpt-4o-mini")
editor = Agent("editor", model="gpt-4o-mini")

pipeline = sequential(researcher, writer, editor)
result = pipeline("Write about AI safety")
```
### Parallel Execution

```python
from litecrew import Agent, parallel

security = Agent("security", system="Review for security issues.")
performance = Agent("performance", system="Review for performance.")
style = Agent("style", system="Review for code style.")

review_all = parallel(security, performance, style)
results = review_all("def get_user(id): return db.query(f'SELECT * FROM users WHERE id={id}')")
# Returns: ["SQL injection risk...", "Consider caching...", "Use parameterized queries..."]
```
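Conceptually, parallel fan-out is just a thread-pool `map` over agents. An illustrative sketch with stub callables standing in for LLM-backed agents (not litecrew's actual source):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel(*agents):
    # Fan the same task out to every agent and collect results in input order.
    def fan_out(task: str) -> list:
        with ThreadPoolExecutor(max_workers=len(agents)) as pool:
            return list(pool.map(lambda agent: agent(task), agents))
    return fan_out

# Stub reviewers standing in for LLM-backed agents.
security = lambda code: f"security notes on: {code}"
style = lambda code: f"style notes on: {code}"

review_all = parallel(security, style)
print(review_all("def f(): pass"))
# ['security notes on: def f(): pass', 'style notes on: def f(): pass']
```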
### With Tools

```python
from litecrew import Agent, tool

@tool(schema={
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"]
})
def search(query: str) -> str:
    return f"Results for: {query}"

agent = Agent("assistant", tools=[search])
response = agent("Search for the latest AI news")
```
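Since tools use the OpenAI function-calling format, a schema-carrying function maps directly onto the Chat Completions `tools` payload. A sketch of that mapping; the `@tool` decorator below is a stand-in and its `.schema` attribute is an assumption for illustration, not litecrew's documented API:

```python
import json

def tool(schema: dict):
    # Stand-in decorator: attach a JSON Schema to the function.
    # The `.schema` attribute name is an assumption for illustration.
    def wrap(fn):
        fn.schema = schema
        return fn
    return wrap

@tool(schema={
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
})
def search(query: str) -> str:
    """Search the web for a query."""
    return f"Results for: {query}"

# The OpenAI Chat Completions `tools` payload for this function:
payload = [{
    "type": "function",
    "function": {
        "name": search.__name__,
        "description": (search.__doc__ or "").strip(),
        "parameters": search.schema,
    },
}]
print(json.dumps(payload, indent=2))
```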
### With Persistent Memory

```python
from litecrew import Agent, with_memory

agent = Agent("assistant", model="gpt-4o-mini")
agent = with_memory(agent, namespace="my-assistant")

# Agent now remembers across sessions
agent("My name is Alice and I work at Acme Corp")

# ... later, even after restart ...
agent("Where do I work?")  # "You work at Acme Corp"
```
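To see what a memory wrapper involves, here's an illustrative file-backed version that prepends saved exchanges to each prompt. This is a sketch of the concept only; soul-agent's actual storage mechanism may differ:

```python
import json
import os

def with_memory(agent, namespace: str, path: str = "."):
    # Illustrative only: persist exchanges to a JSON file and prepend
    # them to each new prompt. Not soul-agent's real mechanism.
    store = os.path.join(path, f"{namespace}.json")

    def remembering(prompt: str) -> str:
        history = []
        if os.path.exists(store):
            with open(store) as f:
                history = json.load(f)
        context = "\n".join(history)
        reply = agent(f"{context}\n{prompt}" if context else prompt)
        history.append(f"user: {prompt}\nassistant: {reply}")
        with open(store, "w") as f:
            json.dump(history, f)
        return reply

    return remembering

# Demo with a stub agent that just echoes what it was shown.
echo = lambda p: f"model saw: {p}"
```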
## Testing

```bash
# Install dev dependencies
pip install "litecrew[dev]"

# Run tests
pytest tests/

# Run with coverage
pytest tests/ --cov=litecrew
```
## The Soul Ecosystem

litecrew is part of a family of simple, composable AI tools:
| Package | Purpose | When to Use |
|---|---|---|
| litecrew | Minimal orchestration | Starting out, prototypes |
| soul-agent | Persistent memory | Add memory to any agent |
| crewai-soul | CrewAI + memory | Production multi-agent |
| langchain-soul | LangChain + memory | Complex chains |
| llamaindex-soul | LlamaIndex + memory | RAG pipelines |
## Philosophy

> "Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away." — Antoine de Saint-Exupéry

Most frameworks race to add features. We race to keep them out.

**The SQLite strategy:** SQLite doesn't try to be PostgreSQL. It does one thing well and says "if you need more, use something else." That's us.
## FAQ

**Q: Why not just use CrewAI?**
A: CrewAI is great when you need it. But sometimes you just want two agents to pass data without learning a framework. That's us.

**Q: How do I add feature X?**
A: Fork it. The code is ~150 lines. Add what you need. Or graduate to CrewAI.

**Q: Will you add streaming/callbacks/hierarchies?**
A: No. Adding features makes us what we're replacing.

**Q: Is this production-ready?**
A: For simple workflows, yes. For complex enterprise needs, use CrewAI + crewai-soul.

**Q: Do you store my API keys?**
A: No. We never see them. They stay in your environment variables.
## License

MIT — Do whatever you want.
## Contributing

- Bug? Open an issue.
- Feature request? Consider whether it keeps us simple. If not, fork it.
- PR? Keep it minimal.
Built by The Menon Lab | Blog | Twitter