API Optimization Framework powered by evolutionary algorithms, multi-armed bandits, and agent societies
Project description
The Convergence 🎯
Stop guessing which AI settings to use. Get proven answers in 30 minutes.
The Convergence automatically tests thousands of API configurations and learns which ones work best for your use case, saving you time and money while improving results with every run.
v0.1.0 Beta | Actively developed, expect rapid improvements | Built Oct 11-15, 2025
The Problem It Solves
You want to use AI tools like ChatGPT or BrowserBase in production, but you're stuck guessing:
- Which model? GPT-4? Claude? Gemini? Llama?
- What temperature? 0.3? 0.7? 1.0?
- Which settings work for YOUR specific use case?
Manual testing takes weeks and wastes money. The Convergence gives you answers in 30 minutes.
🔄 What Makes This Different: The Self-Learning Loop
Unlike static tools, Convergence learns from every run.
Run 1 (Monday): Tests 20 configs → Finds temperature=0.7 works best
Run 2 (Wednesday): Starts from 0.7 → Discovers gpt-3.5-turbo beats gpt-4
Run 3 (Friday): Builds on both → Optimizes to 0.72 + refined prompts
    ↓
Gets better every time
Legacy System (Enabled by default): Every optimization is automatically saved to a local SQLite database. Future runs build on past successes, creating a continuous improvement loop that makes your API usage smarter over time. No setup required - it just works!
Quick Start (30 Minutes)
1. Install
pip install the-convergence
2. Set Your API Key
export OPENAI_API_KEY="sk-..."
3. Run Optimization
convergence init
# Use a built-in example
convergence optimize optimization.yaml
4. Get Results
✅ Optimization Complete!
Best config: gpt-3.5-turbo, temperature=0.72
Score: 0.94 (↑23% vs default)
Cost: $0.002/call (↓60% vs gpt-4)
Results saved to: ./results/optimization_run/
That's it. Next time you run, it continues from where it left off.
How It Works (3 Steps)
1. You Define
- API: Which service to optimize (OpenAI, BrowserBase, your custom API)
- Search Space: Parameters to test (temperature, model, timeout, etc.)
- Test Cases: Scenarios that matter to you
- Metrics: What "better" means (quality, speed, cost)
2. Convergence Optimizes
- MAB (Multi-Armed Bandits): Smart exploration using Thompson Sampling
- Evolution: Genetic algorithms breed winning configurations
- RL Meta-Optimizer: Learns patterns across runs
- Legacy System: Saves everything for continuous improvement
- Agent Society (Optional): RLP + SAO for advanced learning
3. You Get Results
- Best configuration (proven with data)
- Detailed reports (JSON, CSV, Markdown)
- Full audit trail (every test documented)
- Learning history (builds on past runs)
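The evolution step above can be pictured with a toy genetic-algorithm sketch. This is not the framework's actual code; the search space and the scoring function below are invented for illustration:

```python
import random

def crossover(a, b):
    """Each parameter of the child comes from one of the two parents."""
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(cfg, space, rate=0.2):
    """With probability `rate`, resample a parameter from the search space."""
    return {k: random.choice(space[k]) if random.random() < rate else v
            for k, v in cfg.items()}

def next_generation(scored, space, size=6, elite=2):
    """Carry the top `elite` configs forward unchanged, breed the rest."""
    ranked = [cfg for cfg, _ in sorted(scored, key=lambda p: p[1], reverse=True)]
    children = ranked[:elite]  # elitism: best configs survive as-is
    while len(children) < size:
        a, b = random.sample(ranked[:4], 2)  # parents drawn from the top 4
        children.append(mutate(crossover(a, b), space))
    return children

space = {"temperature": [0.3, 0.5, 0.7], "model": ["gpt-4", "gpt-3.5-turbo"]}
population = [{"temperature": t, "model": m}
              for t in space["temperature"] for m in space["model"]]
scored = [(cfg, 1.0 - abs(cfg["temperature"] - 0.7)) for cfg in population]  # toy score
print(next_generation(scored, space))
```

Elitism guarantees the best-known config is never lost, while crossover and mutation keep exploring nearby configurations.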
Key Features
🔄 Self-Learning Loop (Enabled by Default)
Every run improves on the last. The legacy system automatically tracks what works and starts future optimizations from proven winners. No configuration needed - it's enabled by default for the best experience.
🔒 Privacy-First
All data stays on your device. Dual storage (SQLite + files) with no cloud dependency. Your API keys and results never leave your computer.
🎯 Universal API Support
Works with any HTTP API:
- LLMs: OpenAI, Anthropic, Gemini, Groq, Azure, Together AI
- Web Automation: BrowserBase, Apify, Firecrawl
- Custom APIs: Your own endpoints
📊 Multi-Objective Optimization
Balance quality, speed, and cost simultaneously. Define your own metrics.
🤖 Agent Society (Experimental)
Advanced AI features that learn as they optimize:
- RLP (Reasoning-based Learning) - NVIDIA Oct 2024 - Agents think before acting ✅ Active
- SAO (Self-Alignment Optimization) - Hugging Face Oct 2024 - Self-generate training data ✅ Active
- MAB (Multi-Armed Bandits) - Thompson Sampling for smart exploration ✅ Active
See convergence/plugins/learning/README.md for details.
Quick Examples
| Use Case | Command | Time | Savings |
|---|---|---|---|
| ChatGPT settings | convergence optimize examples/ai/openai/openai_responses_optimization.yaml | 30 min | 60% cost ↓ |
| Azure O4-Mini reasoning | convergence optimize examples/ai/azure/azure_o4_mini/azure_o4_mini_optimization.yaml | 45 min | 70% latency ↓ |
| Groq fast inference | convergence optimize examples/ai/groq/groq_optimization.yaml | 20 min | 40% speed ↑ |
| BrowserBase automation | convergence optimize examples/web_browsing/browserbase/browserbase_optimization.yaml | 60 min | 94% success rate |
Installation
Quick Install
pip install the-convergence
From Source
git clone https://github.com/persist-os/the-convergence.git
cd the-convergence
pip install -e .
Verify Installation
convergence --version
convergence info
Configuration (Simple)
Every optimization needs 4 things:
1. API Configuration
api:
  name: "my_api"
  endpoint: "https://api.service.com/endpoint"
  auth:
    type: "bearer"
    token_env: "API_KEY"  # Environment variable name
2. Search Space
search_space:
  parameters:
    temperature: [0.3, 0.5, 0.7]
    model: ["gpt-4", "gpt-3.5-turbo"]
3. Test Cases (JSON file)
[
  {
    "input": {"prompt": "Explain quantum computing"},
    "expected": {"min_length": 100}
  }
]
4. Metrics
evaluation:
  metrics:
    quality: {weight: 0.6, type: "higher_is_better"}
    cost: {weight: 0.4, type: "lower_is_better"}
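Conceptually, the weighted metrics combine into a single score. A minimal sketch of that arithmetic, assuming each metric value is already normalized to the 0–1 range (the framework's actual normalization may differ):

```python
def combined_score(values, metrics):
    """Weighted sum of normalized metrics; lower_is_better metrics are inverted."""
    total = 0.0
    for name, spec in metrics.items():
        v = values[name]  # assumed already normalized to 0..1
        if spec["type"] == "lower_is_better":
            v = 1.0 - v  # invert so that lower raw cost scores higher
        total += spec["weight"] * v
    return total

metrics = {
    "quality": {"weight": 0.6, "type": "higher_is_better"},
    "cost": {"weight": 0.4, "type": "lower_is_better"},
}
# 0.6 * 0.9 + 0.4 * (1 - 0.2) = 0.86, up to float rounding
print(combined_score({"quality": 0.9, "cost": 0.2}, metrics))
```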
Run it:
convergence optimize config.yaml
Results You Get
Every optimization generates:
results/
├── best_config.json       # Your answer (use this in production)
├── detailed_results.json  # All experiments with full data
├── detailed_results.csv   # Spreadsheet format for analysis
├── experiments.csv        # Generation-by-generation results
└── report.md              # Human-readable summary
Plus: Everything automatically saved to legacy database for future runs (enabled by default).
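In production you would read best_config.json back and pass its parameters to your API client. A sketch of that step (the exact JSON schema is an assumption; the fallback values mirror the example run above so the snippet runs standalone):

```python
import json
from pathlib import Path

path = Path("results/optimization_run/best_config.json")
if path.exists():
    # Load the winning configuration produced by a real optimization run
    best = json.loads(path.read_text())
else:
    # Fallback mirroring the example output, so the snippet runs standalone
    best = {"model": "gpt-3.5-turbo", "temperature": 0.72}

print(best["model"], best["temperature"])
```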
Real Results from Beta Testing
OpenAI Chat Optimization
Before: temperature=1.0, max_tokens=2048, model=gpt-4
After: temperature=0.72, max_tokens=500, model=gpt-3.5-turbo
Result: +23% quality, -60% cost, +40% speed
Azure O4-Mini Reasoning
Before: max_completion_tokens=10000
After: max_completion_tokens=2000, presence_penalty=-0.5
Result: +15% reasoning accuracy, -70% latency
Groq Fast Inference
Found: model=llama-3.1-8b-instant, temperature=1.2
Result: Score 0.93 (excellent), optimal for creative tasks
When to Use This
✅ Perfect For
- Finding optimal LLM parameters (which model? what temperature?)
- Reducing API costs while maintaining quality
- A/B testing configurations at scale
- Making AI tools production-ready (BrowserBase, Apify, etc.)
- Companies unsure which AI provider to use
⚠️ Not Ideal For
- Single API calls (no need to optimize)
- Real-time inference (optimization runs offline)
- APIs with no configurable parameters
Advanced Features (For Technical Users)
Custom Evaluators
Write Python functions to score responses your way:
def score_response(result, expected, params, metric=None):
    """Your business logic here."""
    score = 0.0
    if "customer_satisfied" in result:
        score += 0.5
    if result.get("latency_ms", float("inf")) < 200:  # .get avoids KeyError
        score += 0.5
    return score  # 0.0 to 1.0
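A quick sanity check of such an evaluator (the function is restated here so the snippet runs standalone; the response shapes are hypothetical):

```python
def score_response(result, expected, params, metric=None):
    """Score a response between 0.0 and 1.0 using simple business rules."""
    score = 0.0
    if "customer_satisfied" in result:
        score += 0.5
    if result.get("latency_ms", float("inf")) < 200:
        score += 0.5
    return score

fast_happy = {"customer_satisfied": True, "latency_ms": 150}
slow_quiet = {"latency_ms": 500}
print(score_response(fast_happy, expected={}, params={}))  # 1.0
print(score_response(slow_quiet, expected={}, params={}))  # 0.0
```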
Multi-Objective Optimization
metrics:
  accuracy: {weight: 0.5, type: "higher_is_better"}
  latency: {weight: 0.3, type: "lower_is_better"}
  cost: {weight: 0.2, type: "lower_is_better"}
Parallel Execution
optimization:
  execution:
    parallel_workers: 5   # Run 5 tests simultaneously
    generations: 10       # More generations = better results
    population_size: 20   # Test 20 configs per generation
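The effect of parallel_workers can be approximated with a thread pool. This is a sketch of the idea, not the runner's actual implementation; the evaluate stub stands in for a real API call:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(config):
    """Stand-in for one API test; the real runner would call the API under test."""
    return {"config": config, "score": 1.0 - abs(config["temperature"] - 0.7)}

configs = [{"temperature": t} for t in (0.3, 0.5, 0.7, 0.9, 1.1)]

# parallel_workers: 5 -> up to five evaluations in flight at once
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(evaluate, configs))

best = max(results, key=lambda r: r["score"])
print(best["config"])  # {'temperature': 0.7}
```

Because the work is I/O-bound (waiting on HTTP responses), threads are enough to overlap the API calls.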
Observability (Optional)
Integration with Weights & Biases Weave for tracking:
- Every API call logged
- Cost tracking per generation
- Parameter distributions visualized
- Experiment comparisons
weave:
  enabled: true
  organization: "your-org"
  project: "api-optimization"
Architecture Overview
Input (YAML + Test Cases)
            ↓
┌──────────────────────────────────────┐
│         Optimization Runner          │
│                                      │
│  1. MAB: Explore configurations      │
│  2. Evolution: Breed winners         │
│  3. RL: Learn patterns               │
│  4. Legacy: Save for next time       │
└──────────────────────────────────────┘
            ↓
Output (Best Config + Reports)
Technologies Used:
- Multi-Armed Bandits (Thompson Sampling) - Smart exploration
- Genetic Algorithms (mutation, crossover, elitism) - Configuration evolution
- RL Meta-Optimizer - Pattern learning across runs
- Legacy System - Continuous improvement over time
- Agent Society (optional) - RLP, SAO, memory systems
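For intuition, Thompson Sampling with Beta-Bernoulli arms looks like this. It is a textbook sketch, not the framework's bandit internals, and the success rates are invented:

```python
import random

class ThompsonArm:
    """Beta-Bernoulli arm: successes/failures update a Beta posterior."""
    def __init__(self):
        self.wins, self.losses = 1, 1  # Beta(1, 1) uniform prior

    def sample(self):
        """Draw a plausible success rate from the current posterior."""
        return random.betavariate(self.wins, self.losses)

    def update(self, reward):
        if reward:
            self.wins += 1
        else:
            self.losses += 1

arms = {"gpt-4": ThompsonArm(), "gpt-3.5-turbo": ThompsonArm()}
true_rates = {"gpt-4": 0.6, "gpt-3.5-turbo": 0.8}  # unknown to the bandit

for _ in range(500):
    # Sample each arm's posterior and pull the arm with the best draw
    choice = max(arms, key=lambda k: arms[k].sample())
    arms[choice].update(random.random() < true_rates[choice])

print(max(arms, key=lambda k: arms[k].wins))  # usually "gpt-3.5-turbo"
```

Arms that keep paying off get sampled more often, while uncertain arms still get occasional exploration, which is why the bandit converges on the stronger configuration without exhaustively testing every option.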
Documentation
- 📘 User Guide - Comprehensive guide
- 🚀 Getting Started - Setup and first optimization
- 🧪 Test Augmentation - Auto-generate tests
- 🛡️ Security Guide - Best practices for API keys
- 🔧 Troubleshooting - Common issues and solutions
- 🤝 Contributing - How to contribute
- 📝 Changelog - Version history
Project Structure
convergence/
├── convergence/          # Core package
│   ├── cli/              # Command-line interface
│   ├── core/             # Protocols and configuration
│   ├── optimization/     # Main optimization engine
│   │   ├── runner.py     # Orchestrates everything
│   │   ├── evolution.py  # Genetic algorithms
│   │   ├── evaluator.py  # Scoring logic
│   │   └── adapters/     # Provider-specific handlers
│   ├── plugins/          # MAB, RLP, SAO, memory
│   ├── storage/          # Multi-backend persistence
│   ├── legacy/           # Learning history system
│   └── generator/        # OpenAPI auto-generation
├── examples/             # Ready-to-run examples
│   ├── ai/               # LLM optimizations
│   │   ├── openai/
│   │   ├── azure/
│   │   └── groq/
│   └── web_browsing/     # Browser automation
└── documentation/        # Guides and references
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
Ways to contribute:
- 🐛 Report bugs
- ✨ Request features
- 📖 Improve documentation
- 💻 Submit pull requests
- 🎨 Add new examples
Support & Community
- 📧 Issues: GitHub Issues
- 💬 Discussions: GitHub Discussions
- 📖 Documentation: Full docs
- ⭐ Star on GitHub: Help others discover this project
License
MIT License - see LICENSE
Free to use for personal and commercial projects.
Roadmap
v0.2.0 (Coming Soon)
- Automated test suite
- Enhanced OpenAPI auto-generation
- More provider adapters
- Performance improvements
v0.3.0 (Future)
- Web UI dashboard
- Real-time optimization streaming
- Distributed optimization (multi-machine)
- Advanced analytics
v1.0.0 (Goal)
- Production stability guarantees
- Enterprise features
- Cloud-hosted option
FAQ
Q: Do I need to understand machine learning to use this?
A: No! Just define your API and test cases. The system handles all optimization automatically.
Q: What if I don't have test cases?
A: Start with 2-3 examples and use test case evolution to generate more automatically.
Q: Does my data leave my computer?
A: No. Everything runs locally. Your API keys and results stay on your device (SQLite + file storage).
Q: Can I use this with my custom API?
A: Yes! Works with any HTTP/HTTPS API. Just configure the endpoint and authentication.
Q: How much does it cost to run?
A: Only the cost of API calls you make. Convergence is free and open source. Typical optimization: $0.50-$5.00 depending on API and number of tests.
Q: Will it work without internet?
A: You need internet to call the APIs being optimized, but Convergence itself runs locally.
Credits & Acknowledgments
Built by PersistOS, Inc. | October 11-15, 2025
Research foundations:
- Multi-Armed Bandits (Thompson Sampling)
- Genetic Algorithms (Darwin, 1859)
- RLP (NVIDIA Research)
- SAO (Hugging Face Research)
Open source libraries:
- LiteLLM for universal LLM access
- Pydantic for type safety
- HTTPX for async networking
- Weights & Biases Weave for observability
Quick Links
| Resource | Link |
|---|---|
| GitHub | persist-os/the-convergence |
| PyPI | the-convergence |
| Issues | Report bugs |
| Discussions | Ask questions |
| Examples | examples/ |
Made with ❤️ for anyone tired of guessing which AI settings to use
Stop testing manually. Start optimizing automatically.
🎯 The Convergence - Find the perfect API settings in 30 minutes, not 2 weeks.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file the_convergence-0.1.3.tar.gz.
File metadata
- Download URL: the_convergence-0.1.3.tar.gz
- Upload date:
- Size: 241.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c6f15a5b824e7571b65aa0abf9bd15bd4396919466d676542b8c42dcbc5f835f |
| MD5 | 0993e8b883b48cfa7f2f60555cf39c46 |
| BLAKE2b-256 | 0d4595a7b1da24705f811d537acd8490f7445285ad23b57a180ede4911e9e710 |
Provenance
The following attestation bundles were made for the_convergence-0.1.3.tar.gz:
Publisher: publish.yaml on persist-os/the-convergence
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: the_convergence-0.1.3.tar.gz
- Subject digest: c6f15a5b824e7571b65aa0abf9bd15bd4396919466d676542b8c42dcbc5f835f
- Sigstore transparency entry: 630368893
- Permalink: persist-os/the-convergence@673409bf6af4a16934b71cf76a17b46aec3d5c76
- Branch / Tag: refs/tags/v0.1.3
- Owner: https://github.com/persist-os
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yaml@673409bf6af4a16934b71cf76a17b46aec3d5c76
- Trigger Event: push
File details
Details for the file the_convergence-0.1.3-py3-none-any.whl.
File metadata
- Download URL: the_convergence-0.1.3-py3-none-any.whl
- Upload date:
- Size: 272.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 587da70df35100db54f8eaa4956d388b52a0868e47f8add4c880a0c5c3040b97 |
| MD5 | 53aaa6c23288a699fcca6b44937f7f9e |
| BLAKE2b-256 | d8d7e2c6ed14aeb918a7eed5273915e245639a0ba1e08fc1ce282e0bd81db2c6 |
Provenance
The following attestation bundles were made for the_convergence-0.1.3-py3-none-any.whl:
Publisher: publish.yaml on persist-os/the-convergence
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: the_convergence-0.1.3-py3-none-any.whl
- Subject digest: 587da70df35100db54f8eaa4956d388b52a0868e47f8add4c880a0c5c3040b97
- Sigstore transparency entry: 630368894
- Permalink: persist-os/the-convergence@673409bf6af4a16934b71cf76a17b46aec3d5c76
- Branch / Tag: refs/tags/v0.1.3
- Owner: https://github.com/persist-os
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yaml@673409bf6af4a16934b71cf76a17b46aec3d5c76
- Trigger Event: push