
API Optimization Framework powered by evolutionary algorithms, multi-armed bandits, and agent societies


The Convergence 🎯

Stop guessing which AI settings to use. Get proven answers in 30 minutes.

The Convergence automatically tests thousands of API configurations and learns which ones work best for your use case, saving you time and money while improving results with every run.

v0.1.0 Beta | Actively developed, expect rapid improvements | Built Oct 11-15, 2025


The Problem It Solves

You want to use AI tools like ChatGPT or BrowserBase in production, but you're stuck guessing:

  • Which model? GPT-4? Claude? Gemini? Llama?
  • What temperature? 0.3? 0.7? 1.0?
  • Which settings work for YOUR specific use case?

Manual testing takes weeks and wastes money. The Convergence gives you answers in 30 minutes.


🔄 What Makes This Different: The Self-Learning Loop

Unlike static tools, Convergence learns from every run.

Run 1 (Monday):    Tests 20 configs → Finds temperature=0.7 works best
Run 2 (Wednesday): Starts from 0.7   → Discovers gpt-3.5-turbo beats gpt-4
Run 3 (Friday):    Builds on both    → Optimizes to 0.72 + refined prompts
                   ↓
            Gets better every time

Legacy System (Enabled by default): Every optimization is automatically saved to a local SQLite database. Future runs build on past successes, creating a continuous improvement loop that makes your API usage smarter over time. No setup required - it just works!
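For intuition, the loop above can be sketched with a few lines of SQLite. The table name, columns, and function names here are invented for illustration; they are not Convergence's actual internal schema:

```python
import sqlite3

def save_best(db_path, api_name, config_json, score):
    """Record a run's best config (hypothetical schema)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS runs (api TEXT, config TEXT, score REAL)")
    con.execute("INSERT INTO runs VALUES (?, ?, ?)", (api_name, config_json, score))
    con.commit()
    con.close()

def best_so_far(db_path, api_name):
    """Seed the next run from the highest-scoring past config."""
    con = sqlite3.connect(db_path)
    row = con.execute(
        "SELECT config, score FROM runs WHERE api = ? ORDER BY score DESC LIMIT 1",
        (api_name,),
    ).fetchone()
    con.close()
    return row
```

Each run appends its winner, and the next run queries the best one as a starting point.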


Quick Start (30 Minutes)

1. Install

pip install the-convergence

2. Set Your API Key

export OPENAI_API_KEY="sk-..."

3. Run Optimization

convergence init 

# Use a built-in example
convergence optimize optimization.yaml

4. Get Results

✅ Optimization Complete!
   Best config: gpt-3.5-turbo, temperature=0.72
   Score: 0.94 (↑23% vs default)
   Cost: $0.002/call (↓60% vs gpt-4)
   
   Results saved to: ./results/optimization_run/

That's it. Next time you run, it continues from where it left off.
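The saved best_config.json can then be read straight into your own client code. A minimal sketch; the exact keys inside the file are an assumption based on the report above, so inspect your own results first:

```python
import json

def load_best_config(path="results/optimization_run/best_config.json"):
    """Return the winning parameters as a dict for use in production calls."""
    with open(path) as f:
        return json.load(f)
```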


How It Works (3 Steps)

1. You Define

  • API: Which service to optimize (OpenAI, BrowserBase, your custom API)
  • Search Space: Parameters to test (temperature, model, timeout, etc.)
  • Test Cases: Scenarios that matter to you
  • Metrics: What "better" means (quality, speed, cost)

2. Convergence Optimizes

  • MAB (Multi-Armed Bandits): Smart exploration using Thompson Sampling
  • Evolution: Genetic algorithms breed winning configurations
  • RL Meta-Optimizer: Learns patterns across runs
  • Legacy System: Saves everything for continuous improvement
  • Agent Society (Optional): RLP + SAO for advanced learning
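For intuition, Thompson Sampling over a discrete set of configurations can be sketched as below. This is a toy illustration of the technique, not the library's internal code:

```python
import random

def thompson_select(stats):
    """Pick a config by sampling each one's Beta posterior.

    stats maps config name -> (successes, failures); the config with the
    highest sampled draw wins, balancing exploration and exploitation.
    """
    best_name, best_draw = None, -1.0
    for name, (wins, losses) in stats.items():
        draw = random.betavariate(wins + 1, losses + 1)  # uniform Beta(1, 1) prior
        if draw > best_draw:
            best_name, best_draw = name, draw
    return best_name
```

Configs with few trials get wide posteriors and are still sampled occasionally, which is what makes the exploration "smart" rather than purely greedy.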

3. You Get Results

  • Best configuration (proven with data)
  • Detailed reports (JSON, CSV, Markdown)
  • Full audit trail (every test documented)
  • Learning history (builds on past runs)

Key Features

🔄 Self-Learning Loop (Enabled by Default)

Every run improves on the last: the legacy system automatically records what works and seeds future optimizations from proven winners. No configuration needed.

🔒 Privacy-First

All data stays on your device. Dual storage (SQLite + files) with no cloud dependency. Your API keys and results never leave your computer.

🎯 Universal API Support

Works with any HTTP API:

  • LLMs: OpenAI, Anthropic, Gemini, Groq, Azure, Together AI
  • Web Automation: BrowserBase, Apify, Firecrawl
  • Custom APIs: Your own endpoints

📊 Multi-Objective Optimization

Balance quality, speed, and cost simultaneously. Define your own metrics.

🤖 Agent Society (Experimental)

Advanced AI features that learn as they optimize:

  • RLP (Reasoning-based Learning) - NVIDIA Oct 2024 - Agents think before acting ✅ Active
  • SAO (Self-Alignment Optimization) - Hugging Face Oct 2024 - Self-generate training data ✅ Active
  • MAB (Multi-Armed Bandits) - Thompson Sampling for smart exploration ✅ Active

See convergence/plugins/learning/README.md for details.


Quick Examples

| Use Case | Command | Time | Savings |
| --- | --- | --- | --- |
| ChatGPT settings | convergence optimize examples/ai/openai/openai_responses_optimization.yaml | 30 min | 60% cost ↓ |
| Azure O4-Mini reasoning | convergence optimize examples/ai/azure/azure_o4_mini/azure_o4_mini_optimization.yaml | 45 min | 70% latency ↓ |
| Groq fast inference | convergence optimize examples/ai/groq/groq_optimization.yaml | 20 min | 40% speed ↑ |
| BrowserBase automation | convergence optimize examples/web_browsing/browserbase/browserbase_optimization.yaml | 60 min | 94% success rate |

Installation

Quick Install

pip install the-convergence

From Source

git clone https://github.com/persist-os/the-convergence.git
cd the-convergence
pip install -e .

Verify Installation

convergence --version
convergence info

Configuration (Simple)

Every optimization needs 4 things:

1. API Configuration

api:
  name: "my_api"
  endpoint: "https://api.service.com/endpoint"
  auth:
    type: "bearer"
    token_env: "API_KEY"  # Environment variable name
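The block above maps directly onto an authenticated HTTP request. A hedged sketch using only the standard library (Convergence itself uses HTTPX internally, per its credits; `build_request` is an illustrative name, not part of its API):

```python
import json
import os
import urllib.request

def build_request(api_cfg, payload):
    """Build a bearer-authenticated POST from the `api:` block above."""
    token = os.environ[api_cfg["auth"]["token_env"]]  # read the key from env, never hardcode it
    return urllib.request.Request(
        api_cfg["endpoint"],
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```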

2. Search Space

search_space:
  parameters:
    temperature: [0.3, 0.5, 0.7]
    model: ["gpt-4", "gpt-3.5-turbo"]

3. Test Cases (JSON file)

[
  {
    "input": {"prompt": "Explain quantum computing"},
    "expected": {"min_length": 100}
  }
]
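A checker for the `expected` block above might look like the following sketch; treating `min_length` as a character count on the response text is an assumption about its semantics:

```python
def passes(response_text, expected):
    """Return True if a response satisfies the simple expectations above."""
    if "min_length" in expected:
        return len(response_text) >= expected["min_length"]
    return True  # no constraints given
```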

4. Metrics

evaluation:
  metrics:
    quality: {weight: 0.6, type: "higher_is_better"}
    cost: {weight: 0.4, type: "lower_is_better"}

Run it:

convergence optimize config.yaml
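Conceptually, the weighted metrics fold into one score. A plausible sketch, assuming each metric value is already normalized to 0..1 (the actual internal scoring may differ):

```python
def weighted_score(values, metrics):
    """Combine normalized metric values into a single 0..1 score.

    For lower_is_better metrics the value is inverted so that a higher
    combined score is always better.
    """
    total = 0.0
    for name, spec in metrics.items():
        v = values[name]
        if spec["type"] == "lower_is_better":
            v = 1.0 - v
        total += spec["weight"] * v
    return total
```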

Results You Get

Every optimization generates:

results/
├── best_config.json           # Your answer (use this in production)
├── detailed_results.json      # All experiments with full data
├── detailed_results.csv       # Spreadsheet format for analysis
├── experiments.csv            # Generation-by-generation results
└── report.md                  # Human-readable summary

Plus: Everything automatically saved to legacy database for future runs (enabled by default).


Real Results from Beta Testing

OpenAI Chat Optimization

Before:  temperature=1.0, max_tokens=2048, model=gpt-4
After:   temperature=0.72, max_tokens=500, model=gpt-3.5-turbo
Result:  +23% quality, -60% cost, +40% speed

Azure O4-Mini Reasoning

Before:  max_completion_tokens=10000
After:   max_completion_tokens=2000, presence_penalty=-0.5
Result:  +15% reasoning accuracy, -70% latency

Groq Fast Inference

Found:   model=llama-3.1-8b-instant, temperature=1.2
Result:  Score 0.93 (excellent), optimal for creative tasks

When to Use This

✅ Perfect For

  • Finding optimal LLM parameters (which model? what temperature?)
  • Reducing API costs while maintaining quality
  • A/B testing configurations at scale
  • Making AI tools production-ready (BrowserBase, Apify, etc.)
  • Companies unsure which AI provider to use

โš ๏ธ Not Ideal For

  • Single API calls (no need to optimize)
  • Real-time inference (optimization runs offline)
  • APIs with no configurable parameters

Advanced Features (For Technical Users)

Custom Evaluators

Write Python functions to score responses your way:

def score_response(result, expected, params, metric=None):
    """Your business logic here. Return a score between 0.0 and 1.0."""
    score = 0.0

    if result.get("customer_satisfied"):
        score += 0.5
    if result.get("latency_ms", float("inf")) < 200:  # .get avoids KeyError if absent
        score += 0.5

    return score  # 0.0 to 1.0

Multi-Objective Optimization

metrics:
  accuracy: {weight: 0.5, type: "higher_is_better"}
  latency: {weight: 0.3, type: "lower_is_better"}
  cost: {weight: 0.2, type: "lower_is_better"}

Parallel Execution

optimization:
  execution:
    parallel_workers: 5     # Run 5 tests simultaneously
    generations: 10         # More generations usually improve results (more API calls)
    population_size: 20     # Test 20 configs per generation
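The effect of `parallel_workers` can be sketched with the standard library's thread pool; this is a conceptual illustration, not Convergence's actual executor:

```python
from concurrent.futures import ThreadPoolExecutor

def run_generation(configs, evaluate, parallel_workers=5):
    """Score one generation of configs concurrently, best first."""
    with ThreadPoolExecutor(max_workers=parallel_workers) as pool:
        scores = list(pool.map(evaluate, configs))  # map preserves input order
    return sorted(zip(configs, scores), key=lambda pair: pair[1], reverse=True)
```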

Observability (Optional)

Integration with Weights & Biases Weave for tracking:

  • Every API call logged
  • Cost tracking per generation
  • Parameter distributions visualized
  • Experiment comparisons

weave:
  enabled: true
  organization: "your-org"
  project: "api-optimization"

Architecture Overview

Input (YAML + Test Cases)
    ↓
┌─────────────────────────────────────┐
│   Optimization Runner               │
│                                     │
│  1. MAB: Explore configurations     │
│  2. Evolution: Breed winners        │
│  3. RL: Learn patterns              │
│  4. Legacy: Save for next time      │
└─────────────────────────────────────┘
    ↓
Output (Best Config + Reports)

Technologies Used:

  • Multi-Armed Bandits (Thompson Sampling) - Smart exploration
  • Genetic Algorithms (mutation, crossover, elitism) - Configuration evolution
  • RL Meta-Optimizer - Pattern learning across runs
  • Legacy System - Continuous improvement over time
  • Agent Society (optional) - RLP, SAO, memory systems
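As a sketch of the genetic-algorithm step named above, uniform crossover and per-parameter mutation over a discrete search space could look like this (illustrative functions, not Convergence's API):

```python
import random

def crossover(parent_a, parent_b):
    """Uniform crossover: each parameter inherited from either parent."""
    return {k: random.choice((parent_a[k], parent_b[k])) for k in parent_a}

def mutate(config, search_space, rate=0.2):
    """Resample each parameter from the search space with probability `rate`."""
    return {
        k: random.choice(search_space[k]) if random.random() < rate else v
        for k, v in config.items()
    }
```

Elitism then carries the top scorers into the next generation unchanged, so good configurations are never lost to mutation.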

Documentation


Project Structure

convergence/
├── convergence/              # Core package
│   ├── cli/                 # Command-line interface
│   ├── core/                # Protocols and configuration
│   ├── optimization/        # Main optimization engine
│   │   ├── runner.py       # Orchestrates everything
│   │   ├── evolution.py    # Genetic algorithms
│   │   ├── evaluator.py    # Scoring logic
│   │   └── adapters/       # Provider-specific handlers
│   ├── plugins/             # MAB, RLP, SAO, memory
│   ├── storage/             # Multi-backend persistence
│   ├── legacy/              # Learning history system
│   └── generator/           # OpenAPI auto-generation
├── examples/                # Ready-to-run examples
│   ├── ai/                 # LLM optimizations
│   │   ├── openai/
│   │   ├── azure/
│   │   └── groq/
│   └── web_browsing/       # Browser automation
└── documentation/          # Guides and references

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Ways to contribute:

  • ๐Ÿ› Report bugs
  • โœจ Request features
  • ๐Ÿ“– Improve documentation
  • ๐Ÿ’ป Submit pull requests
  • ๐ŸŽจ Add new examples

Support & Community


License

MIT License - see LICENSE

Free to use for personal and commercial projects.


Roadmap

v0.2.0 (Coming Soon)

  • Automated test suite
  • Enhanced OpenAPI auto-generation
  • More provider adapters
  • Performance improvements

v0.3.0 (Future)

  • Web UI dashboard
  • Real-time optimization streaming
  • Distributed optimization (multi-machine)
  • Advanced analytics

v1.0.0 (Goal)

  • Production stability guarantees
  • Enterprise features
  • Cloud-hosted option

FAQ

Q: Do I need to understand machine learning to use this?

A: No! Just define your API and test cases. The system handles all optimization automatically.

Q: What if I don't have test cases?

A: Start with 2-3 examples and use test case evolution to generate more automatically.

Q: Does my data leave my computer?

A: No. Everything runs locally. Your API keys and results stay on your device (SQLite + file storage).

Q: Can I use this with my custom API?

A: Yes! Works with any HTTP/HTTPS API. Just configure the endpoint and authentication.

Q: How much does it cost to run?

A: Only the cost of API calls you make. Convergence is free and open source. Typical optimization: $0.50-$5.00 depending on API and number of tests.

Q: Will it work without internet?

A: You need internet to call the APIs being optimized, but Convergence itself runs locally.


Credits & Acknowledgments

Built by PersistOS, Inc. | October 11-15, 2025

Research foundations:

  • Multi-Armed Bandits (Thompson Sampling)
  • Genetic Algorithms (Holland, 1975; inspired by Darwinian selection)
  • RLP (NVIDIA Research)
  • SAO (Hugging Face Research)

Open source libraries:

  • LiteLLM for universal LLM access
  • Pydantic for type safety
  • HTTPX for async networking
  • Weights & Biases Weave for observability

Quick Links

| Resource | Link |
| --- | --- |
| GitHub | persist-os/the-convergence |
| PyPI | the-convergence |
| Issues | Report bugs |
| Discussions | Ask questions |
| Examples | examples/ |

Made with ❤️ for anyone tired of guessing which AI settings to use

Stop testing manually. Start optimizing automatically.

🎯 The Convergence - Find the perfect API settings in 30 minutes, not 2 weeks.



Download files

Download the file for your platform.

Source Distribution

the_convergence-0.1.3.tar.gz (241.4 kB)

Uploaded Source

Built Distribution


the_convergence-0.1.3-py3-none-any.whl (272.8 kB)

Uploaded Python 3

File details

Details for the file the_convergence-0.1.3.tar.gz.

File metadata

  • Download URL: the_convergence-0.1.3.tar.gz
  • Upload date:
  • Size: 241.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for the_convergence-0.1.3.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | c6f15a5b824e7571b65aa0abf9bd15bd4396919466d676542b8c42dcbc5f835f |
| MD5 | 0993e8b883b48cfa7f2f60555cf39c46 |
| BLAKE2b-256 | 0d4595a7b1da24705f811d537acd8490f7445285ad23b57a180ede4911e9e710 |


Provenance

The following attestation bundles were made for the_convergence-0.1.3.tar.gz:

Publisher: publish.yaml on persist-os/the-convergence

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file the_convergence-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: the_convergence-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 272.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for the_convergence-0.1.3-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 587da70df35100db54f8eaa4956d388b52a0868e47f8add4c880a0c5c3040b97 |
| MD5 | 53aaa6c23288a699fcca6b44937f7f9e |
| BLAKE2b-256 | d8d7e2c6ed14aeb918a7eed5273915e245639a0ba1e08fc1ce282e0bd81db2c6 |


Provenance

The following attestation bundles were made for the_convergence-0.1.3-py3-none-any.whl:

Publisher: publish.yaml on persist-os/the-convergence

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
