LLM plugin implementing Andrej Karpathy's model consortium tweet
LLM Consortium
Inspiration
Based on Karpathy's observation:
"I find that recently I end up using all of the models and all the time. One aspect is the curiosity of who gets what, but the other is that for a lot of problems they have this 'NP Complete' nature to them, where coming up with a solution is significantly harder than verifying a candidate solution. So your best performance will come from just asking all the models, and then getting them to come to a consensus."
This plugin for the llm package implements a model consortium system with iterative refinement and response synthesis: a parallel reasoning method that orchestrates multiple diverse language models to collaboratively solve complex problems through structured dialogue, evaluation, and arbitration.
Core Algorithm Flow
```mermaid
flowchart TD
    A[Start] --> B[Get Model Responses]
    B --> C[Synthesize Responses]
    C --> D{Check Confidence}
    D -- Confidence ≥ Threshold --> E[Return Final Result]
    D -- Confidence < Threshold --> F{Max Iterations Reached?}
    F -- No --> G[Prepare Next Iteration]
    G --> B
    F -- Yes --> E
```
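The loop above can be sketched in Python. This is a simplified illustration, not the plugin's actual code; `get_responses`, `synthesize`, and `refine` are hypothetical callables standing in for the parallel model fan-out, the arbiter step, and the next-iteration prompt building:

```python
def orchestrate(prompt, get_responses, synthesize, refine,
                confidence_threshold=0.8, max_iterations=3):
    """Sketch of the consortium loop: fan out to the models, let the
    arbiter synthesize, and iterate until confidence clears the bar."""
    context = prompt
    synthesis, confidence = None, 0.0
    for _ in range(max_iterations):
        responses = get_responses(context)             # parallel model calls
        synthesis, confidence = synthesize(responses)  # arbiter synthesis + score
        if confidence >= confidence_threshold:
            break                                      # consensus reached
        context = refine(prompt, synthesis)            # fold synthesis into next round
    return synthesis, confidence
```

The real plugin also enforces a minimum iteration count and logs each round to SQLite, but the control flow follows this shape.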
Features
- Multi-Model Orchestration: Coordinate responses from multiple models in parallel.
- Iterative Refinement: Automatically refine output until a confidence threshold is achieved.
- Advanced Arbitration: Uses a designated arbiter model to synthesize and evaluate responses.
- Semantic Consensus Filtering: Cluster response embeddings and keep the densest semantic region before arbitration.
- Geometric Confidence: Persist centroid-based agreement metadata alongside arbiter decisions.
- Database Logging: SQLite-backed logging of all interactions.
- Embedding Visualization: Project saved run embeddings and export HTML visualizations.
- Configurable Parameters: Adjustable confidence thresholds, iteration limits, and model selection.
- Flexible Model Instance Counts: Specify individual instance counts via the `model:count` syntax.
- Conversation Continuation: Continue previous conversations using the `-c` or `--cid` flags, just like with standard `llm` models. (New in v0.3.2)
New Model Instance Syntax
You can define different numbers of instances per model by appending `:count` to the model name. For example:

- `o3-mini:1` runs 1 instance of o3-mini.
- `gpt-4o:2` runs 2 instances of gpt-4o.
- `gemini-2:3` runs 3 instances of gemini-2.

If no count is specified, a default instance count (default: 1) is used.
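A helper for splitting such a spec might look like this sketch (`parse_model_spec` is a hypothetical name, not the plugin's API; only a trailing `:<integer>` is treated as a count, so model names containing colons elsewhere survive intact):

```python
def parse_model_spec(spec, default_count=1):
    """Split a 'model:count' spec into (model_name, instance_count)."""
    name, sep, count = spec.rpartition(":")
    if sep and count.isdigit():    # trailing ':<int>' is an instance count
        return name, int(count)
    return spec, default_count     # no count given: fall back to the default
```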
Installation
Using uv:

```shell
uv tool install llm
```
Then install the consortium plugin:

```shell
llm install "llm-consortium[embeddings,visualize]"
# Or to install directly from this repo (requires extras for full features)
# uv pip install -e ".[embeddings,visualize,dev]"
```
For semantic clustering and visualization support, use the `[embeddings]` and `[visualize]` extras. These provide:

- `embeddings`: scikit-learn, hdbscan, openai, sentence-transformers
- `visualize`: plotly
Optional provider-specific bits:

- `CHUTES_API_TOKEN` for `--embedding-backend chutes`
- OpenAI credentials for `--embedding-backend openai`
Command Line Usage
Basic usage requires you to first save a consortium configuration (e.g., named my-consortium):
```shell
llm consortium save my-consortium \
  -m o3-mini:1 -m gpt-4o:2 -m gemini-2:3 \
  --arbiter gemini-2 \
  --confidence-threshold 0.8
```
Then invoke it using the standard llm model syntax:
```shell
llm -m my-consortium "What are the key considerations for AGI safety?"
```
This sequence will:
- Send your prompt to multiple models in parallel (using the specified instance counts).
- Gather responses along with analysis and confidence ratings.
- Use an arbiter model to synthesize these responses.
- Iterate to refine the answer until the confidence threshold is met or the maximum number of iterations is reached.
Conversation Continuation Usage (New in v0.3.2)
After running an initial prompt with a saved consortium model, you can continue the conversation:
To continue the most recent conversation:
```shell
# Initial prompt
llm -m my-consortium "Tell me about the planet Mars."

# Follow-up
llm -c "How long does it take to get there?"
```
To continue a specific conversation:
```shell
# Initial prompt (note the conversation ID, e.g., 01jscjy50ty4ycsypbq6h4ywhh)
llm -m my-consortium "Tell me about Jupiter."

# Follow-up using the ID
llm -c --cid 01jscjy50ty4ycsypbq6h4ywhh "What are its major moons?"
```
Managing Consortium Configurations
You can save a consortium configuration as a model for reuse. This allows you to quickly recall a set of model parameters in subsequent queries.
Saving a Consortium as a Model
```shell
llm consortium save my-consortium \
  --model claude-3-opus-20240229 \
  --model gpt-4 \
  --arbiter claude-3-opus-20240229 \
  --confidence-threshold 0.9 \
  --max-iterations 5 \
  --min-iterations 1 \
  --system "Your custom system prompt"
```
Once saved, you can invoke your custom consortium like this:
```shell
llm -m my-consortium "What are the key considerations for AGI safety?"
```
And continue conversations using `-c` or `--cid` as shown above.
Listing Available Strategies
```shell
llm consortium strategies
```
Semantic Strategy Example
```shell
llm consortium save test-semantic \
  -m gpt-4:2 \
  -m claude-3:2 \
  --arbiter gpt-4 \
  --strategy semantic \
  --embedding-backend chutes \
  --embedding-model qwen3-embedding-8b \
  --clustering-algorithm dbscan \
  --cluster-eps 0.35 \
  --cluster-min-samples 2
```
The semantic strategy stores per-response embeddings, consensus-cluster metadata, and arbiter-side geometric confidence in the consortium SQLite database.
Notes on Strategy Behavior
- Repeating `--strategy-param key=value` now accumulates repeated keys into lists, which is required for role definitions such as repeated `roles=...` entries.
- `strategy=elimination` automatically normalizes `judging_method` to `rank`, since the elimination strategy depends on arbiter ranking output.
- Geometric confidence measures response-shape agreement, not factual correctness. High geometric confidence means the surviving responses are close in embedding space; it does not prove the answer is true.
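As a rough illustration of the idea (not the plugin's implementation), a densest-cluster filter plus a centroid-based agreement score could look like the sketch below, assuming scikit-learn and NumPy from the `[embeddings]` extra; `consensus_filter` is a hypothetical name:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def consensus_filter(embeddings, eps=0.35, min_samples=2):
    """Keep the largest DBSCAN cluster of response embeddings (the densest
    semantic region) and score agreement as the mean cosine similarity of
    the kept responses to their centroid."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-normalize rows
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="cosine").fit_predict(X)
    clustered = labels[labels >= 0]
    if clustered.size == 0:                            # no cluster found: keep all
        keep = np.arange(len(X))
    else:
        densest = np.bincount(clustered).argmax()      # largest cluster's label
        keep = np.where(labels == densest)[0]
    centroid = X[keep].mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    geometric_confidence = float((X[keep] @ centroid).mean())
    return keep, geometric_confidence
```

With tightly grouped responses plus one outlier, the outlier is labeled noise and dropped, and the surviving responses score close to 1.0; a low score signals that even the densest cluster is loosely agreed.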
Programmatic Usage
Use the `create_consortium` helper to configure an orchestrator in your Python code. For example:

```python
from llm_consortium import create_consortium

orchestrator = create_consortium(
    models=["o3-mini:1", "gpt-4o:2", "gemini-2:3"],
    confidence_threshold=1,
    max_iterations=4,
    minimum_iterations=3,
    arbiter="gemini-2",
)

result = orchestrator.orchestrate("Your prompt here")
print(f"Synthesized Response: {result['synthesis']['synthesis']}")
```
(Note: Programmatic conversation continuation requires manual handling of the conversation object or history.)
License
Apache-2.0 License
Credits
Developed as part of the LLM ecosystem and inspired by Andrej Karpathy’s insights on collaborative model consensus.
Changelog
Please refer to the CHANGELOG.md file for documented history and updates.
File details
Details for the file llm_consortium-0.8.0.tar.gz.
File metadata
- Download URL: llm_consortium-0.8.0.tar.gz
- Upload date:
- Size: 51.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `3a5ada17160aaf792db831d4a365009c6d10eede04632257a24c3d911e9e66cb` |
| MD5 | `fd0a2cd8cf98e8ab7c55920b47a2e3cd` |
| BLAKE2b-256 | `f0b6ce82235cdcc801fd5d68b8ed1407683af897d5feaca4958cbba6b93dae53` |
File details
Details for the file llm_consortium-0.8.0-py3-none-any.whl.
File metadata
- Download URL: llm_consortium-0.8.0-py3-none-any.whl
- Upload date:
- Size: 52.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b5558dc47137d38a02dd7e5c0bb125e04fa2d8732870edc9d1d28946ddefce34` |
| MD5 | `07bf5ebf49214fbd10658dcfd319395d` |
| BLAKE2b-256 | `6d69a980fefcab95d5981e5bbe97757cf11b195cb28f532eb613c2c31209287e` |