# LLMComp - compare LLMs
Research library for black-box experiments on language models.
Very high-level: define models and prompts, and in many cases you won't need to write any code. It's optimized for convenient exploration. We used it for most of the results in our recent papers (Emergent Misalignment, Weird Generalizations).
## Installation

```
pip install llmcomp
```
## Quickstart

```python
from llmcomp import Question

MODELS = {
    "gpt-4.1": ["gpt-4.1-2025-04-14"],
    "gpt-4.1-mini": ["gpt-4.1-mini-2025-04-14"],
}

# Requires OPENAI_API_KEY env variable
question = Question.create(
    type="free_form",
    paraphrases=["Name a pretty song. Answer with the name only."],
    samples_per_paraphrase=100,
    temperature=1,
)
question.plot(MODELS, min_fraction=0.03)
df = question.df(MODELS)
print(df.head(1).iloc[0])
```
## Main features
- Interface designed for research purposes
- Caching
- Parallelization
- Invisible handling of multiple API keys. Want to compare finetuned models from two different OpenAI orgs? Just have two env variables `OPENAI_API_KEY_0` and `OPENAI_API_KEY_1`.
- Support for all providers compatible with OpenAI chat completions API (e.g. Tinker, OpenRouter). Note: OpenAI is the only provider that was extensively tested so far.
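For example, the multi-key setup needs nothing beyond the env variables (the key values below are placeholders):

```shell
# Two OpenAI keys from different orgs; LLMCompare tries both and uses
# whichever works for a given model (placeholder values).
export OPENAI_API_KEY_0="sk-org-a-placeholder"
export OPENAI_API_KEY_1="sk-org-b-placeholder"
```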
## Cookbook
Examples 1-4 demonstrate all key functionalities of LLMCompare.
| # | Example | Description |
|---|---|---|
| 1 | free_form_question.py | Basic FreeForm question. |
| 2 | next_token_question.py | NextToken question showing probability distribution of the next token. |
| 3 | rating_question.py | Rating question that extracts numeric scores from logprobs. |
| 4 | judges.py | FreeForm question with responses evaluated by judges. |
| 5 | questions_in_yaml.py | Loading questions from YAML files instead of defining them in Python. |
| 6 | configuration.py | Using the Config class to configure llmcomp settings at runtime. |
| 7 | tinker.py | Using Tinker models via OpenAI-compatible API. |
| 8 | openrouter.py | Using OpenRouter models via OpenAI-compatible API. |
| 9 | x_mod_57.py | Complete script I used for a short blogpost. |
| 10 | runner.py | Direct Runner usage for low-level API interactions. |
## Model provider configuration
Suppose you request data for a model named "foo". LLMCompare will:
- Read all env variables starting with `OPENAI_API_KEY`, `OPENROUTER_API_KEY`, or `TINKER_API_KEY`
- Pair these API keys with the appropriate URLs to create a list of (url, key) pairs
- Send a single-token request for your "foo" model using all these pairs
- If any pair works, LLMCompare will use it for processing your data
You can interfere with this process:

```python
from llmcomp import Config

# See all pairs based on the env variables
print(Config.url_key_pairs)

# Get the OpenAI client instance for a given model.
client = Config.client_for_model("gpt-4.1")
print(client.base_url, client.api_key[:16] + "...")

# Set the pairs to whatever you want.
# You can add other OpenAI-compatible providers, or e.g. local inference.
Config.url_key_pairs = [("http://localhost:8000/v1", "fake-key")]
```
Unwanted consequences:

- LLMCompare sends some nonsensical requests. E.g. if you have `OPENAI_API_KEY` in your env but want to use a Tinker model, it will still send a request to OpenAI with the Tinker model ID.
- If more than one key works for a given model name (e.g. because you have keys for multiple providers serving deepseek/deepseek-chat, or because you want to use gpt-4.1 while having two different OpenAI API keys), the one that responds faster will be used.

Both of these could be easily fixed.
## API reference

See here.

Note: this was mostly auto-generated by an LLM. I read it and it seems fine, but it might not be the best.
## Various stuff that might be useful
### Performance

You can send more parallel requests by increasing `Config.max_workers`.
Suppose you have many prompts you want to send to models. There are three options:

1. Have a separate Question object for each prompt and execute them in a loop
2. Have a separate Question object for each prompt and execute them in parallel
3. Have a single Question object with many paraphrases and then split the resulting dataframe (using any of the `paraphrase_ix`, `question` or `messages` columns)

Option 1 will be slow - the more quick questions you have, the worse. Option 2 will be fast, but you need to write the parallelization yourself; also, Question should be thread-safe, but parallel execution of questions was never tested. Option 3 will also be fast and is recommended, though note that this way you can't send different requests to different models.
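For option 3, the split is a plain group-by on the resulting dataframe. A sketch over stand-in rows (the real data comes from `question.df(MODELS)` and has more columns):

```python
from collections import defaultdict

# Stand-in rows mimicking `question.df(MODELS).to_dict("records")` - in
# llmcomp each row carries a `paraphrase_ix` column identifying the prompt.
rows = [
    {"paraphrase_ix": 0, "answer": "Clair de Lune"},
    {"paraphrase_ix": 0, "answer": "Hallelujah"},
    {"paraphrase_ix": 1, "answer": "Yesterday"},
]

# One Question with many paraphrases, split the results afterwards.
per_prompt = defaultdict(list)
for row in rows:
    per_prompt[row["paraphrase_ix"]].append(row)

print({ix: len(v) for ix, v in per_prompt.items()})  # → {0: 2, 1: 1}
```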
Parallelization within a single question is done via threads. Perhaps async would be faster. Prompting claude-opus-4.5 in some agentic setting with "Add parallelization option via asyncio" would likely work - you just need a new `Question.many_models_execute`.
### Caching

Cache is stored in `Config.cache_dir`.

Judges are assumed to be deterministic, i.e. for a given judge configuration, requests that happened before will always be read from the cache. You can read cached results via `judge_instance.get_cache()`.
Non-judge requests are cached on the level of the (question, model) pair. As a consequence:

- If you change any attribute of a question (other than the `judges` dictionary), there are no cached results - even if you only change the number of samples.
- You can change the `name` attribute to prevent the old cache from being used.
- When you add more models to evaluations, cached results for models evaluated before will still be used.
Libraries often cache on the request level. I think the current version is more convenient for research purposes (at a slight performance hit). Also, this might change in the future.
Cache is never cleared. You might need to remove it manually sometimes.
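The (question, model) granularity can be pictured as hashing all question attributes except `judges` together with the model name. This is a conceptual sketch, not llmcomp's actual implementation:

```python
import hashlib
import json

def cache_key(question_attrs: dict, model: str) -> str:
    # Everything except `judges` participates in the key, so changing any
    # other attribute (even the number of samples) invalidates the cache.
    keyed = {k: v for k, v in question_attrs.items() if k != "judges"}
    blob = json.dumps(keyed, sort_keys=True)
    return hashlib.sha256((blob + "|" + model).encode()).hexdigest()

q = {
    "type": "free_form",
    "paraphrases": ["Name a pretty song."],
    "samples_per_paraphrase": 100,
    "name": None,
}
k1 = cache_key(q, "gpt-4.1")
k2 = cache_key({**q, "samples_per_paraphrase": 200}, "gpt-4.1")  # more samples
k3 = cache_key({**q, "name": "v2"}, "gpt-4.1")                   # renamed
print(k1 != k2 and k1 != k3)  # → True: both changes bust the cache
```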
### How to use LLMCompare with a provider that is not compatible with the OpenAI interface

You can't right now, but this could be quite easy to implement. Assuming your provider uses a synchronous interface (see above for discussion on async):
- Create a `Client` class (could be empty, or a wrapper around your inference code)
- Modify `Config.client_for_model` such that it returns an object of that class for your model
- Modify `llmcomp.runner.chat_completion.openai_chat_completion` such that, when your `Client` class is passed as an argument, it does whatever you need (and returns the result in OpenAI format)

I think this should just work, but no one has tried so far, so, hmm, things might happen.
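The three steps can be sketched without llmcomp at all - the function names below mirror the steps above, but all the bodies are hypothetical stand-ins, not the library's real code:

```python
class MyLocalClient:
    """Step 1: a Client class wrapping your own inference code."""

    def generate(self, messages: list[dict]) -> str:
        # Hypothetical inference; replace with your provider call.
        return "hello from local model"

def client_for_model(model: str):
    """Step 2: return your client for your model names
    (sketch of what a patched Config.client_for_model would do)."""
    if model.startswith("local/"):
        return MyLocalClient()
    raise ValueError(f"no provider for model: {model}")

def openai_chat_completion(client, model: str, messages: list[dict]) -> dict:
    """Step 3: when your Client is passed, answer in OpenAI response format."""
    if isinstance(client, MyLocalClient):
        text = client.generate(messages)
        return {"choices": [{"message": {"role": "assistant", "content": text}}]}
    raise NotImplementedError("real OpenAI path omitted in this sketch")

resp = openai_chat_completion(
    client_for_model("local/foo"),
    "local/foo",
    [{"role": "user", "content": "hi"}],
)
print(resp["choices"][0]["message"]["content"])  # → hello from local model
```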
### Plots

I usually use `.plot()` in the exploration phase, and then write plotting code dedicated to the specific case I'm working on. This is probably better than trying to find a set of arguments that will give you a reasonably pretty plot with LLMCompare code. You'll find standalone plotting functions in `llmcomp.question.plots`.

Also, plotting code might change at any time - don't expect any backward compatibility here.
### Utils

There are some standalone functions in `llmcomp.utils` that I often find useful: `write_jsonl`, `read_jsonl`, `get_error_bars`.
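As a rough illustration of the JSONL helpers - these are stand-in implementations assuming standard JSONL semantics; llmcomp's actual signatures may differ:

```python
import json
import os
import tempfile

def write_jsonl(path: str, rows: list[dict]) -> None:
    # One JSON object per line.
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

def read_jsonl(path: str) -> list[dict]:
    with open(path) as f:
        return [json.loads(line) for line in f]

path = os.path.join(tempfile.mkdtemp(), "demo.jsonl")
write_jsonl(path, [{"a": 1}, {"a": 2}])
print(read_jsonl(path))  # → [{'a': 1}, {'a': 2}]
```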
## Planned changes

- Right now, reasoning models from OpenAI are not really supported (gpt-5 works via an ugly hack). This will be improved soon.
- I will probably add my helper code for OpenAI finetuning as a standalone element of the library (`llmcomp/finetuning`).

If there's something that would be useful for you, add an issue (or a PR, but for major changes it's better to discuss first).
## File details

Details for the file `llmcomp-1.0.0.tar.gz`.

- Download URL: llmcomp-1.0.0.tar.gz
- Upload date:
- Size: 45.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.2

| Algorithm | Hash digest |
|---|---|
| SHA256 | `7cf870ccf5f29530a6ae3bdd13d44a500a9ee72ffa0d86d23da364dc44bcff04` |
| MD5 | `51e7700eb37f00cb3cd7c65e496dc883` |
| BLAKE2b-256 | `974a7604f985cf763a2e55e2055191f7a2e7bb0f795ce168f325de7ca16d1378` |
## File details

Details for the file `llmcomp-1.0.0-py3-none-any.whl`.

- Download URL: llmcomp-1.0.0-py3-none-any.whl
- Upload date:
- Size: 29.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.2

| Algorithm | Hash digest |
|---|---|
| SHA256 | `e68ae8a7cbbc5eba448e007031e01868755408159976d115de0c00f7836d4ffc` |
| MD5 | `6e2912474c492ea41830abd8044de617` |
| BLAKE2b-256 | `93ba8dfe8d65d226ace6c8570094d3bbb505697e4c96af7f9cc4957c763f5cb7` |