James' API LLM evaluations workflow library

A library of functions for LLM API calls, caching, and evaluation workflows that I find useful in my day-to-day work.

Installation as starter code to run evals.

Clone the repo if you want to use the example scripts. They double as starter code for, e.g., Cursor and other coding agents.

Clone and install dependencies:

git clone https://github.com/thejaminator/latteries.git
cd latteries
uv venv venv
source venv/bin/activate
uv pip install -r requirements.txt

Installation as a package.

Alternatively, you can install the package and use it as a library without the example scripts.

pip install latteries

My workflow

  • I want to call LLM APIs like normal Python code.
  • This is a library, not a framework. Frameworks make you declare magical things in configs and functions; this is just a collection of tools I find useful.
  • Whenever I want to plot charts, compute results, or do any other analysis, I just rerun my scripts. Results are cached by the content of the prompts and the inference config, so getting results out is fast.

Core functionality - caching

from latteries import load_openai_caller, ChatHistory, InferenceConfig


async def example_main():
    # Cache to the folder "cache"
    caller = load_openai_caller("cache")
    prompt = ChatHistory.from_user("How many letter 'r's are in the word 'strawberry'?")
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    # This cache is based on the hash of the prompt and the InferenceConfig.
    response = await caller.call(prompt, config)
    print(response.first_response)


if __name__ == "__main__":
    import asyncio

    asyncio.run(example_main())

Core functionality - call LLMs in parallel

from slist import Slist

from latteries import load_openai_caller, ChatHistory, InferenceConfig


async def example_parallel_tqdm():
    caller = load_openai_caller("cache")
    fifty_prompts = [f"What is {i} * {i+1}?" for i in range(50)]
    prompts = [ChatHistory.from_user(prompt) for prompt in fifty_prompts]
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    # Slist is a library of typed collection utilities.
    # par_map_async runs an async function over the list in parallel.
    results = await Slist(prompts).par_map_async(
        lambda prompt: caller.call(prompt, config),
        max_par=10,  # Parallelism limit.
        tqdm=True,  # Shows a tqdm progress bar.
    )
    result_strings = [result.first_response for result in results]
    print(result_strings)

Core functionality - support of different model providers

  • You often need to call models on OpenRouter or use a different API client, such as Anthropic's.
  • I use MultiClientCaller, which simply routes by matching on the model name.
  • See full example.
import os
from pathlib import Path

from openai import AsyncOpenAI

# Import paths assumed to mirror the earlier examples.
from latteries import AnthropicCaller, CacheByModel, CallerConfig, MultiClientCaller, OpenAICaller


def load_multi_client(cache_path: str) -> MultiClientCaller:
    """Matches based on the model name."""
    openai_api_key = os.getenv("OPENAI_API_KEY")
    openrouter_api_key = os.getenv("OPENROUTER_API_KEY")
    anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
    shared_cache = CacheByModel(Path(cache_path))
    openai_caller = OpenAICaller(api_key=openai_api_key, cache_path=shared_cache)
    openrouter_caller = OpenAICaller(
        openai_client=AsyncOpenAI(api_key=openrouter_api_key, base_url="https://openrouter.ai/api/v1"),
        cache_path=shared_cache,
    )
    anthropic_caller = AnthropicCaller(api_key=anthropic_api_key, cache_path=shared_cache)

    # Define rules for routing models.
    clients = [
        CallerConfig(name="gpt", caller=openai_caller),
        CallerConfig(name="gemini-2.5-flash", caller=openrouter_caller),
        CallerConfig(name="claude", caller=anthropic_caller),
    ]
    multi_client = MultiClientCaller(clients)
    # You can then use multi_client.call(prompt, config) to route to the right provider based on the model name.
    return multi_client
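
A hedged usage sketch, reusing ChatHistory and InferenceConfig from the earlier examples (the exact model name strings are illustrative):

async def example_multi_client():
    multi_client = load_multi_client("cache")
    prompt = ChatHistory.from_user("Name one planet.")
    # Each model string matches a CallerConfig name, which picks the provider:
    # "gpt" -> OpenAI, "gemini-2.5-flash" -> OpenRouter, "claude" -> Anthropic.
    for model in ["gpt-4o", "gemini-2.5-flash", "claude-3-5-sonnet-20240620"]:
        config = InferenceConfig(temperature=0.0, max_tokens=50, model=model)
        response = await multi_client.call(prompt, config)
        print(model, response.first_response)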

Viewing model outputs

We have a simple tool to view conversations stored as jsonl of "user" and "assistant" messages. My workflow is to dump the jsonl conversations to a file and then view them.
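
A minimal sketch of the dump step (the exact jsonl schema the viewer expects is an assumption here; I simply write one conversation of role/content messages per line):

import json

# Hypothetical dump step: one conversation per jsonl line.
conversations = [
    [
        {"role": "user", "content": "How many 'r's are in 'strawberry'?"},
        {"role": "assistant", "content": "Three."},
    ]
]
with open("outputs.jsonl", "w") as f:
    for messages in conversations:
        f.write(json.dumps({"messages": messages}) + "\n")

Then point the viewer at the file: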

streamlit run latteries/viewer.py <path_to_jsonl_file>
[Viewer screenshot]

Example scripts

These scripts evaluate multiple models and create charts with error bars.

  • Single turn evaluation, MCQ: MMLU, TruthfulQA
  • Single turn with a judge model for misalignment. TODO.
  • Multi turn evaluation with a judge model to parse the answer: "Are you sure?" sycophancy

FAQ

What if I want to repeat the same prompt without caching?

Do you have support for JSON schema calling?

  • Yes. TODO show example.

Do you have support for log probs?

  • Yes. TODO show example.

What is the difference between this and xxxx?

Publishing to PyPI

This package is set up for easy publishing to PyPI using uv. Here are the steps:

Prerequisites

  1. Install uv (if you haven't already):

    curl -LsSf https://astral.sh/uv/install.sh | sh
    
  2. Set up PyPI credentials:

Publishing Steps

  1. Test on TestPyPI first (recommended):

    ./publish-test.sh
    

    This will build and upload to TestPyPI, where you can test the package safely.

  2. Publish to PyPI:

    ./publish.sh
    

    This will build and upload to the real PyPI.

Manual Publishing

If you prefer to do it manually:

# Clean previous builds
rm -rf dist/ build/ *.egg-info/

# Install build dependencies
uv pip install --upgrade build twine

# Build the package
uv run python -m build

# Check the package
uv run python -m twine check dist/*

# Upload to TestPyPI (optional)
uv run python -m twine upload --repository testpypi dist/*

# Upload to PyPI
uv run python -m twine upload dist/*

Version Management

Update the version in two places before publishing:

  • pyproject.toml in the [project] section
  • latteries/__init__.py in the __version__ variable
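
A small hypothetical helper to catch a version mismatch before publishing (file locations per the list above):

# Hypothetical pre-publish check that the two version strings agree.
import re
import tomllib  # stdlib in Python 3.11+

with open("pyproject.toml", "rb") as f:
    pyproject_version = tomllib.load(f)["project"]["version"]

init_source = open("latteries/__init__.py").read()
init_version = re.search(r'__version__\s*=\s*["\']([^"\']+)["\']', init_source).group(1)

assert pyproject_version == init_version, f"{pyproject_version} != {init_version}"
print(f"Versions agree: {pyproject_version}")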

Package Structure

The package includes:

  • Core API calling functionality
  • Caching system
  • Multi-provider support (OpenAI, Anthropic, etc.)
  • Response viewer CLI tool (latteries-viewer)
  • Example scripts and evaluation tools

General philosophy on evals engineering

To elaborate on in the future. These aren't specific to this repo, but are principles I find helpful for people starting out.

  • Don't mutate Python objects; it causes bugs. Copy or deepcopy things like configs and prompts instead.
  • Python is a scripting language, so use it to write your scripts! Avoid writing complicated bash files when you can just write Python.
  • I hate YAML. More specifically, I hate YAML that becomes a programming language. Sorry. I just want to press "Go to references" in VSCode / Cursor and jump to where something gets referenced. YAML does not do that.
  • Keep objects as pydantic BaseModels / dataclasses. Avoid passing data around as pandas dataframes: no one (including your coding agent) knows what is in a dataframe, it is hard to read, and it can be lossy (losing types). If you want to store intermediate data, use jsonl (see the sketch after this list).
  • Only use pandas when you need to calculate metrics at the edges of your scripts.
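
A minimal sketch of that pydantic-plus-jsonl pattern (the EvalResult class and its fields are made up for illustration):

from pydantic import BaseModel


class EvalResult(BaseModel):
    model: str
    question: str
    answer: str
    correct: bool


results = [EvalResult(model="gpt-4o", question="What is 2 + 2?", answer="4", correct=True)]

# Write typed objects as jsonl: one validated object per line.
with open("results.jsonl", "w") as f:
    for r in results:
        f.write(r.model_dump_json() + "\n")

# Reading back re-validates the types instead of guessing at dataframe columns.
with open("results.jsonl") as f:
    loaded = [EvalResult.model_validate_json(line) for line in f]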
