James' API LLM evaluations workflow library

A collection of tools for LLM API calls, caching, and evaluation workflows.

Library of functions that I find useful in my day-to-day work.

Installation as starter code to run evals.

Clone the repo if you want to use the example scripts. This can be useful for, e.g., Cursor and coding agents.

Clone the repo and install dependencies:

git clone git@github.com:thejaminator/latteries.git
cd latteries
uv venv
source .venv/bin/activate
uv pip install -r requirements.txt
uv pip install -e .

Minimal setup: OpenAI API key. Create a .env file in the root of the repo:

OPENAI_API_KEY=sk-...
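The library presumably picks up OPENAI_API_KEY from the environment. If your runner does not load .env automatically, here is a minimal sketch using python-dotenv (an assumption about your setup, not necessarily what latteries does internally):

import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed

load_dotenv()  # reads .env from the current working directory
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"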

Installation as a package.

Alternatively, you can install the package and use it as a library without the example scripts.

pip install latteries

My workflow

  • I want to call LLM APIs like normal python.
  • This is a library, not a framework. Frameworks make you declare magical things in configs and functions; this is just a collection of tools I find useful.
  • Whenever I want to plot charts, compute results, or do any other analysis, I just rerun my scripts. Results are cached by the content of the prompts and the inference config, which keeps me fast in getting results out.

Core functionality - caching

from latteries import load_openai_caller, ChatHistory, InferenceConfig


async def example_main():
    # Cache to the folder "cache"
    caller = load_openai_caller("cache")
    prompt = ChatHistory.from_user("How many letter 'r's are in the word 'strawberry'?")
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    # This cache is based on the hash of the prompt and the InferenceConfig.
    response = await caller.call(prompt, config)
    print(response.first_response)


if __name__ == "__main__":
    import asyncio

    asyncio.run(example_main())
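Because the cache key is the hash of the prompt plus the InferenceConfig, rerunning the same call is a cache hit. A minimal sketch of that behavior, using the same API as above (the instant second call is the expected behavior, not a measured guarantee):

async def example_cache_hit():
    caller = load_openai_caller("cache")
    prompt = ChatHistory.from_user("Name a prime number.")
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    first = await caller.call(prompt, config)  # hits the API, writes to "cache"
    second = await caller.call(prompt, config)  # same prompt + config: served from cache
    assert first.first_response == second.first_response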

Core functionality - call LLMs in parallel

from slist import Slist


async def example_parallel_tqdm():
    caller = load_openai_caller("cache")
    fifty_prompts = [f"What is {i} * {i + 1}?" for i in range(50)]
    prompts = [ChatHistory.from_user(prompt) for prompt in fifty_prompts]
    config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    # Slist is a library with a bunch of typed list functions.
    # par_map_async runs async functions in parallel.
    results = await Slist(prompts).par_map_async(
        lambda prompt: caller.call(prompt, config),
        max_par=10,  # Parallelism limit.
        tqdm=True,  # Shows a tqdm progress bar.
    )
    result_strings = [result.first_response for result in results]
    print(result_strings)
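If you have not seen Slist before, here is a toy sketch of par_map_async on its own (the double function and the sleep are made up for illustration):

import asyncio

from slist import Slist


async def double(x: int) -> int:
    await asyncio.sleep(0.1)  # stand-in for a slow API call
    return x * 2


async def toy_parallel():
    # Runs at most 2 coroutines at a time.
    doubled = await Slist([1, 2, 3, 4]).par_map_async(double, max_par=2)
    print(doubled)  # [2, 4, 6, 8]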

Core functionality - support of different model providers

  • You often need to call models on OpenRouter, or use a different API client such as Anthropic's.
  • I use MultiClientCaller, which routes by matching on the model name. Make a copy of this to match the routing logic you want.
  • See full example.
# Imports assumed for this sketch; exact import paths may differ, see the full example in the repo.
import os
from pathlib import Path

from openai import AsyncOpenAI
from latteries import (
    AnthropicCaller,
    CacheByModel,
    CallerConfig,
    MultiClientCaller,
    OpenAICaller,
)


def load_multi_client(cache_path: str) -> MultiClientCaller:
    """Routes to a caller by matching on the model name."""
    openai_api_key = os.getenv("OPENAI_API_KEY")
    openrouter_api_key = os.getenv("OPENROUTER_API_KEY")
    anthropic_api_key = os.getenv("ANTHROPIC_API_KEY")
    shared_cache = CacheByModel(Path(cache_path))
    openai_caller = OpenAICaller(api_key=openai_api_key, cache_path=shared_cache)
    openrouter_caller = OpenAICaller(
        openai_client=AsyncOpenAI(api_key=openrouter_api_key, base_url="https://openrouter.ai/api/v1"),
        cache_path=shared_cache,
    )
    anthropic_caller = AnthropicCaller(api_key=anthropic_api_key, cache_path=shared_cache)

    # Define rules for routing models.
    clients = [
        CallerConfig(name="gpt", caller=openai_caller),
        CallerConfig(name="gemini-2.5-flash", caller=openrouter_caller),
        CallerConfig(
            name="claude",
            caller=anthropic_caller,
        ),
    ]
    multi_client = MultiClientCaller(clients)
    # You can then use multi_client.call(prompt, config), which routes to the right caller based on the model name.
    return multi_client
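A sketch of using the multi client. The model names here are illustrative; routing matches on the name substrings configured above:

async def example_multi_client():
    caller = load_multi_client("cache")
    prompt = ChatHistory.from_user("Hello!")
    gpt_config = InferenceConfig(temperature=0.0, max_tokens=100, model="gpt-4o")
    claude_config = InferenceConfig(temperature=0.0, max_tokens=100, model="claude-3-5-sonnet-20240620")
    gpt_response = await caller.call(prompt, gpt_config)  # matches "gpt" -> openai_caller
    claude_response = await caller.call(prompt, claude_config)  # matches "claude" -> anthropic_caller
    print(gpt_response.first_response, claude_response.first_response)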

Viewing model outputs

We have a simple tool to view conversations stored in a JSONL format of "user" and "assistant" messages. My workflow is to dump the conversations to a JSONL file and then view them:

latteries-viewer <path_to_jsonl_file>
[Viewer screenshot]
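A sketch of dumping conversations into a JSONL file for the viewer. The exact schema the viewer expects is an assumption here (one conversation per line, with role/content messages); check the repo if it differs:

import json

conversations = [
    [
        {"role": "user", "content": "How many letter 'r's are in 'strawberry'?"},
        {"role": "assistant", "content": "Three."},
    ],
]
with open("outputs.jsonl", "w") as f:
    for convo in conversations:
        f.write(json.dumps({"messages": convo}) + "\n")
# Then: latteries-viewer outputs.jsonl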

Example scripts

These are evaluations of multiple models that create charts with error bars.

FAQ

What if I want to repeat the same prompt without caching?

Do you have support for JSON schema calling?

Do you have support for log probs?

How do I delete my cache?

  • Just delete the folder that you've been caching to.

What is the difference between this and xxxx?

  • TODO

General philosophy on evals engineering.

TODO: Elaborate

  • Don't mutate Python objects. It causes bugs. Please copy / deepcopy things like configs and prompts.
  • Python is a scripting language. Use it to write your scripts!!! Avoid writing complicated bash files when you can just write Python.
  • I hate YAML. More specifically, I hate YAML that becomes a programming language. Sorry. I just want to press "Go to references" in VSCode / Cursor and jump to where something gets referenced. YAML does not do that.
  • Keep objects as pydantic BaseModels / dataclasses. Avoid passing data around as pandas dataframes. No one (including your coding agent) knows what is in a dataframe. Hard to read. Also can be lossy (losing types). If you want to store intermediate data, use JSONL (see the sketch after this list).
  • Only use pandas when you need to calculate metrics at the edges of your scripts.
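A minimal sketch of the pydantic-plus-JSONL pattern from the bullets above (EvalResult and the file name are made up for illustration):

from pydantic import BaseModel


class EvalResult(BaseModel):  # hypothetical intermediate record
    model: str
    prompt: str
    response: str
    correct: bool


def write_results(results: list[EvalResult], path: str) -> None:
    with open(path, "w") as f:
        for result in results:
            f.write(result.model_dump_json() + "\n")


def read_results(path: str) -> list[EvalResult]:
    with open(path) as f:
        # Typed round-trip: every line validates against the schema.
        return [EvalResult.model_validate_json(line) for line in f]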
