reclm

Record your LLM calls and make your notebooks fast again.
When building AI-based tooling and packages, we often call LLMs while prototyping and testing our code. A single LLM call can take hundreds of milliseconds to run, and the output isn't deterministic. This can really slow down development, especially if our notebook contains many LLM calls 😞.
While LLMs are new, working with external APIs in our code isn't. Plenty of tooling already exists that makes working with APIs much easier. For example, Python's `unittest.mock` is commonly used to simulate, or mock, an API call so that it returns a hardcoded response. This works really well in the traditional Python development workflow and can make our tests fast and predictable.
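As a sketch of that pattern, here's how a chat completion might be mocked against the OpenAI v1 SDK (the patch target assumes the v1 module layout; the canned reply, model name, and prompt are made up for the example):

```python
from unittest.mock import MagicMock, patch

from openai import OpenAI

# Canned response shaped like the SDK's ChatCompletion object.
fake_response = MagicMock()
fake_response.choices[0].message.content = "a hardcoded reply"

# Patch the SDK method so no network call is made.
with patch("openai.resources.chat.completions.Completions.create",
           return_value=fake_response):
    client = OpenAI(api_key="not-a-real-key")
    out = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "hi"}],
    )

assert out.choices[0].message.content == "a hardcoded reply"
```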
However, this approach doesn't work well in the nbdev workflow, where we often want to quickly run all cells in a notebook as we develop. While we can use mocks in our test cells, we don't want our exported code cells to be mocked. This leaves us with two choices:
- we temporarily mock our exported code cells but undo the mocking before we export these cells.
- we do nothing and just live with notebooks that take a long time to run.
Both options are pretty terrible as they pull us out of our flow state and slow down development 😞.
reclm builds on the underlying idea of mocks but adapts it to the nbdev workflow.
Usage
To use reclm:

- install the package: `pip install git+https://github.com/AnswerDotAI/reclm.git`
- import the package in each notebook: `from reclm.core import enable_reclm`
- add `enable_reclm()` to the top of each notebook
Note: `enable_reclm` should be added after you import the OpenAI and/or Anthropic SDK (see the example below).
Every LLM call you make using OpenAI/Anthropic will now be cached in `nbs/reclm.json`.
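Putting this together, a typical first cell looks something like this (the model name and prompt are illustrative; any OpenAI or Anthropic SDK call works the same way):

```python
# Top of the notebook: import the SDK first, then enable reclm.
from openai import OpenAI
from reclm.core import enable_reclm

enable_reclm()  # from here on, OpenAI/Anthropic calls are recorded and replayed

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)  # served from nbs/reclm.json once cached
```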
Tests
`nbdev_test` will automatically read from the cache. However, if your notebooks contain LLM calls that haven't been cached, `nbdev_test` will call the OpenAI/Anthropic APIs and then cache the responses.
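Because cached responses are replayed verbatim, test cells that assert on LLM output become deterministic once the call has been recorded. A hypothetical test cell (the prompt and assertion are made up for illustration):

```python
from openai import OpenAI
from reclm.core import enable_reclm

enable_reclm()
client = OpenAI()

# Flaky against the live API, but stable once the response
# has been cached in nbs/reclm.json and is replayed by nbdev_test.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Reply with the single word: pong"}],
)
assert "pong" in resp.choices[0].message.content.lower()
```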
Cleaning the cache
It is recommended that you clean the cache before committing it.
To clean the cache, run `update_reclm_cache` from your project's root directory.
Note: Your LLM request/response data is stored in your current working directory in a file called `reclm.json`. All request headers are removed, so it is safe to include this file in your version control system (e.g. git). In fact, it is expected that you'll include this file in your VCS.
File details
Details for the file reclm-0.0.1.tar.gz.
File metadata
- Download URL: reclm-0.0.1.tar.gz
- Size: 10.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 98031a1dc6d2b2aba9d78d53b4140b6587fa90b9f7f305f59d00d3720bffccef |
| MD5 | c013fea054a43814cfbbeb74b1a38e78 |
| BLAKE2b-256 | 58e3c589fafcdc4e06ef1d259103b2b7999d9eaa5661abcf58d76d338b854068 |
File details
Details for the file reclm-0.0.1-py3-none-any.whl.
File metadata
- Download URL: reclm-0.0.1-py3-none-any.whl
- Size: 9.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d828344840b054209ed5dd0a2b54602cfa928ad3b6cb2cce1d088ba99ab59be3 |
| MD5 | 34646fc93f74acd093923b8cdfa6ed9b |
| BLAKE2b-256 | 88e17623cdc056b57e9799bb85f1132f080401f62315bca9f565f540765ce0d7 |