Project description
LLMTrack is a Python package for enabling caching (avoiding repeated API calls) and recording token usage on a per-model basis.
Why do we call it "per-model"? Because token usage and cache are tracked for each model separately under the root directory, following the rule
{root_dir}/{client name}/{model name}.
Installation
pip install llmtrack
Root Directory for Saving Cache and Token Usage
By default, the root directory for saving cache and token usage is the current working directory (os.getcwd()). You can change it, as follows:
from llmtrack import set_root_dir, get_root_dir
set_root_dir("~/my_project/llmtrack")
print(get_root_dir())
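For illustration only, here is how the {root_dir}/{client name}/{model name} rule resolves to a path on disk (llmtrack manages these paths internally; this sketch just shows the layout):

```python
import os

# Illustration of the storage layout, not llmtrack's internal code.
root_dir = os.path.expanduser("~/my_project/llmtrack")
client_name = "openai"
model_name = "gpt-4o-mini"

# Files for this model end up under {root_dir}/{client name}/{model name}.
storage_dir = os.path.join(root_dir, client_name, model_name)
print(storage_dir)  # e.g. /home/you/my_project/llmtrack/openai/gpt-4o-mini
```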
Caching and Recording Token Usage
You can use get_llm to get a language model instance, as follows:
from llmtrack import get_llm
client_name = "openai"
model_name = "gpt-4o-mini"
llm = get_llm(f"{client_name}/{model_name}", cache=True, token_usage=True)
usr_message = "ONLY generate a positive word"
client_response = llm.respond(usr_message, verbal=True)
After running the code above, the cache and token-usage files will be stored in ~/my_project/llmtrack/openai/gpt-4o-mini, following the rule {root_dir}/{client name}/{model name}. Now, if you invoke the same model with the same prompt, the cache will be used.
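The caching behavior can be sketched with plain memoization (this is not llmtrack's actual implementation, just the idea: an identical prompt is served from the cache instead of triggering another billable API call):

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=None)
def respond(prompt: str) -> str:
    # This increment stands in for a billable API request.
    calls["count"] += 1
    return f"response to: {prompt}"

respond("ONLY generate a positive word")  # first call hits the "API"
respond("ONLY generate a positive word")  # second call is served from cache
print(calls["count"])  # 1
```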
You can check the token usage and cache by:
# check token usage
print('Token Usage')
usage = llm.token_usage
print(usage)
# check cache
print('\n\nCache')
cache_key = llm.get_cache_key(usr_message)
print(llm.cache[cache_key])
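As a hypothetical sketch of what a cache key derivation could look like (llmtrack's actual get_cache_key may differ), hashing the message yields a fixed-length, filesystem-safe key:

```python
import hashlib

def make_cache_key(message: str) -> str:
    # Hash the prompt so the key is deterministic and safe to use
    # as a filename or dictionary key. Hypothetical helper, for
    # illustration only.
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

key = make_cache_key("ONLY generate a positive word")
print(len(key))  # 64 hex characters
```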
Supported Clients and Model Names
Public LLM APIs are selected by a model_name string that combines the API provider and the model name. The supported APIs include:
- OpenAI, e.g., "openai/xxxx" (replace xxxx with a specific model name)
  - Required environment variable: OPENAI_API_KEY
  - All available model names: see the OpenAI documentation
- Azure OpenAI, e.g., "azure_openai/chatgpt-4k"
  - Required environment variables: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_VERSION
  - Ask your provider for specific model names
- MoonShot, e.g., "moonshot/moonshot-v1-8k"
  - Required environment variable: MOONSHOT_API_KEY
- Groq
  - All available model names: see the Groq documentation
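The required environment variables can be set in the shell (e.g., export OPENAI_API_KEY=...) or from Python before creating the client. A minimal sketch, using a placeholder value rather than a real credential:

```python
import os

# Set the key only if it is not already configured in the environment.
# "sk-your-key-here" is a placeholder, not a real credential.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")
print("OPENAI_API_KEY" in os.environ)  # True
```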
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file llmtrack-1.0.0.tar.gz.
File metadata
- Download URL: llmtrack-1.0.0.tar.gz
- Upload date:
- Size: 11.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 61d6ba66c6b9da476725078777dcd4b13117f60d0c26c292716749f9d0455350 |
| MD5 | fd8c6efb64ea70a24083dd3265914716 |
| BLAKE2b-256 | 664776d3499ed34887ca129273c32fa23978b43d62d800a99f03294c287eea93 |
File details
Details for the file llmtrack-1.0.0-py3-none-any.whl.
File metadata
- Download URL: llmtrack-1.0.0-py3-none-any.whl
- Upload date:
- Size: 11.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.12.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | dbc3f13f28acc2ab6fcabfb05f11dd54228ed0445f767ad7510b7dfbf72e0b6b |
| MD5 | 573665ca91c2d4d4d3ee242de2bebecf |
| BLAKE2b-256 | 4c51a15c62231d1d8cfa76e0beaa40580df521edf5132dc95608fbfdae1a60a5 |