LLMTrack

LLMTrack is a Python package that adds caching (avoiding repeated API calls) and token-usage recording to LLM clients, on a per-model basis.

Why "per-model"? Token usage and the cache are tracked separately for each model and stored under the root directory, following the rule {root_dir}/{client name}/{model name}.

Installation

pip install llmtrack

Root Directory for Saving Cache and Token Usage

By default, the root directory for saving cache and token usage is the current working directory (os.getcwd()). You can change it as follows:

from llmtrack import set_root_dir, get_root_dir
set_root_dir("~/my_project/llmtrack")
print(get_root_dir())
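llmtrack's internals are not shown on this page; as a rough sketch of the behavior described above (default to os.getcwd(), allow overriding, and presumably expand ~), the pair of functions might look like this. Everything below is a hypothetical illustration, not llmtrack's actual code:

```python
import os

# Hypothetical sketch: a module-level setting that defaults to the
# current working directory, as described above.
_root_dir = os.getcwd()

def set_root_dir(path):
    """Override the root directory; '~' expansion is an assumption."""
    global _root_dir
    _root_dir = os.path.expanduser(path)

def get_root_dir():
    """Return the directory where cache and token-usage files are stored."""
    return _root_dir

set_root_dir("~/my_project/llmtrack")
print(get_root_dir())  # e.g. /home/<user>/my_project/llmtrack
```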

Caching and Recording Token Usage

You can use get_llm to obtain a language model instance as follows:

from llmtrack import get_llm
client_name = "openai"
model_name = "gpt-4o-mini"
llm = get_llm(f"{client_name}/{model_name}", cache=True, token_usage=True)
usr_message = "ONLY generate a positive word"
client_response = llm.respond(usr_message, verbal=True)

After running the code above, the cache and token-usage files will be stored in ~/my_project/llmtrack/openai/gpt-4o-mini, following the rule {root_dir}/{client name}/{model name}. Now, if you invoke the same model with the same prompt, the cache will be used.
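The {root_dir}/{client name}/{model name} rule above can be sketched as a small path helper (a hypothetical illustration, not llmtrack's actual code):

```python
import os

def model_dir(root_dir, model_spec):
    """Map a 'client/model' spec to its per-model storage directory,
    following the rule {root_dir}/{client name}/{model name}."""
    client_name, model_name = model_spec.split("/", 1)
    return os.path.join(root_dir, client_name, model_name)

root = os.path.expanduser("~/my_project/llmtrack")
print(model_dir(root, "openai/gpt-4o-mini"))
# e.g. /home/<user>/my_project/llmtrack/openai/gpt-4o-mini
```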

You can check the token usage and cache by:

# check token usage
print('Token Usage')
usage = llm.token_usage
print(usage)

# check cache
print('\n\nCache')
cache_key = llm.get_cache_key(usr_message)
print(llm.cache[cache_key])
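llmtrack does not document how get_cache_key derives its key. One common scheme, used here purely as an assumption, is hashing the prompt text, so identical prompts always map to the same cache entry:

```python
import hashlib

def get_cache_key(message):
    """Hypothetical cache key: SHA-256 of the prompt text, so the same
    prompt always hits the same cache entry."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

cache = {}
key = get_cache_key("ONLY generate a positive word")
cache[key] = "Wonderful"  # first call: store the API response
# Subsequent identical prompts find the stored response instead of
# triggering a new API call.
print(cache[get_cache_key("ONLY generate a positive word")])
```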

Supported Clients and Model Names

A public LLM API is selected simply through the model_name argument, which combines an API provider (client) and a model name. The supported APIs include:

  • OpenAI, e.g., "openai/xxxx" (replace xxxx with a specific model name)
    • The environment variable OPENAI_API_KEY must be set
  • Azure OpenAI, e.g., "azure_openai/chatgpt-4k"
    • The environment variables AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and AZURE_OPENAI_API_VERSION must be set
    • Ask your provider for specific model names
  • MoonShot, e.g., "moonshot/moonshot-v1-8k"
    • The environment variable MOONSHOT_API_KEY must be set
  • Groq
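Putting the list together: the client prefix of model_name determines which environment variables must be set. A sketch of that check follows; the variable names come from the list above, while the helper itself is hypothetical and not part of llmtrack:

```python
import os

# Environment variables required per client, taken from the list above.
REQUIRED_ENV = {
    "openai": ["OPENAI_API_KEY"],
    "azure_openai": ["AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_API_KEY",
                     "AZURE_OPENAI_API_VERSION"],
    "moonshot": ["MOONSHOT_API_KEY"],
}

def missing_env(model_spec):
    """Return the required-but-unset environment variables for a
    'client/model' spec (hypothetical helper)."""
    client_name = model_spec.split("/", 1)[0]
    required = REQUIRED_ENV.get(client_name, [])
    return [var for var in required if var not in os.environ]

print(missing_env("openai/gpt-4o-mini"))  # [] once OPENAI_API_KEY is set
```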
