A Python library for tracking LLM and GenAI usage and sending the usage data to Doku
Doku Python SDK - dokumetry
Doku Python SDK (`dokumetry`) is your workhorse for collecting and transmitting large language model (LLM) usage data and metrics with zero added latency. Simplicity is at the core of `dokumetry`, enabling you to kickstart comprehensive LLM observability with just two lines of code. It's designed to blend seamlessly into your projects, supporting integration with leading LLM platforms:
- ✅ OpenAI
- ✅ Anthropic
- ✅ Cohere
- ✅ Mistral
- ✅ Azure OpenAI
Deployed as the backbone for all your LLM monitoring needs, `dokumetry` channels crucial usage data directly to Doku, streamlining the tracking process. Unlock efficient and effective observability for your LLM applications with DokuMetry.
🔥 Features
- Effortless Integration: Observability comes easy with `dokumetry`. Elevate your LLM observability by integrating it into your projects with just two lines of code.
- Zero Latency Impact: We value the performance of your applications. `dokumetry` is engineered to capture and send data without hampering your application's speed, ensuring a seamless user experience.
- Customizable Data Labeling: Enhance your LLM analytics with customizable environment and application tags. `dokumetry` lets you append these labels to your data, so you can sift through your observability data with ease and view metrics in Doku segmented by these specific tags for more insightful analysis.
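To give a feel for how this style of zero-latency instrumentation can work, here is a conceptual sketch: wrap a client method so every call is timed and recorded without changing its return value. This is an illustration only (with a dummy client standing in for a real LLM SDK object); the actual `dokumetry` internals may differ.

```python
import functools
import time

def track_usage(client, method_name, records):
    """Wrap client.<method_name> so each call appends a usage record.

    The wrapped method returns exactly what the original returned,
    so instrumented code behaves the same as before.
    """
    original = getattr(client, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        response = original(*args, **kwargs)
        # Record usage after the call completes; the caller is not slowed
        # beyond the cost of one append and two clock reads.
        records.append({
            "method": method_name,
            "duration_s": time.monotonic() - start,
        })
        return response

    setattr(client, method_name, wrapper)

# Demo with a dummy client standing in for an LLM SDK object:
class DummyClient:
    def chat(self, message):
        return f"echo: {message}"

records = []
client = DummyClient()
track_usage(client, "chat", records)
print(client.chat("hello"))   # behavior unchanged: "echo: hello"
print(len(records))           # one usage record captured
```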
💿 Installation
pip install dokumetry
⚡️ Quick Start
OpenAI
from openai import OpenAI
import dokumetry
client = OpenAI(
api_key="YOUR_OPENAI_KEY"
)
# Pass the `client` object above, along with your Doku Ingester URL and API key,
# so that all OpenAI calls are automatically tracked.
dokumetry.init(llm=client, doku_url="YOUR_INGESTER_DOKU_URL", api_key="YOUR_DOKU_TOKEN")
chat_completion = client.chat.completions.create(
messages=[
{
"role": "user",
"content": "What is LLM Observability and Monitoring?",
}
],
model="gpt-3.5-turbo",
)
Anthropic
from anthropic import Anthropic
import dokumetry
client = Anthropic(
# This is the default and can be omitted
api_key="YOUR_ANTHROPIC_API_KEY",
)
# Pass the `client` object above, along with your Doku Ingester URL and API key,
# so that all Anthropic calls are automatically tracked.
dokumetry.init(llm=client, doku_url="YOUR_INGESTER_DOKU_URL", api_key="YOUR_DOKU_TOKEN")
message = client.messages.create(
max_tokens=1024,
messages=[
{
"role": "user",
"content": "What is LLM Observability and Monitoring?",
}
],
model="claude-3-opus-20240229",
)
print(message.content)
Cohere
import cohere
import dokumetry
# Initialize the Cohere client with an API key
co = cohere.Client('YOUR_COHERE_API_KEY')
# Pass the `co` object above, along with your Doku Ingester URL and API key,
# so that all Cohere calls are automatically tracked.
dokumetry.init(llm=co, doku_url="YOUR_INGESTER_DOKU_URL", api_key="YOUR_DOKU_TOKEN")
# Generate a response for a prompt
prediction = co.chat(message='What is LLM Observability and Monitoring?', model='command')
# Print the generated text
print(f'Chatbot: {prediction.text}')
Supported Parameters
Parameter | Description | Required |
---|---|---|
llm | Large language model (LLM) client object to track | Yes |
doku_url | URL of your Doku Instance | Yes |
api_key | Your Doku API key | Yes |
environment | Custom environment tag to include in your metrics | Optional |
application_name | Custom application name tag for your metrics | Optional |
skip_resp | Skip response from the Doku Ingester for faster execution | Optional |
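To illustrate how the optional parameters from the table combine with the required ones, here is a stub standing in for `dokumetry.init` (the real call needs a running Doku instance, so this sketch only assembles the arguments; the default values and the `support-chatbot` name are assumptions for illustration, not part of the library):

```python
def init_stub(llm, doku_url, api_key, environment="default",
              application_name="default", skip_resp=False):
    """Stand-in for dokumetry.init with the documented parameter names.

    Returns the assembled configuration as a dict so the combination
    of required and optional arguments is easy to inspect.
    """
    return {
        "llm": llm,
        "doku_url": doku_url,
        "api_key": api_key,
        "environment": environment,
        "application_name": application_name,
        "skip_resp": skip_resp,
    }

# Tag metrics with a custom environment and application name:
config = init_stub(
    llm=object(),                      # your OpenAI/Anthropic/Cohere client
    doku_url="YOUR_INGESTER_DOKU_URL",
    api_key="YOUR_DOKU_TOKEN",
    environment="production",
    application_name="support-chatbot",  # hypothetical app name
    skip_resp=True,                      # skip ingester response for speed
)
```

In Doku, the `environment` and `application_name` tags let you filter and segment metrics per deployment and per application.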
Semantic Versioning
This package generally follows SemVer conventions, though certain backwards-incompatible changes may be released as minor versions:
- Changes that only affect static types, without breaking runtime behavior.
- Changes to library internals which are technically public but not intended or documented for external use. (Please open a GitHub issue to let us know if you are relying on such internals.)
- Changes that we do not expect to impact the vast majority of users in practice.
We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
Requirements
Python >= 3.7 is supported.
If you are interested in other runtime environments, please open or upvote an issue on GitHub.
Security
The Doku Python Library (`dokumetry`) sends observability data over HTTP/HTTPS to the Doku Ingester, which uses key-based authentication to ensure the security of your data. Be sure to keep your API keys confidential and manage permissions diligently. Refer to our Security Policy for details.
Contributing
We welcome contributions to the Doku Python Library (`dokumetry`) project. Please refer to CONTRIBUTING for detailed guidelines on how you can participate.
License
The Doku Python Library (`dokumetry`) is available under the Apache-2.0 license.
Support
For support, issues, or feature requests, submit an issue through the GitHub issues associated with the Doku repository and add the dokumetry-python label.