Track OpenAI, Claude, Gemini, and OpenAI-compatible models, and get suggestions to improve your agent system.
Project description
AT
AT: Track, log, and evaluate AI models. Supports OpenAI, Claude, the Google API, and custom PyTorch models.
Our goal is to make LLM applications more valuable and to improve LLM capabilities with minimal effort.
Quickstart
You will be able to install AT with pip (the PyPI package is not published yet):
pip install aitrace
Or install from source:
git clone https://github.com/yanghui1-arch/AITrace.git
cd AITrace
pip install -e .
Then configure AT through the CLI:
aitrace configure
This requires an AITrace API key, which you can get after logging in at http://localhost:5173.
Finally, use @track to trace your LLM inputs and outputs:
from aitrace import track, LLMProvider
from openai import OpenAI

openai_apikey = 'YOUR API KEY'

@track(
    project_name="aitrace_demo",
    tags=['test', 'demo'],
    track_llm=LLMProvider.OPENAI,
)
def llm_classification(film_comment: str):
    prompt = "Please classify the film comment as happy, sad, or other. Just tell me the result. Don't output anything else."
    cli = OpenAI(base_url='https://api.deepseek.com', api_key=openai_apikey)
    classification = cli.chat.completions.create(
        messages=[{"role": "user", "content": f"{prompt}\nfilm_comment: {film_comment}"}],
        model="deepseek-chat"
    ).choices[0].message.content
    word_count = llm_counts(film_comment=film_comment)
    return classification, word_count

@track(
    project_name="aitrace_demo",
    tags=['test', 'demo', 'second_demo'],
    track_llm=LLMProvider.OPENAI,
)
def llm_counts(film_comment: str):
    prompt = "Count the words in the film comment. Output only the word count. Don't output anything else."
    cli = OpenAI(base_url='https://api.deepseek.com', api_key=openai_apikey)
    return cli.chat.completions.create(
        messages=[{"role": "user", "content": f"{prompt}\nfilm_comment: {film_comment}"}],
        model="deepseek-chat"
    ).choices[0].message.content

llm_classification("Wow! It sucks.")
This will output your LLM trace. Visualization is not supported yet but is under active development. All contributions are welcome.
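For the curious, the general mechanism behind a @track-style decorator can be pictured as a wrapper that records a function's inputs and outputs and forwards the record to a backend. The sketch below is a hypothetical illustration only (the name track_sketch and the trace fields are invented here); it is not AITrace's actual implementation:

```python
import functools
import time

def track_sketch(project_name: str, tags=None):
    """Hypothetical illustration of what a tracking decorator might record."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            trace = {
                "project": project_name,
                "tags": tags or [],
                "function": fn.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": result,
                "duration_s": time.time() - start,
            }
            print(trace)  # a real tracker would send this to its backend
            return result
        return wrapper
    return decorator

@track_sketch("aitrace_demo", tags=["demo"])
def add(a, b):
    return a + b

add(1, 2)
```

Because the wrapper returns the original result unchanged, decorated functions behave exactly as before; only the side-channel trace is added.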
Development
The AT project uses uv as its package manager. If you are new to uv, see the official documentation: uv official link
uv sync
.venv\Scripts\activate (Windows) or source .venv/bin/activate (Linux and macOS)
You can see more detailed debug information by passing --log-level=DEBUG, or by setting the environment variable: set AT_LOG_LEVEL=DEBUG on Windows, or export AT_LOG_LEVEL=DEBUG on Linux and macOS.
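The same environment variable can also be set from Python, which is convenient in notebooks. Assuming AT reads AT_LOG_LEVEL when it is imported, set it before the import:

```python
import os

# Equivalent to `export AT_LOG_LEVEL=DEBUG`; run this before importing aitrace.
os.environ["AT_LOG_LEVEL"] = "DEBUG"
```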
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file mwin-0.1.1.tar.gz.
File metadata
- Download URL: mwin-0.1.1.tar.gz
- Upload date:
- Size: 31.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.17
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 859e9dd245efabcb6373eaf8dc590cbb132e498512ad910757e6b9295a480989 |
| MD5 | dc04a5a3d2ce60c5c490f1339d862d3f |
| BLAKE2b-256 | d17bd0bd2dc8e7bf5e70d79fcd5ecac56931ccabf117b093c10291b3ef465484 |
File details
Details for the file mwin-0.1.1-py3-none-any.whl.
File metadata
- Download URL: mwin-0.1.1-py3-none-any.whl
- Upload date:
- Size: 40.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.17
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1f809eb4f05776a6fa6080d7b613044be95ae5393065f51ed9b6565f6d433b08 |
| MD5 | 189b01c64b7f8de52ad4e9ec1550cb61 |
| BLAKE2b-256 | 2a081b5c5701e4af3d1c62bcbbd2b6db94268acb3896bd2f8cba0687a892db5d |