# Artanis Python SDK

Artanis SDK for AI application observability - understand failures, build evaluation sets, and act on user feedback.
## Installation

```bash
pip install artanis-ai
```
## Quick Start

```python
from artanis import Artanis

# Initialize client
artanis = Artanis(api_key="sk_...")

# Create a trace
trace = artanis.trace("answer-question")
trace.input(question="What is AI?", model="gpt-4")
trace.output("AI stands for Artificial Intelligence")

# Record feedback
artanis.feedback(trace.id, rating="positive")
```
## Configuration

### API Key

Provide your API key either explicitly or via an environment variable:

```python
# Explicit
artanis = Artanis(api_key="sk_...")
```

```bash
# Environment variable
export ARTANIS_API_KEY="sk_..."
```

```python
artanis = Artanis()
```
### Options

```python
artanis = Artanis(
    api_key="sk_...",                   # Required (or ARTANIS_API_KEY env var)
    base_url="https://app.artanis.ai",  # Optional: custom API endpoint
    enabled=True,                       # Optional: enable/disable tracing
    debug=False,                        # Optional: enable debug logging
    on_error=lambda e: print(e),        # Optional: error callback
)
```
### Environment Variables

| Variable | Default | Description |
|---|---|---|
| `ARTANIS_API_KEY` | Required | Your API key |
| `ARTANIS_BASE_URL` | `https://app.artanis.ai` | API endpoint |
| `ARTANIS_ENABLED` | `true` | Enable/disable tracing |
| `ARTANIS_DEBUG` | `false` | Enable debug logging |
## Usage

### Basic Tracing

```python
trace = artanis.trace("operation-name")
trace.input(question="...", context="...")
# ... perform operation ...
trace.output(result)
```
### With Metadata

```python
trace = artanis.trace(
    "answer-question",
    metadata={
        "user_id": "user-123",
        "session_id": "session-456",
    },
)
```
### Capturing State for Replay

```python
trace = artanis.trace("rag-query")

# Capture document state
trace.state("documents", [{"id": "doc1", "score": 0.95}])

# Capture configuration
trace.state("config", {"model": "gpt-4", "temperature": 0.7})

# Record inputs and output
trace.input(query="...", prompt="...")
trace.output(response)
```
### Error Handling

```python
trace = artanis.trace("risky-operation")
trace.input(data=input_data)

try:
    result = process(input_data)
    trace.output(result)
except Exception as e:
    trace.error(str(e))
    raise
```
### Context Manager

```python
with artanis.trace("operation") as trace:
    trace.input(data=...)
    result = perform_operation()
    trace.output(result)
# Automatically sends the trace on exit
```
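The auto-send-on-exit behavior is the standard `__enter__`/`__exit__` pattern. A minimal sketch of how such a context manager could work (`SketchTrace` is illustrative, not the SDK's actual class):

```python
class SketchTrace:
    """Toy trace showing the __enter__/__exit__ auto-send pattern."""

    def __init__(self, name):
        self.name = name
        self.data = {}
        self.sent = False

    def input(self, **kwargs):
        self.data["input"] = kwargs

    def output(self, value):
        self.data["output"] = value

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.sent = True   # send the trace on exit, success or failure
        return False       # never suppress the caller's exceptions


with SketchTrace("operation") as trace:
    trace.input(data=42)
    trace.output("done")
```

Returning `False` from `__exit__` matters: the trace is recorded even when the block raises, but the exception still propagates to the caller.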
### Method Chaining

```python
artanis.trace("operation") \
    .input(question="What is AI?") \
    .state("config", {"model": "gpt-4"}) \
    .output("AI stands for Artificial Intelligence")
```
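Chaining works because each recording method returns the trace object itself. A minimal sketch of the pattern (not the SDK's implementation):

```python
class ChainableTrace:
    """Toy trace where every recorder returns self, enabling chaining."""

    def __init__(self, name):
        self.name = name
        self.events = []

    def input(self, **kwargs):
        self.events.append(("input", kwargs))
        return self

    def state(self, key, value):
        self.events.append(("state", (key, value)))
        return self

    def output(self, value):
        self.events.append(("output", value))
        return self


t = (
    ChainableTrace("operation")
    .input(question="What is AI?")
    .state("config", {"model": "gpt-4"})
    .output("AI stands for Artificial Intelligence")
)
```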
## Feedback

```python
# Binary feedback
artanis.feedback(trace.id, rating="positive")
artanis.feedback(trace.id, rating="negative")

# Numeric rating (0.0-1.0)
artanis.feedback(trace.id, rating=0.85)

# With a comment
artanis.feedback(
    trace.id,
    rating="negative",
    comment="The answer was incorrect",
)

# With a correction
artanis.feedback(
    trace.id,
    rating="negative",
    correction={"answer": "The correct answer is..."},
)
```
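One way to reason about the two rating forms is to map the labels onto the numeric 0.0-1.0 scale. `normalize_rating` below is a hypothetical helper, not an SDK function; in keeping with the SDK's no-throw philosophy, it returns `None` for invalid input instead of raising:

```python
def normalize_rating(rating):
    """Map "positive"/"negative" onto the 0.0-1.0 scale; None if invalid."""
    if rating == "positive":
        return 1.0
    if rating == "negative":
        return 0.0
    # Exclude bools explicitly: isinstance(True, int) is True in Python.
    if isinstance(rating, (int, float)) and not isinstance(rating, bool):
        if 0.0 <= rating <= 1.0:
            return float(rating)
    return None
```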
## Complete Example: RAG Pipeline

```python
from artanis import Artanis

artanis = Artanis()


def answer_question(question: str, user_id: str):
    # Create trace with metadata
    trace = artanis.trace(
        "rag-answer",
        metadata={"user_id": user_id},
    )

    # Capture document corpus state
    corpus = load_documents()
    trace.state("corpus", [doc.id for doc in corpus])

    # Retrieve relevant chunks
    chunks = retriever.search(question)
    trace.state("chunks", [
        {"id": c.id, "score": c.score}
        for c in chunks
    ])

    # Generate response
    prompt = build_prompt(question, chunks)
    trace.input(
        question=question,
        prompt=prompt,
        model="gpt-4",
    )
    response = llm.generate(prompt)
    trace.output(response)

    return response, trace.id


# Later, collect feedback
answer, trace_id = answer_question("What is AI?", "user-123")
print(answer)

# User provides feedback
artanis.feedback(trace_id, rating="positive")
```
## Testing

Disable tracing in tests:

```bash
# Option 1: Environment variable
export ARTANIS_ENABLED=false
```

```python
# Option 2: Explicit configuration
artanis = Artanis(enabled=False)
```
## Performance

- P50 overhead: < 0.05 ms per operation
- P99 overhead: < 0.5 ms per operation
- All network operations are non-blocking (fire-and-forget)
- No retries or queueing, to avoid unbounded memory growth
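The non-blocking, no-retry behavior can be pictured as a daemon worker draining a bounded queue: the caller never waits, and overflow or send failure simply drops the event. This is a simplified sketch under those assumptions, not the SDK's actual transport:

```python
import queue
import threading


class FireAndForgetSender:
    """Sketch of a non-blocking sender: bounded queue, drop on overflow, no retries."""

    def __init__(self, send, maxsize=1024):
        self._send = send
        self._queue = queue.Queue(maxsize=maxsize)
        threading.Thread(target=self._drain, daemon=True).start()

    def submit(self, event) -> bool:
        try:
            self._queue.put_nowait(event)  # returns immediately, never blocks
            return True
        except queue.Full:
            return False  # dropped on overflow: no retries, no unbounded buffering

    def _drain(self):
        while True:
            event = self._queue.get()
            try:
                self._send(event)
            except Exception:
                pass  # send errors are swallowed; the event is simply lost
```

Bounding the queue and dropping on overflow is what keeps memory flat under backpressure, at the cost of losing traces when the backend is slow.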
## Error Handling Philosophy

The SDK never raises exceptions to the caller. All errors are handled internally, so observability can never break production:

- Invalid API key → traces dropped, error logged (if debug is enabled)
- Network failure → traces dropped silently
- Payload too large → trace dropped, error logged

Use the `on_error` callback to monitor SDK errors:

```python
def handle_error(error: Exception):
    logger.warning(f"Artanis error: {error}")

artanis = Artanis(on_error=handle_error)
```
## Development

### Setup

```bash
cd python
pip install -e ".[dev]"
```

Note: the package name on PyPI is `artanis-ai`, but the import name is `artanis`.

### Run Tests

```bash
pytest
pytest --cov=artanis  # With coverage
```

### Format Code

```bash
black artanis tests
ruff check artanis tests
```

### Type Checking

```bash
mypy artanis
```
## Support

- Documentation: https://docs.artanis.ai
- GitHub: https://github.com/artanis-ai/sdk
- Email: team@artanis.ai

## License

MIT License - see the LICENSE file for details.