
Artanis Python SDK

Artanis SDK for AI application observability - understand failures, build evaluation sets, and act on user feedback.

Installation

pip install artanis-ai

Quick Start

from artanis import Artanis

# Initialize client
artanis = Artanis(api_key="sk_...")

# Create a trace
trace = artanis.trace("answer-question")
trace.input(question="What is AI?", model="gpt-4")
trace.output("AI stands for Artificial Intelligence")

# Record feedback
artanis.feedback(trace.id, rating="positive")

Configuration

API Key

Provide your API key either explicitly or via environment variable:

# Explicit
artanis = Artanis(api_key="sk_...")

# Environment variable
export ARTANIS_API_KEY="sk_..."
artanis = Artanis()

Options

artanis = Artanis(
    api_key="sk_...",              # Required (or ARTANIS_API_KEY env var)
    base_url="https://app.artanis.ai",  # Optional: custom API endpoint
    enabled=True,                  # Optional: enable/disable tracing
    debug=False,                   # Optional: enable debug logging
    on_error=lambda e: print(e)    # Optional: error callback
)

Environment Variables

Variable           Default                  Description
ARTANIS_API_KEY    (required)               Your API key
ARTANIS_BASE_URL   https://app.artanis.ai   API endpoint
ARTANIS_ENABLED    true                     Enable/disable tracing
ARTANIS_DEBUG      false                    Enable debug logging
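As a point of reference, a fully environment-driven setup (all values below are placeholders) might look like:

```shell
export ARTANIS_API_KEY="sk_..."
export ARTANIS_BASE_URL="https://app.artanis.ai"
export ARTANIS_ENABLED="true"
export ARTANIS_DEBUG="false"
```

With these set, the client can be constructed with no arguments: `artanis = Artanis()`.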

Usage

Basic Tracing

trace = artanis.trace("operation-name")
trace.input(question="...", context="...")
# ... perform operation ...
trace.output(result)

With Metadata

trace = artanis.trace(
    "answer-question",
    metadata={
        "user_id": "user-123",
        "session_id": "session-456"
    }
)

Capturing State for Replay

trace = artanis.trace("rag-query")

# Capture document state
trace.state("documents", [{"id": "doc1", "score": 0.95}])

# Capture configuration
trace.state("config", {"model": "gpt-4", "temperature": 0.7})

# Record inputs and output
trace.input(query="...", prompt="...")
trace.output(response)

Error Handling

trace = artanis.trace("risky-operation")
trace.input(data=input_data)

try:
    result = process(input_data)
    trace.output(result)
except Exception as e:
    trace.error(str(e))
    raise

Context Manager

with artanis.trace("operation") as trace:
    trace.input(data=...)
    result = perform_operation()
    trace.output(result)
# Automatically sends trace on exit

Method Chaining

artanis.trace("operation")\
    .input(question="What is AI?")\
    .state("config", {"model": "gpt-4"})\
    .output("AI stands for Artificial Intelligence")

Feedback

# Binary feedback
artanis.feedback(trace.id, rating="positive")
artanis.feedback(trace.id, rating="negative")

# Numeric rating (0.0-1.0)
artanis.feedback(trace.id, rating=0.85)

# With comment
artanis.feedback(
    trace.id,
    rating="negative",
    comment="The answer was incorrect"
)

# With correction
artanis.feedback(
    trace.id,
    rating="negative",
    correction={"answer": "The correct answer is..."}
)

Complete Example: RAG Pipeline

from artanis import Artanis

artanis = Artanis()

# Assumes your application provides load_documents, retriever,
# build_prompt, and llm.

def answer_question(question: str, user_id: str):
    # Create trace with metadata
    trace = artanis.trace(
        "rag-answer",
        metadata={"user_id": user_id}
    )

    # Capture document corpus state
    corpus = load_documents()
    trace.state("corpus", [doc.id for doc in corpus])

    # Retrieve relevant chunks
    chunks = retriever.search(question)
    trace.state("chunks", [
        {"id": c.id, "score": c.score}
        for c in chunks
    ])

    # Generate response
    prompt = build_prompt(question, chunks)
    trace.input(
        question=question,
        prompt=prompt,
        model="gpt-4"
    )

    response = llm.generate(prompt)
    trace.output(response)

    return response, trace.id

# Later, collect feedback
answer, trace_id = answer_question("What is AI?", "user-123")
print(answer)

# User provides feedback
artanis.feedback(trace_id, rating="positive")

Testing

Disable tracing in tests:

# Option 1: Environment variable
export ARTANIS_ENABLED=false

# Option 2: Explicit configuration
artanis = Artanis(enabled=False)
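For a whole test suite, one option (a sketch, not an official helper) is to set the variable once in your test bootstrap, before any client is constructed:

```python
import os

def disable_artanis_for_tests():
    # Must run before Artanis() is constructed anywhere in the suite,
    # e.g. from conftest.py at import time.
    os.environ["ARTANIS_ENABLED"] = "false"

disable_artanis_for_tests()
```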

Performance

  • P50 overhead: < 0.05ms per operation
  • P99 overhead: < 0.5ms per operation
  • All network operations are non-blocking (fire-and-forget)
  • No retries or queueing, which keeps memory use bounded
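These figures will vary by machine. A rough way to check them locally is a percentile timer like the sketch below; the no-op lambda is a baseline, and any SDK call you swap in (as suggested in the comment) is up to you:

```python
import time

def percentile_latency_ms(fn, iterations=10_000, percentile=0.99):
    """Call fn repeatedly and return the given latency percentile in ms."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[int(percentile * (len(samples) - 1))]

# Baseline with a no-op; swap in something like
# `lambda: artanis.trace("bench").input(x=1)` to measure SDK overhead.
print(f"p99 no-op baseline: {percentile_latency_ms(lambda: None):.4f} ms")
```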

Error Handling Philosophy

The SDK never raises exceptions. All errors are handled silently so that observability never breaks production:

  • Invalid API key → traces dropped, error logged (if debug)
  • Network failure → traces dropped silently
  • Payload too large → trace dropped, error logged

Use the on_error callback to monitor SDK errors:

def handle_error(error: Exception):
    logger.warning(f"Artanis error: {error}")

artanis = Artanis(on_error=handle_error)

Development

Setup

cd python
pip install -e ".[dev]"

Note: Package name is artanis-ai on PyPI, but import name is still artanis.

Run Tests

pytest
pytest --cov=artanis  # With coverage

Format Code

black artanis tests
ruff check artanis tests

Type Checking

mypy artanis

License

MIT License - see LICENSE file for details.
