
Artanis Python SDK

Artanis SDK for AI application observability - understand failures, build evaluation sets, and act on user feedback.

Installation

pip install artanis-ai

Quick Start

from artanis import Artanis

# Initialize client
artanis = Artanis(api_key="sk_...")

# Create a trace
trace = artanis.trace("answer-question")
trace.input(question="What is AI?", model="gpt-4")
trace.output("AI stands for Artificial Intelligence")

# Record feedback
artanis.feedback(trace.id, rating="positive")

Configuration

API Key

Provide your API key either explicitly or via environment variable:

# Explicit
artanis = Artanis(api_key="sk_...")

# Environment variable (set in your shell first):
#   export ARTANIS_API_KEY="sk_..."
artanis = Artanis()

Options

artanis = Artanis(
    api_key="sk_...",              # Required (or ARTANIS_API_KEY env var)
    base_url="https://app.artanis.ai",  # Optional: custom API endpoint
    enabled=True,                  # Optional: enable/disable tracing
    debug=False,                   # Optional: enable debug logging
    on_error=lambda e: print(e)    # Optional: error callback
)

Environment Variables

Variable           Default                  Description
ARTANIS_API_KEY    (required)               Your API key
ARTANIS_BASE_URL   https://app.artanis.ai   API endpoint
ARTANIS_ENABLED    true                     Enable/disable tracing
ARTANIS_DEBUG      false                    Enable debug logging

Usage

Basic Tracing

trace = artanis.trace("operation-name")
trace.input(question="...", context="...")
# ... perform operation ...
trace.output(result)

With Metadata

trace = artanis.trace(
    "answer-question",
    metadata={
        "user_id": "user-123",
        "session_id": "session-456"
    }
)

Capturing State for Replay

trace = artanis.trace("rag-query")

# Capture document state
trace.state("documents", [{"id": "doc1", "score": 0.95}])

# Capture configuration
trace.state("config", {"model": "gpt-4", "temperature": 0.7})

# Record inputs and output
trace.input(query="...", prompt="...")
trace.output(response)

Error Handling

trace = artanis.trace("risky-operation")
trace.input(data=input_data)

try:
    result = process(input_data)
    trace.output(result)
except Exception as e:
    trace.error(str(e))
    raise
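The try/except pattern above can be factored into a reusable decorator. The sketch below uses a minimal stand-in object in place of a real Artanis trace, so the wiring is illustrative rather than the SDK's own API:

```python
import functools

class _StubTrace:
    """Stand-in for a trace object exposing input/output/error methods."""
    def __init__(self):
        self.events = []
    def input(self, **kwargs):
        self.events.append(("input", kwargs))
    def output(self, result):
        self.events.append(("output", result))
    def error(self, message):
        self.events.append(("error", message))

def traced(make_trace):
    """Wrap a function so its inputs, output, and exceptions are recorded."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            trace = make_trace()
            trace.input(**kwargs)
            try:
                result = fn(**kwargs)
            except Exception as e:
                trace.error(str(e))
                raise  # re-raise so callers still see the failure
            trace.output(result)
            return result
        return wrapper
    return decorator

last_trace = _StubTrace()

@traced(lambda: last_trace)
def divide(a, b):
    return a / b

print(divide(a=10, b=2))  # 5.0
```

With a real client, `make_trace` would return `artanis.trace("risky-operation")` instead of the stub.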

Context Manager

with artanis.trace("operation") as trace:
    trace.input(data=...)
    result = perform_operation()
    trace.output(result)
# Automatically sends trace on exit

Method Chaining

artanis.trace("operation")\
    .input(question="What is AI?")\
    .state("config", {"model": "gpt-4"})\
    .output("AI stands for Artificial Intelligence")
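Chaining works when each method returns the trace itself; a minimal sketch of that return-self pattern (illustrative only, not the SDK's source):

```python
class ChainableTrace:
    """Minimal illustration of the return-self chaining pattern."""
    def __init__(self, name):
        self.name = name
        self.record = {"states": {}}
    def input(self, **kwargs):
        self.record["input"] = kwargs
        return self  # returning self is what enables chaining
    def state(self, key, value):
        self.record["states"][key] = value
        return self
    def output(self, result):
        self.record["output"] = result
        return self

t = (
    ChainableTrace("operation")
    .input(question="What is AI?")
    .state("config", {"model": "gpt-4"})
    .output("AI stands for Artificial Intelligence")
)
print(t.record["output"])  # AI stands for Artificial Intelligence
```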

Feedback

# Binary feedback
artanis.feedback(trace.id, rating="positive")
artanis.feedback(trace.id, rating="negative")

# Numeric rating (0.0-1.0)
artanis.feedback(trace.id, rating=0.85)

# With comment
artanis.feedback(
    trace.id,
    rating="negative",
    comment="The answer was incorrect"
)

# With correction
artanis.feedback(
    trace.id,
    rating="negative",
    correction={"answer": "The correct answer is..."}
)
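Since ratings may arrive as either string labels or floats, downstream code sometimes wants them on one scale. A hypothetical helper (`normalize_rating` is not part of the SDK) might look like:

```python
def normalize_rating(rating):
    """Map 'positive'/'negative' labels or a 0.0-1.0 float to one scale."""
    if rating == "positive":
        return 1.0
    if rating == "negative":
        return 0.0
    score = float(rating)
    if not 0.0 <= score <= 1.0:
        raise ValueError("numeric ratings must be in [0.0, 1.0]")
    return score

print(normalize_rating("positive"))  # 1.0
print(normalize_rating(0.85))        # 0.85
```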

Complete Example: RAG Pipeline

from artanis import Artanis

artanis = Artanis()

def answer_question(question: str, user_id: str):
    # Create trace with metadata
    trace = artanis.trace(
        "rag-answer",
        metadata={"user_id": user_id}
    )

    # Capture document corpus state
    corpus = load_documents()
    trace.state("corpus", [doc.id for doc in corpus])

    # Retrieve relevant chunks
    chunks = retriever.search(question)
    trace.state("chunks", [
        {"id": c.id, "score": c.score}
        for c in chunks
    ])

    # Generate response
    prompt = build_prompt(question, chunks)
    trace.input(
        question=question,
        prompt=prompt,
        model="gpt-4"
    )

    response = llm.generate(prompt)
    trace.output(response)

    return response, trace.id

# Later, collect feedback
answer, trace_id = answer_question("What is AI?", "user-123")
print(answer)

# User provides feedback
artanis.feedback(trace_id, rating="positive")

Testing

Disable tracing in tests:

# Option 1: Environment variable
export ARTANIS_ENABLED=false

# Option 2: Explicit configuration
artanis = Artanis(enabled=False)

Performance

  • P50 overhead: < 0.05ms per operation
  • P99 overhead: < 0.5ms per operation
  • All network operations are non-blocking (fire-and-forget)
  • No retries or queueing, which keeps the SDK's memory usage bounded
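Percentile claims like these can be sanity-checked with a small harness; this sketch times a no-op callable rather than the real client, so the numbers only illustrate the measurement technique:

```python
import time
import statistics

def percentile_overhead(fn, runs=10_000):
    """Time fn per call and report P50/P99 latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    quantiles = statistics.quantiles(samples, n=100)
    return {"p50": quantiles[49], "p99": quantiles[98]}

stats = percentile_overhead(lambda: None)
print(stats)  # actual values depend on your machine
```

To measure the SDK itself, pass a closure that creates and completes a trace in place of the no-op lambda.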

Error Handling Philosophy

The SDK never raises exceptions. All errors are handled silently so that observability never breaks production:

  • Invalid API key → traces dropped, error logged (if debug)
  • Network failure → traces dropped silently
  • Payload too large → trace dropped, error logged

Use the on_error callback to monitor SDK errors:

def handle_error(error: Exception):
    logger.warning(f"Artanis error: {error}")

artanis = Artanis(on_error=handle_error)

Development

Setup

cd python
pip install -e ".[dev]"

Note: Package name is artanis-ai on PyPI, but import name is still artanis.

Run Tests

pytest
pytest --cov=artanis  # With coverage

Format Code

black artanis tests
ruff check artanis tests

Type Checking

mypy artanis

License

MIT License - see LICENSE file for details.
