Artanis Python SDK

Artanis SDK for AI application observability: understand failures, build evaluation sets, and act on user feedback.

Installation

pip install artanis-ai

Quick Start

from artanis import Artanis

# Initialize client
artanis = Artanis(api_key="sk_...")

# Create a trace
trace = artanis.trace("answer-question")
trace.input(question="What is AI?", model="gpt-4")
trace.output("AI stands for Artificial Intelligence")

# Record feedback
artanis.feedback(trace.id, rating="positive")

Configuration

API Key

Provide your API key either explicitly or via environment variable:

# Explicit
artanis = Artanis(api_key="sk_...")

# Environment variable (set in your shell):
#   export ARTANIS_API_KEY="sk_..."
artanis = Artanis()

Options

artanis = Artanis(
    api_key="sk_...",              # Required (or ARTANIS_API_KEY env var)
    base_url="https://app.artanis.ai",  # Optional: custom API endpoint
    enabled=True,                  # Optional: enable/disable tracing
    debug=False,                   # Optional: enable debug logging
    on_error=lambda e: print(e)    # Optional: error callback
)

Environment Variables

Variable           Default                  Description
ARTANIS_API_KEY    (required)               Your API key
ARTANIS_BASE_URL   https://app.artanis.ai   API endpoint
ARTANIS_ENABLED    true                     Enable/disable tracing
ARTANIS_DEBUG      false                    Enable debug logging
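
A sketch of how these variables might be resolved, mirroring the defaults in the table above. This is illustrative only: `resolve_config` is a hypothetical helper, and the SDK's actual parsing may differ.

```python
import os

# Illustrative resolution of the environment variables from the table.
# `resolve_config` is a hypothetical name, not part of the SDK.
def resolve_config(env=None):
    env = os.environ if env is None else env
    return {
        "api_key": env.get("ARTANIS_API_KEY"),  # required, no default
        "base_url": env.get("ARTANIS_BASE_URL", "https://app.artanis.ai"),
        "enabled": env.get("ARTANIS_ENABLED", "true").lower() == "true",
        "debug": env.get("ARTANIS_DEBUG", "false").lower() == "true",
    }
```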

Usage

Basic Tracing

trace = artanis.trace("operation-name")
trace.input(question="...", context="...")
# ... perform operation ...
trace.output(result)

With Metadata

trace = artanis.trace(
    "answer-question",
    metadata={
        "user_id": "user-123",
        "session_id": "session-456"
    }
)

Capturing State for Replay

trace = artanis.trace("rag-query")

# Capture document state
trace.state("documents", [{"id": "doc1", "score": 0.95}])

# Capture configuration
trace.state("config", {"model": "gpt-4", "temperature": 0.7})

# Record inputs and output
trace.input(query="...", prompt="...")
trace.output(response)

Error Handling

trace = artanis.trace("risky-operation")
trace.input(data=input_data)

try:
    result = process(input_data)
    trace.output(result)
except Exception as e:
    trace.error(str(e))
    raise

Context Manager

with artanis.trace("operation") as trace:
    trace.input(data=...)
    result = perform_operation()
    trace.output(result)
# Automatically sends trace on exit

Method Chaining

artanis.trace("operation")\
    .input(question="What is AI?")\
    .state("config", {"model": "gpt-4"})\
    .output("AI stands for Artificial Intelligence")

Feedback

# Binary feedback
artanis.feedback(trace.id, rating="positive")
artanis.feedback(trace.id, rating="negative")

# Numeric rating (0.0-1.0)
artanis.feedback(trace.id, rating=0.85)

# With comment
artanis.feedback(
    trace.id,
    rating="negative",
    comment="The answer was incorrect"
)

# With correction
artanis.feedback(
    trace.id,
    rating="negative",
    correction={"answer": "The correct answer is..."}
)
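
Feedback calls map naturally onto UI events. A small hypothetical helper for a thumbs-up/down widget (`record_thumbs` and its signature are illustrative, not part of the SDK; `client` is an initialized Artanis instance):

```python
# Hypothetical glue code: translate a thumbs-up/down UI event into a
# feedback call with the rating strings shown above.
def record_thumbs(client, trace_id, thumbs_up, comment=None):
    client.feedback(
        trace_id,
        rating="positive" if thumbs_up else "negative",
        comment=comment,
    )
```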

Complete Example: RAG Pipeline

from artanis import Artanis

artanis = Artanis()

def answer_question(question: str, user_id: str):
    # Create trace with metadata
    trace = artanis.trace(
        "rag-answer",
        metadata={"user_id": user_id}
    )

    # Capture document corpus state
    corpus = load_documents()
    trace.state("corpus", [doc.id for doc in corpus])

    # Retrieve relevant chunks
    chunks = retriever.search(question)
    trace.state("chunks", [
        {"id": c.id, "score": c.score}
        for c in chunks
    ])

    # Generate response
    prompt = build_prompt(question, chunks)
    trace.input(
        question=question,
        prompt=prompt,
        model="gpt-4"
    )

    response = llm.generate(prompt)
    trace.output(response)

    return response, trace.id

# Later, collect feedback
answer, trace_id = answer_question("What is AI?", "user-123")
print(answer)

# User provides feedback
artanis.feedback(trace_id, rating="positive")

Testing

Disable tracing in tests:

# Option 1: Environment variable
export ARTANIS_ENABLED=false

# Option 2: Explicit configuration
artanis = Artanis(enabled=False)
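
In a test suite, the environment variable can also be patched per-test with the standard library. This assumes ARTANIS_ENABLED is read when the client is constructed; `traced_off` is an illustrative helper name:

```python
import os
from unittest.mock import patch

# Patch the environment for the duration of a block so any client
# constructed inside it sees tracing disabled.
def traced_off():
    return patch.dict(os.environ, {"ARTANIS_ENABLED": "false"})
```

Usage: `with traced_off(): artanis = Artanis()`. The original environment is restored when the block exits.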

Performance

  • P50 overhead: < 0.05 ms per operation
  • P99 overhead: < 0.5 ms per operation
  • All network operations are non-blocking (fire-and-forget)
  • No retries or queueing, so memory usage stays bounded
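
The fire-and-forget behavior can be pictured with the sketch below. This is an assumed design to illustrate the bullets above, not the SDK's actual implementation; `send_async` and `transport` are hypothetical names.

```python
import json
import threading

# Sketch: hand each payload to a daemon thread and drop it on any
# failure, with no retry and no queue, so memory use stays bounded.
def send_async(payload, transport):
    def _send():
        try:
            transport(json.dumps(payload))
        except Exception:
            pass  # drop silently; tracing must never break the app
    t = threading.Thread(target=_send, daemon=True)
    t.start()
    return t  # returned only so callers can join in tests
```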

Error Handling Philosophy

The SDK never raises exceptions. All errors are handled silently so that observability never breaks production:

  • Invalid API key → traces dropped, error logged (if debug)
  • Network failure → traces dropped silently
  • Payload too large → trace dropped, error logged

Use the on_error callback to monitor SDK errors:

def handle_error(error: Exception):
    logger.warning(f"Artanis error: {error}")

artanis = Artanis(on_error=handle_error)

Development

Setup

cd python
pip install -e ".[dev]"

Note: the package is published as artanis-ai on PyPI, but the import name is artanis.

Run Tests

pytest
pytest --cov=artanis  # With coverage

Format Code

black artanis tests
ruff check artanis tests

Type Checking

mypy artanis

License

MIT License - see LICENSE file for details.
