

Veil Python SDK

One-liner observability for AI agents.

Installation

pip install veil-sdk

Quickstart

import veil

veil.init(api_key="vl_xxx")

That's it. Add this before your agent runs. Veil automatically instruments OpenAI, Anthropic, LangChain, LlamaIndex, and other popular LLM libraries, and sends all telemetry to your Veil dashboard.
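For example, with the openai package, no Veil-specific code is needed beyond init(). A minimal sketch (the model name and prompt are illustrative, and it assumes OPENAI_API_KEY is set in your environment):

import veil
from openai import OpenAI

veil.init(api_key="vl_xxx")

# Nothing Veil-specific below: the call is instrumented automatically.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hello"}],
)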

Serverless functions (Lambda, Cloud Functions, Vercel, etc.)

In long-running processes (servers, scripts, notebooks), Veil automatically sends all telemetry when the process exits. No extra code needed.

In serverless functions, the process doesn't exit cleanly — it gets frozen or killed by the platform the moment your handler returns. Any telemetry still in the buffer is silently dropped, and your session never closes in the dashboard.

Call veil.flush() at the end of your handler to force delivery before the freeze:

import veil

veil.init(api_key="vl_xxx", agent_name="My Lambda")

def handler(event, context):
    result = ...  # your agent logic

    veil.flush()  # send everything before Lambda freezes
    return result

flush() does two things in order:

  1. Forces the OpenTelemetry exporter to drain any buffered spans (LLM call data)
  2. Sends a session.end event so Veil closes and classifies the session

It is safe to call multiple times — only the first call does anything.
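Because repeated calls are no-ops, one robust pattern is to call flush() in a finally block, so telemetry is delivered even when your handler raises. A minimal sketch (run_agent is a hypothetical stand-in for your agent logic):

import veil

veil.init(api_key="vl_xxx", agent_name="My Lambda")

def handler(event, context):
    try:
        # run_agent is hypothetical; substitute your agent entry point
        return run_agent(event)
    finally:
        veil.flush()  # runs on success and on error, before the freeze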

Environments where you must call flush():

Platform                     Why
AWS Lambda                   Handler returns → process frozen immediately
Google Cloud Functions       Same — process suspended after return
Vercel / Netlify Functions   Execution context torn down after response
Azure Functions              Consumption plan freezes after invocation

Environments where flush() is optional (but harmless):

  • Long-running servers (FastAPI, Flask, Django)
  • CLI scripts
  • Jupyter notebooks
  • Docker containers

How it works

  • Telemetry is collected and sent asynchronously, so it never blocks your agent's execution.
  • Sessions are automatically tracked from the first LLM call through to process exit.
  • Failures are detected and classified server-side — no configuration required.
  • Every failure triggers an alert in your Veil dashboard and via email.

Supported Libraries

Veil auto-instruments all major LLM frameworks including:

  • OpenAI
  • Anthropic
  • LangChain
  • LlamaIndex
  • Cohere
  • Mistral
  • Google Gemini
  • AWS Bedrock
  • And more

Requirements

  • Python >= 3.9

Your API Key

Find your API key in the Veil Dashboard under Settings. Keys follow the format vl_ followed by 64 hex characters.
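If you want to sanity-check a key before calling init(), the documented format maps to a simple pattern. This check is illustrative only and is not part of the SDK:

import re

# "vl_" followed by 64 hex characters, per the documented key format.
VEIL_KEY_RE = re.compile(r"^vl_[0-9a-fA-F]{64}$")

def looks_like_veil_key(key: str) -> bool:
    return VEIL_KEY_RE.fullmatch(key) is not None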

Self-hosting

If you are running Veil on your own infrastructure, pass the endpoint parameter:

veil.init(api_key="vl_xxx", endpoint="https://your-veil-instance.com")

Download files

Download the file for your platform.

Source distribution: veil_sdk-0.2.0.tar.gz (4.6 kB)
Built distribution: veil_sdk-0.2.0-py3-none-any.whl (5.3 kB)

File details

Details for the file veil_sdk-0.2.0.tar.gz.

File metadata

  • Size: 4.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Algorithm     Hash digest
SHA256        d1377d39e47a7465b3155264aaa4a6e34d91b1d7e5e397a91c2353e4d94361de
MD5           446973abf48cafb77def52fc0849808f
BLAKE2b-256   69eb6f5caf8a428dfaddfc201e45a44d91c2c475c6c1c72c09813e8c9a5fdbb1
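To verify a downloaded archive against the published SHA256 digest above, for example:

import hashlib

# Compute the SHA256 of the downloaded archive and compare it to the
# digest published above.
with open("veil_sdk-0.2.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

expected = "d1377d39e47a7465b3155264aaa4a6e34d91b1d7e5e397a91c2353e4d94361de"
print("OK" if digest == expected else "hash mismatch")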

File details

Details for the file veil_sdk-0.2.0-py3-none-any.whl.

File metadata

  • Size: 5.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

Algorithm     Hash digest
SHA256        a559b31558036376d5e08ffea5870328712001539049e35728a1625e6f74c81a
MD5           842eb683a8e38fbbbcfea0d429195aa5
BLAKE2b-256   37dcfc86660aaf8b224c13fba4c03e44fc509083160fd4fbb7d47b5bdaf64ecf
