ATI LangChain integration: emit agent-aware OpenTelemetry spans via LangChain callbacks


ATI Integration for LangChain

This package provides OpenTelemetry instrumentation for LangChain agents using Iocane ATI. It captures traces for LLM calls, tool invocations, and agent execution steps, letting you visualize your agent's behavior in the Iocane dashboard.

Installation

You can install the package and its required dependencies from PyPI (or your local source):

# Install the integration and OpenTelemetry components
pip install ati-integrations-langchain opentelemetry-sdk opentelemetry-exporter-otlp langgraph langchain-openai

Local Development

If you are developing this package locally:

pip install -e .

Configuration

The integration relies on standard OpenTelemetry environment variables to export traces to Iocane.

1. Endpoint and Authentication

Set the following environment variables. You can find your Environment ID and API Key in the Iocane Dashboard under Settings > Environment.

# The Iocane OTLP Endpoint
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.iocane.ai"

# Authentication Headers
# Replace YOUR_KEY and YOUR_ENV_ID with your actual values.
# Note: Format is comma-separated key=value pairs.
export OTEL_EXPORTER_OTLP_HEADERS="x-iocane-key=YOUR_KEY,x-ati-env=YOUR_ENV_ID"
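
If you prefer to configure the exporter in code rather than through environment variables, the OTLP HTTP exporter accepts the same values directly. A minimal sketch, assuming the Iocane endpoint and header names shown above (note that an explicitly passed endpoint is used as-is, so it must include the /v1/traces path):

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Equivalent to OTEL_EXPORTER_OTLP_ENDPOINT + OTEL_EXPORTER_OTLP_HEADERS
exporter = OTLPSpanExporter(
    endpoint="https://api.iocane.ai/v1/traces",
    headers={"x-iocane-key": "YOUR_KEY", "x-ati-env": "YOUR_ENV_ID"},
)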

2. Service Name

Identify your agent service:

export OTEL_SERVICE_NAME="my-langchain-agent"
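
The service name can also be set in code by attaching an OpenTelemetry Resource to the tracer provider instead of using the environment variable. A minimal sketch:

from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

# Equivalent to OTEL_SERVICE_NAME; service.name is attached to every span
provider = TracerProvider(
    resource=Resource.create({"service.name": "my-langchain-agent"})
)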

3. OpenAI Key

If using OpenAI models:

export OPENAI_API_KEY="sk-..."

Usage

To use this integration, you must:

  1. Configure the OpenTelemetry SDK globally to export traces.
  2. Instrument your LangChain application using LangChainInstrumentor.

Here is a complete example:

import os
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from ati_langchain import LangChainInstrumentor

# --- 1. Configure OpenTelemetry (OTLP) ---
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Initialize Tracer Provider
provider = TracerProvider()

# Configure OTLP Exporter
# Note: with no endpoint argument, the exporter appends /v1/traces to
# OTEL_EXPORTER_OTLP_ENDPOINT; an explicitly passed endpoint is used as-is.
endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
if endpoint and endpoint.endswith("/v1/traces"):
    exporter = OTLPSpanExporter(endpoint=endpoint)
else:
    # Let the exporter handle default pathing
    exporter = OTLPSpanExporter()

# Register a batch processor that exports spans via OTLP
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# --- 2. Instrument LangChain ---
# This automatically captures traces for all subsequent LangChain & LangGraph execution.
LangChainInstrumentor().instrument(agent_id="my-agent-v1")

# --- 3. Define your Agent ---
@tool
def magic_tool(query: str) -> str:
    """A sample tool."""
    return f"Magic result for: {query}"

llm = ChatOpenAI(model="gpt-3.5-turbo")
tools = [magic_tool]
agent = create_react_agent(llm, tools)

# --- 4. Run the Agent ---
print("Running agent...")
try:
    response = agent.invoke({"messages": [("user", "What is the magic answer?")]})
    print("Response:", response["messages"][-1].content)
finally:
    # --- 5. Cleanup ---
    # Ensure all traces are sent before exiting
    provider.shutdown()

Troubleshooting

No Traces in Dashboard

  • Check OTEL_EXPORTER_OTLP_HEADERS: Ensure x-ati-env matches the Environment ID selected in your dashboard. Mismatches are common.
  • Flush Traces: Ensure you call provider.shutdown() at the end of your script. BatchSpanProcessor exports spans asynchronously; if the script exits too quickly, pending traces are lost. A defensive pattern is sketched below.
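
If your script has several exit paths, registering the shutdown with atexit is a simple safeguard. A minimal sketch, assuming the provider from the usage example above:

import atexit

# Flush and shut down the provider even if the script raises
atexit.register(provider.shutdown)

# Alternatively, flush pending spans at a checkpoint without shutting down
provider.force_flush()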

404 Error (Endpoint)

  • If you see 404 errors, your OTEL_EXPORTER_OTLP_ENDPOINT might be incorrect.
    • Use https://api.iocane.ai (SDK appends /v1/traces automatically).
    • OR use https://api.iocane.ai/v1/traces and handle it manually in your code (as shown in the example and in the sketch below).
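
For reference, a minimal sketch of both variants, assuming the Iocane endpoint above:

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Variant A: base URL comes from OTEL_EXPORTER_OTLP_ENDPOINT;
# the exporter appends /v1/traces automatically.
exporter = OTLPSpanExporter()

# Variant B: full URL passed explicitly; it is used as-is,
# so the /v1/traces path must be included.
exporter = OTLPSpanExporter(endpoint="https://api.iocane.ai/v1/traces")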

Collector Decode Error

  • If the Collector logs show utf-8 codec can't decode byte, the Collector is likely treating the Protobuf payload as text. The Python OTLP HTTP exporter sends Protobuf (application/x-protobuf), so ensure your Collector version supports Protobuf ingestion on its OTLP HTTP receiver.


Download files

Download the file for your platform.

Source Distribution

ati_integrations_langchain-0.1.1.tar.gz (8.0 kB)

Built Distribution

ati_integrations_langchain-0.1.1-py3-none-any.whl (5.4 kB)

File details

Details for the file ati_integrations_langchain-0.1.1.tar.gz.

File hashes

Algorithm    Hash digest
SHA256       7073f36b4e41f9b56e816fccc79d12e17fda09088692caf3b0bc2d1833477d3d
MD5          325f685ee459c0583e0fb94f7d9513b0
BLAKE2b-256  e7d397a0307967654c50da19e5f7c49669222325f22ef0fcc60ffa37d7badc8a

File details

Details for the file ati_integrations_langchain-0.1.1-py3-none-any.whl.

File hashes

Algorithm    Hash digest
SHA256       57c2969fa288c120c9d441bd8d78517294663a1b7469e34519d1c5c9d96bb0bc
MD5          2e4d7115ca5f2ef851e70e827c1a1bc7
BLAKE2b-256  b68caabbe6ffb826cbe7c2e026d940e6c5f784ba21dd7f4ca5830198e1d2b1e7
