Fiddler SDK for instrumenting LangChain V1 agents with OpenTelemetry

fiddler-langchain

Fiddler observability for LangChain V1 agents built with langchain.agents.create_agent.

Installation

pip install fiddler-langchain

Quick Start

Create a FiddlerLangChainInstrumentor with your FiddlerClient and call instrument() once. Every agent created with langchain.agents.create_agent() is then traced automatically. Use the optional name argument to label agents in traces; if omitted, the agent name is left empty.

import langchain.agents
from fiddler_otel import FiddlerClient
from fiddler_langchain import FiddlerLangChainInstrumentor

client = FiddlerClient(
    api_key="YOUR_API_KEY",
    application_id="YOUR_APPLICATION_ID",
    url="https://your-instance.fiddler.ai",
)

instrumentor = FiddlerLangChainInstrumentor(client=client)
instrumentor.instrument()

agent = langchain.agents.create_agent(
    model="openai:gpt-4o-mini",
    tools=[...],
    name="my_agent",
)

result = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})

Alternative (manual middleware): You can instead pass middleware=[FiddlerAgentMiddleware(client=client, agent_name="my_agent")] to each create_agent() call and skip the instrumentor. Use the instrumentor when you want a single instrument() to trace all agents.
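
A minimal sketch of the manual approach (it mirrors the Quick Start above, but attaches FiddlerAgentMiddleware to one agent instead of instrumenting globally):

import langchain.agents
from fiddler_otel import FiddlerClient
from fiddler_langchain import FiddlerAgentMiddleware

client = FiddlerClient(api_key="...", application_id="...", url="...")

# No instrumentor call - the middleware is attached to this agent only.
agent = langchain.agents.create_agent(
    model="openai:gpt-4o-mini",
    tools=[...],
    middleware=[FiddlerAgentMiddleware(client=client, agent_name="my_agent")],
)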

Trace Hierarchy

Each invocation produces a clean, flat hierarchy with no noisy Chain wrappers:

[Span] my_agent           (Agent root - TYPE=agent)
  ├── [Span] gpt-4o-mini  (LLM call - TYPE=llm)
  ├── [Span] hotel_search (Tool call - TYPE=tool)
  └── [Span] gpt-4o-mini  (LLM call - TYPE=llm)

Note on span_type=agent for the root span: The root span uses span_type=agent to accurately represent that it is an agent invocation. It carries agent_name, agent_id, and conversation_id but has empty LLM and tool fields (model_name, llm_output, tool_name, etc.) because it is a container span for the full agent lifecycle - not an LLM or tool call itself. The legacy chain span type is deprecated and no longer used for agent root spans.

Multi-turn Conversations

Call set_conversation_id with a stable ID before each invocation that belongs to the same conversation; every span from those invocations then carries that conversation_id.

from fiddler_langchain import set_conversation_id
import uuid

set_conversation_id(str(uuid.uuid4()))
agent.invoke({"messages": [...]})

LLM Context

Attach contextual metadata to LLM spans by calling set_llm_context before the agent runs. The instrumentation reads this value from the model's metadata at invocation time.

import langchain.agents
from langchain_openai import ChatOpenAI
from fiddler_langchain import FiddlerLangChainInstrumentor, set_llm_context
from fiddler_otel import FiddlerClient

client = FiddlerClient(api_key="...", application_id="...", url="...")
FiddlerLangChainInstrumentor(client=client).instrument()

model = ChatOpenAI(model="gpt-4o-mini")
set_llm_context(model, "User preference: concise answers")

agent = langchain.agents.create_agent(model=model, tools=[...], name="my_agent")

Session and Span Attributes

Add custom business metadata to your traces without modifying agent code.

Session attributes

add_session_attributes(key, value) injects key-value pairs onto every span in the current invocation. Call it before invoking the agent. Attributes appear as fiddler.session.user.{key} on all spans and are propagated from parent to child spans.

from fiddler_langchain import add_session_attributes, set_conversation_id
import uuid

set_conversation_id(str(uuid.uuid4()))
add_session_attributes("user_id", "alice@example.com")
add_session_attributes("environment", "production")
add_session_attributes("cost_center", "travel_desk")

agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})

Span attributes

add_span_attributes(node, **kwargs) attaches attributes to a specific component (model, tool, or retriever). Only spans created for that component carry these attributes. They appear as fiddler.span.user.{key}.

import langchain.agents
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from fiddler_langchain import add_span_attributes

model = ChatOpenAI(model="gpt-4o-mini")
add_span_attributes(model, department="AI_Engineering", agent_role="flight_assistant")

@tool
def book_flight(from_airport: str, to_airport: str) -> str:
    """Book a flight between two airports."""
    return f"Booked {from_airport} -> {to_airport}"

add_span_attributes(book_flight, department="third_party_flight", reward_points="10.2")

agent = langchain.agents.create_agent(model=model, tools=[book_flight], name="my_agent")

Helper | Scope | Span attribute format
add_session_attributes(key, value) | All spans in the invocation | fiddler.session.user.{key}
add_span_attributes(model_or_tool, **kwargs) | Spans for that model/tool/retriever | fiddler.span.user.{key}

Retriever Instrumentation

The LangChain V1 middleware API does not expose a dedicated retriever hook. Following the same convention used in fiddler-langgraph, retrievers are treated as tools.

Wrap your retriever with @tool (or create_retriever_tool) and pass it to create_agent. The instrumentation's tool hook captures the retriever call automatically as a TYPE=tool span - with the query as tool_input and the retrieved documents as tool_output.

import langchain.agents
from langchain_core.tools import tool
from fiddler_langchain import FiddlerLangChainInstrumentor
from fiddler_otel import FiddlerClient

client = FiddlerClient(api_key="...", application_id="...", url="...")
FiddlerLangChainInstrumentor(client=client).instrument()

# vector_store is an existing LangChain vector store (e.g. FAISS, Chroma)
retriever = vector_store.as_retriever()

@tool
def search_docs(query: str) -> str:
    """Search company documents for relevant information."""
    return str(retriever.invoke(query))

agent = langchain.agents.create_agent(
    model="openai:gpt-4o-mini",
    tools=[search_docs, ...],
    name="rag_agent",
)
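
Equivalently, a minimal sketch using create_retriever_tool instead of a hand-written @tool wrapper (assuming your LangChain version exports it from langchain_core.tools):

from langchain_core.tools import create_retriever_tool

# Named "search_docs" so the trace below applies to either variant.
search_docs = create_retriever_tool(
    retriever,
    name="search_docs",
    description="Search company documents for relevant information.",
)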

The resulting trace looks like:

[Span] rag_agent           (Agent root - TYPE=agent)
  ├── [Span] gpt-4o-mini   (LLM call - TYPE=llm)
  ├── [Span] search_docs   (Retriever as Tool - TYPE=tool)
  └── [Span] gpt-4o-mini   (LLM call - TYPE=llm)

Multi-Agent Setup

With the instrumentor, a single instrument() call patches create_agent so every agent is traced. Pass name='...' to each create_agent() to label agents in traces. When a supervisor delegates work to sub-agents via tools, the entire flow appears as a single trace for that user request: the supervisor is the root span, delegation tools are tool spans under it, and each sub-agent root span is a child of the corresponding delegation tool span. All spans share the same conversation_id.

import uuid
import langchain.agents
from langchain_openai import ChatOpenAI
from fiddler_langchain import FiddlerLangChainInstrumentor, set_conversation_id
from fiddler_otel import FiddlerClient

client = FiddlerClient(api_key="...", application_id="...", url="...")
instrumentor = FiddlerLangChainInstrumentor(client=client)
instrumentor.instrument()

# Sub-agents - invoked via supervisor tools
# (book_flight, search_hotel, book_hotel are @tool functions defined elsewhere)
flight_agent = langchain.agents.create_agent(
    model=ChatOpenAI(), tools=[book_flight], name="flight_assistant"
)
hotel_agent = langchain.agents.create_agent(
    model=ChatOpenAI(), tools=[search_hotel, book_hotel], name="hotel_assistant"
)
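
# Delegation tools - a hedged sketch of an assumed shape (not part of the SDK):
# each @tool wrapper invokes a sub-agent, so the sub-agent's root span nests
# under the corresponding delegation tool span.
from langchain_core.tools import tool

@tool
def delegate_to_flight_assistant(request: str) -> str:
    """Delegate flight booking to the flight assistant."""
    result = flight_agent.invoke({"messages": [{"role": "user", "content": request}]})
    return str(result["messages"][-1].content)

@tool
def delegate_to_hotel_assistant(request: str) -> str:
    """Delegate hotel search and booking to the hotel assistant."""
    result = hotel_agent.invoke({"messages": [{"role": "user", "content": request}]})
    return str(result["messages"][-1].content)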

# Supervisor - delegates to sub-agents via @tool wrappers
supervisor = langchain.agents.create_agent(
    model=ChatOpenAI(),
    tools=[delegate_to_flight_assistant, delegate_to_hotel_assistant],
    name="supervisor",
)

# Link the whole flow under one conversation + one trace
set_conversation_id(str(uuid.uuid4()))
supervisor.invoke({"messages": [{"role": "user", "content": "Book a flight and a hotel."}]})

The resulting trace looks like:

[Span] supervisor                          (root - TYPE=agent)
  ├── [Span] gpt-4o-mini                   (LLM  - TYPE=llm)
  ├── [Span] delegate_to_flight_assistant  (Tool - TYPE=tool)
  │     └── [Span] flight_assistant        (Agent - TYPE=agent)
  │           ├── [Span] gpt-4o-mini       (LLM  - TYPE=llm)
  │           └── [Span] book_flight       (Tool - TYPE=tool)
  └── [Span] delegate_to_hotel_assistant   (Tool - TYPE=tool)
        └── [Span] hotel_assistant         (Agent - TYPE=agent)
              ├── [Span] gpt-4o-mini       (LLM  - TYPE=llm)
              ├── [Span] search_hotel      (Tool - TYPE=tool, retriever-as-tool)
              └── [Span] book_hotel        (Tool - TYPE=tool)

Local JSONL Capture

To capture all spans to a local JSONL file without sending to Fiddler (useful for debugging):

client = FiddlerClient(
    api_key="...",
    application_id="...",
    url="...",
    jsonl_capture_enabled=True,
    jsonl_file_path="trace_data.jsonl",
)

Or via environment variables:

FIDDLER_JSONL_ENABLED=true \
FIDDLER_JSONL_FILE=trace_data.jsonl \
python my_agent.py

Each line in the output file is a JSON object containing all span attributes: trace_id, span_id, parent_span_id, span_type, agent_name, conversation_id, model_name, model_provider, llm_input_system, llm_input_user, llm_output, llm_context, llm_token_count_input/output/total, gen_ai_input_messages, gen_ai_output_messages, tool_name, tool_input, tool_output, tool_definitions.
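
For example, a minimal sketch for inspecting the captured file locally (field names as listed above):

import json

with open("trace_data.jsonl") as f:
    for line in f:
        span = json.loads(line)
        print(span["span_type"], span.get("agent_name"), span.get("tool_name"), span.get("model_name"))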

Async Agents

The instrumentation fully supports async agents via awrap_model_call and awrap_tool_call. Use agent.ainvoke() instead of agent.invoke() - no additional configuration is needed:

import asyncio
import langchain.agents
from langchain_openai import ChatOpenAI
from fiddler_langchain import FiddlerLangChainInstrumentor
from fiddler_otel import FiddlerClient

client = FiddlerClient(api_key="...", application_id="...", url="...")
FiddlerLangChainInstrumentor(client=client).instrument()

agent = langchain.agents.create_agent(
    model=ChatOpenAI(model="gpt-4o-mini"),
    tools=[...],
    name="my_agent",
)

async def main():
    result = await agent.ainvoke({"messages": [{"role": "user", "content": "Hello!"}]})
    print(result)

asyncio.run(main())

The instrumentation automatically uses the async lifecycle hooks (awrap_model_call, awrap_tool_call) when the agent is invoked asynchronously, producing the same span hierarchy as the sync path.

Error Handling

If an LLM call or tool call raises an exception, the instrumentation catches it, marks the corresponding span with StatusCode.ERROR, re-raises the exception so normal error handling in your application is unaffected, and still cleanly closes the root agent span.

try:
    result = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
except Exception:
    # The failing LLM or tool span is already marked ERROR in Fiddler
    # The root agent span is also closed - no dangling spans
    raise

This means:

  • Partial traces are never lost — all spans up to the point of failure are recorded
  • The failing span carries status_code=ERROR and the exception message
  • The root agent span is always closed, regardless of whether the invocation succeeded or failed

Relationship to fiddler-langgraph

Package | Framework | Instrumentation
fiddler-langgraph | LangGraph (StateGraph.compile()) | Callback handler
fiddler-langchain | LangChain V1 (create_agent) | FiddlerLangChainInstrumentor (auto) or FiddlerAgentMiddleware (manual)
