Fiddler SDK for instrumenting LangChain V1 agents with OpenTelemetry
Project description
fiddler-langchain
Fiddler observability for LangChain V1 agents built with langchain.agents.create_agent.
Installation
pip install fiddler-langchain
Quick Start
Call FiddlerLangChainInstrumentor.instrument() once after creating your FiddlerClient. Every agent created with langchain.agents.create_agent() is then traced automatically. Use the optional name argument to label agents in traces; if omitted, the agent name is left empty.
import langchain.agents
from fiddler_otel import FiddlerClient
from fiddler_langchain import FiddlerLangChainInstrumentor
client = FiddlerClient(
api_key="YOUR_API_KEY",
application_id="YOUR_APPLICATION_ID",
url="https://your-instance.fiddler.ai",
)
instrumentor = FiddlerLangChainInstrumentor(client=client)
instrumentor.instrument()
agent = langchain.agents.create_agent(
model="openai:gpt-4o-mini",
tools=[...],
name="my_agent",
)
result = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
Alternative (manual middleware): You can instead pass middleware=[FiddlerAgentMiddleware(client=client, agent_name="my_agent")] to each create_agent() call and skip the instrumentor. Use the instrumentor when you want a single instrument() to trace all agents.
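A minimal sketch of the manual approach, reusing the client from the Quick Start above (the tool list is a placeholder):
from fiddler_langchain import FiddlerAgentMiddleware
agent = langchain.agents.create_agent(
    model="openai:gpt-4o-mini",
    tools=[...],
    middleware=[FiddlerAgentMiddleware(client=client, agent_name="my_agent")],
)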
Trace Hierarchy
Each invocation produces a clean, flat hierarchy with no noisy Chain wrappers:
[Span] my_agent (Agent root - TYPE=agent)
├── [Span] gpt-4o-mini (LLM call - TYPE=llm)
├── [Span] hotel_search (Tool call - TYPE=tool)
└── [Span] gpt-4o-mini (LLM call - TYPE=llm)
Note on span_type=agent for the root span: The root span uses span_type=agent to accurately represent that it is an agent invocation. It carries agent_name, agent_id, and conversation_id but has empty LLM and tool fields (model_name, llm_output, tool_name, etc.) because it is a container span for the full agent lifecycle, not an LLM or tool call itself. The legacy chain span type is deprecated and no longer used for agent root spans.
Multi-turn Conversations
To link multiple agent invocations into a single conversation, set a conversation ID before invoking the agent; spans emitted afterwards carry that conversation_id.
from fiddler_langchain import set_conversation_id
import uuid
set_conversation_id(str(uuid.uuid4()))
agent.invoke({"messages": [...]})
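A short sketch of a two-turn exchange, assuming the agent from the Quick Start (the user messages are illustrative):
import uuid
from fiddler_langchain import set_conversation_id

conversation_id = str(uuid.uuid4())

# Turn 1
set_conversation_id(conversation_id)
agent.invoke({"messages": [{"role": "user", "content": "Find me a hotel in Paris."}]})

# Turn 2 - reusing the same ID links both invocations under one conversation_id
set_conversation_id(conversation_id)
agent.invoke({"messages": [{"role": "user", "content": "Book the first result."}]})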
LLM Context
Attach contextual metadata to LLM spans by calling set_llm_context before the agent runs.
The instrumentation reads this value from the model's metadata at invocation time.
import langchain.agents
from langchain_openai import ChatOpenAI
from fiddler_langchain import FiddlerLangChainInstrumentor, set_llm_context
from fiddler_otel import FiddlerClient
client = FiddlerClient(api_key="...", application_id="...", url="...")
FiddlerLangChainInstrumentor(client=client).instrument()
model = ChatOpenAI(model="gpt-4o-mini")
set_llm_context(model, "User preference: concise answers")
agent = langchain.agents.create_agent(model=model, tools=[...], name="my_agent")
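The snippet stops at agent creation; invoking the agent afterwards (message content purely illustrative) lets the instrumentation read the context from the model's metadata:
result = agent.invoke({"messages": [{"role": "user", "content": "Summarize the onboarding docs."}]})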
Retriever Instrumentation
The LangChain V1 middleware API does not expose a dedicated retriever hook. Following the
same convention used in fiddler-langgraph, retrievers are treated as tools.
Wrap your retriever with @tool (or create_retriever_tool) and pass it to create_agent.
The instrumentation's tool hook captures the retriever call automatically as a
TYPE=tool span - with the query as tool_input and the retrieved documents as tool_output.
import langchain.agents
from langchain_core.tools import tool
from fiddler_langchain import FiddlerLangChainInstrumentor
from fiddler_otel import FiddlerClient
client = FiddlerClient(api_key="...", application_id="...", url="...")
FiddlerLangChainInstrumentor(client=client).instrument()
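# `vector_store` is assumed to be a pre-built LangChain vector store (e.g. FAISS or Chroma)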
retriever = vector_store.as_retriever()
@tool
def search_docs(query: str) -> str:
    """Search company documents for relevant information."""
    return str(retriever.invoke(query))
agent = langchain.agents.create_agent(
model="openai:gpt-4o-mini",
tools=[search_docs, ...],
name="rag_agent",
)
The resulting trace looks like:
[Span] rag_agent (Agent root - TYPE=agent)
├── [Span] gpt-4o-mini (LLM call - TYPE=llm)
├── [Span] search_docs (Retriever as Tool - TYPE=tool)
└── [Span] gpt-4o-mini (LLM call - TYPE=llm)
Multi-Agent Setup
With the instrumentor, a single instrument() call patches create_agent so every agent is traced. Pass name='...' to each create_agent() to label agents in traces. When a supervisor delegates work to sub-agents via tools, the entire flow now appears as a single trace for that user request: the supervisor is the root span, delegation tools are tool spans under it, and each sub-agent root span is a child of the corresponding delegation tool span. All spans share the same conversation_id.
import uuid
import langchain.agents
from langchain_openai import ChatOpenAI
from fiddler_langchain import FiddlerLangChainInstrumentor, set_conversation_id
from fiddler_otel import FiddlerClient
client = FiddlerClient(api_key="...", application_id="...", url="...")
instrumentor = FiddlerLangChainInstrumentor(client=client)
instrumentor.instrument()
# Sub-agents - invoked via supervisor tools
flight_agent = langchain.agents.create_agent(
model=ChatOpenAI(), tools=[book_flight], name="flight_assistant"
)
hotel_agent = langchain.agents.create_agent(
model=ChatOpenAI(), tools=[search_hotel, book_hotel], name="hotel_assistant"
)
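# The delegation tools below are not defined in this example; a minimal, assumed
# sketch wraps each sub-agent's invoke() in a @tool so the supervisor can call it
# like any other tool (book_flight, search_hotel, book_hotel are likewise placeholders).
from langchain_core.tools import tool

@tool
def delegate_to_flight_assistant(request: str) -> str:
    """Delegate flight-related requests to the flight assistant."""
    result = flight_agent.invoke({"messages": [{"role": "user", "content": request}]})
    return result["messages"][-1].content

@tool
def delegate_to_hotel_assistant(request: str) -> str:
    """Delegate hotel-related requests to the hotel assistant."""
    result = hotel_agent.invoke({"messages": [{"role": "user", "content": request}]})
    return result["messages"][-1].content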
# Supervisor - delegates to sub-agents via @tool wrappers
supervisor = langchain.agents.create_agent(
model=ChatOpenAI(),
tools=[delegate_to_flight_assistant, delegate_to_hotel_assistant],
name="supervisor",
)
# Link the whole flow under one conversation + one trace
set_conversation_id(str(uuid.uuid4()))
supervisor.invoke({"messages": [{"role": "user", "content": "Book a flight and a hotel."}]})
The resulting trace looks like:
[Span] supervisor (root - TYPE=agent)
├── [Span] gpt-4o-mini (LLM - TYPE=llm)
├── [Span] delegate_to_flight_assistant (Tool - TYPE=tool)
│ └── [Span] flight_assistant (Agent - TYPE=agent)
│ ├── [Span] gpt-4o-mini (LLM - TYPE=llm)
│ └── [Span] book_flight (Tool - TYPE=tool)
└── [Span] delegate_to_hotel_assistant (Tool - TYPE=tool)
└── [Span] hotel_assistant (Agent - TYPE=agent)
├── [Span] gpt-4o-mini (LLM - TYPE=llm)
├── [Span] search_hotel (Tool - TYPE=tool, retriever-as-tool)
└── [Span] book_hotel (Tool - TYPE=tool)
Local JSONL Capture
To capture all spans to a local JSONL file without sending to Fiddler (useful for debugging):
client = FiddlerClient(
api_key="...",
application_id="...",
url="...",
jsonl_capture_enabled=True,
jsonl_file_path="trace_data.jsonl",
)
Or via environment variables:
FIDDLER_JSONL_ENABLED=true \
FIDDLER_JSONL_FILE=trace_data.jsonl \
python my_agent.py
Each line in the output file is a JSON object containing all span attributes:
trace_id, span_id, parent_span_id, span_type, agent_name, conversation_id,
model_name, model_provider, llm_input_system, llm_input_user, llm_output,
llm_context, llm_token_count_input/output/total, gen_ai_input_messages,
gen_ai_output_messages, tool_name, tool_input, tool_output, tool_definitions.
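For example, a quick local-inspection sketch (not part of the SDK; it assumes the attributes listed above appear as top-level keys in each JSON object):
import json

with open("trace_data.jsonl") as f:
    for line in f:
        span = json.loads(line)
        # One-line summary per span: its type plus the tool or model it ran
        print(span["span_type"], span.get("tool_name") or span.get("model_name"))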
Async Agents
The instrumentation fully supports async agents via awrap_model_call and
awrap_tool_call. Use agent.ainvoke() instead of agent.invoke() - no additional
configuration is needed:
import asyncio
import langchain.agents
from langchain_openai import ChatOpenAI
from fiddler_langchain import FiddlerLangChainInstrumentor
from fiddler_otel import FiddlerClient
client = FiddlerClient(api_key="...", application_id="...", url="...")
FiddlerLangChainInstrumentor(client=client).instrument()
agent = langchain.agents.create_agent(
model=ChatOpenAI(model="gpt-4o-mini"),
tools=[...],
name="my_agent",
)
async def main():
    result = await agent.ainvoke({"messages": [{"role": "user", "content": "Hello!"}]})
    print(result)

asyncio.run(main())
The instrumentation automatically uses the async lifecycle hooks (awrap_model_call,
awrap_tool_call) when the agent is invoked asynchronously, producing the same span
hierarchy as the sync path.
Error Handling
If an LLM call or tool call raises an exception, the instrumentation catches it, marks the
corresponding span with StatusCode.ERROR, re-raises the exception so normal error
handling in your application is unaffected, and still cleanly closes the root agent span.
try:
    result = agent.invoke({"messages": [{"role": "user", "content": "Hello!"}]})
except Exception:
    # The failing LLM or tool span is already marked ERROR in Fiddler.
    # The root agent span is also closed - no dangling spans.
    raise
This means:
- Partial traces are never lost: all spans up to the point of failure are recorded
- The failing span carries status_code=ERROR and the exception message
- The root agent span is always closed, regardless of whether the invocation succeeded or failed
Relationship to fiddler-langgraph
| Package | Framework | Instrumentation |
|---|---|---|
| fiddler-langgraph | LangGraph (StateGraph.compile()) | Callback handler |
| fiddler-langchain | LangChain V1 (create_agent) | FiddlerLangChainInstrumentor (auto) or FiddlerAgentMiddleware (manual) |
Download files
File details
Details for the file fiddler_langchain-0.1.1.tar.gz.
File metadata
- Download URL: fiddler_langchain-0.1.1.tar.gz
- Upload date:
- Size: 17.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a3fe383c48fa2a843517cf5a8481edd39ae2c4f736787ec4d15419b764cc7236 |
| MD5 | 75adf83576c7933f2cfc5ab586b61a41 |
| BLAKE2b-256 | b6a047d7f849198506e1e652387fd2849d6797bd5211a55076ba758735d2a85f |
Provenance
The following attestation bundles were made for fiddler_langchain-0.1.1.tar.gz:
Publisher: publish.yml on fiddler-labs/fiddler-sdk
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: fiddler_langchain-0.1.1.tar.gz
- Subject digest: a3fe383c48fa2a843517cf5a8481edd39ae2c4f736787ec4d15419b764cc7236
- Sigstore transparency entry: 1119314849
- Permalink: fiddler-labs/fiddler-sdk@fce10aa077723c689f08f8231b570797252746cf
- Branch / Tag: refs/tags/fiddler-langchain/v0.1.1
- Owner: https://github.com/fiddler-labs
- Access: internal
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: self-hosted
- Publication workflow: publish.yml@fce10aa077723c689f08f8231b570797252746cf
- Trigger Event: workflow_dispatch
File details
Details for the file fiddler_langchain-0.1.1-py3-none-any.whl.
File metadata
- Download URL: fiddler_langchain-0.1.1-py3-none-any.whl
- Upload date:
- Size: 15.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 613654563a9df9c55eea83105388d56c8fb34606a6f94b00998c9a558694f770 |
| MD5 | ce850cc9fad9a7158c2da6de5dd3c040 |
| BLAKE2b-256 | c0f144395d75d3a999e3a443f5ec38309ee077fdbde10b196599de9e919acf1c |
Provenance
The following attestation bundles were made for fiddler_langchain-0.1.1-py3-none-any.whl:
Publisher: publish.yml on fiddler-labs/fiddler-sdk
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: fiddler_langchain-0.1.1-py3-none-any.whl
- Subject digest: 613654563a9df9c55eea83105388d56c8fb34606a6f94b00998c9a558694f770
- Sigstore transparency entry: 1119314852
- Permalink: fiddler-labs/fiddler-sdk@fce10aa077723c689f08f8231b570797252746cf
- Branch / Tag: refs/tags/fiddler-langchain/v0.1.1
- Owner: https://github.com/fiddler-labs
- Access: internal
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: self-hosted
- Publication workflow: publish.yml@fce10aa077723c689f08f8231b570797252746cf
- Trigger Event: workflow_dispatch