
OpenInference CrewAI Instrumentation



Python auto-instrumentation library for LLM agents implemented with CrewAI

Traces from your crews are fully OpenTelemetry-compatible and can be sent to an OpenTelemetry collector for monitoring, such as arize-phoenix.

Installation

pip install openinference-instrumentation-crewai

Quickstart

This quickstart shows you how to instrument your CrewAI application.

Install required packages.

pip install crewai crewai-tools arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp

Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006. You can visit the app via a browser at the same address. (Phoenix does not send data over the internet. It only operates locally on your machine.)

python -m phoenix.server.main serve
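Before sending traces, you may want to confirm the collector is reachable. The helper below is a small stdlib-only sketch (it is not part of Phoenix or this library) that probes the Phoenix address:

```python
import urllib.request
import urllib.error


def phoenix_is_up(url: str = "http://127.0.0.1:6006", timeout: float = 2.0) -> bool:
    """Return True if something responds at the given address without a server error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as "not up".
        return False
```

If this returns False, check that the `python -m phoenix.server.main serve` process is still running before continuing.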

Set up CrewAIInstrumentor to trace your crew and send the traces to Phoenix at the endpoint defined below.

from openinference.instrumentation.crewai import CrewAIInstrumentor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

endpoint = "http://127.0.0.1:6006/v1/traces"
trace_provider = TracerProvider()
trace_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))

CrewAIInstrumentor().instrument(tracer_provider=trace_provider)
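Rather than hard-coding the endpoint, you can make it configurable. A minimal sketch using an environment variable (the variable name here is an illustrative assumption, not something the library requires):

```python
import os

# Hypothetical environment variable for illustration; falls back to the
# local Phoenix default when no override is set.
endpoint = os.environ.get("PHOENIX_COLLECTOR_ENDPOINT", "http://127.0.0.1:6006/v1/traces")
```

This lets the same code point at a local Phoenix in development and a remote collector in production without edits.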

Set up a simple crew to do research.

import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool

os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["SERPER_API_KEY"] = "YOUR_SERPER_API_KEY" 
search_tool = SerperDevTool()

# Define your agents with roles and goals
researcher = Agent(
  role='Senior Research Analyst',
  goal='Uncover cutting-edge developments in AI and data science',
  backstory="""You work at a leading tech think tank.
  Your expertise lies in identifying emerging trends.
  You have a knack for dissecting complex data and presenting actionable insights.""",
  verbose=True,
  allow_delegation=False,
  # You can pass an optional llm attribute specifying which model you want to use.
  # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
  tools=[search_tool]
)
writer = Agent(
  role='Tech Content Strategist',
  goal='Craft compelling content on tech advancements',
  backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
  You transform complex concepts into compelling narratives.""",
  verbose=True,
  allow_delegation=True
)

# Create tasks for your agents
task1 = Task(
  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
  Identify key trends, breakthrough technologies, and potential industry impacts.""",
  expected_output="Full analysis report in bullet points",
  agent=researcher
)

task2 = Task(
  description="""Using the insights provided, develop an engaging blog
  post that highlights the most significant AI advancements.
  Your post should be informative yet accessible, catering to a tech-savvy audience.
  Make it sound cool, avoid complex words so it doesn't sound like AI.""",
  expected_output="Full blog post of at least 4 paragraphs",
  agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
  agents=[researcher, writer],
  tasks=[task1, task2],
  verbose=True,
  process=Process.sequential
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
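As a rough mental model (a toy sketch, not CrewAI internals), Process.sequential runs each task in order and threads the previous task's output forward as context for the next:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToyTask:
    # Each toy task turns the previous output (context) into a new output.
    run: Callable[[str], str]


def kickoff_sequential(tasks: list[ToyTask]) -> str:
    """Run tasks in order, feeding each output into the next task."""
    context = ""
    for task in tasks:
        context = task.run(context)
    return context


research = ToyTask(run=lambda ctx: "key AI trends")
write = ToyTask(run=lambda ctx: f"Blog post based on: {ctx}")
result = kickoff_sequential([research, write])
# result == "Blog post based on: key AI trends"
```

In the real crew above, task1's research report plays the role of the context handed to task2's writer.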

Event Listener Mode

CrewAIInstrumentor().instrument(...) without extra flags is the default wrapper-based integration and remains the recommended path for standard Python CrewAI applications.

Use use_event_listener=True only when CrewAI execution is surfaced through the event bus rather than direct Python method calls, such as AMP / low-code CrewAI usage. See examples/event_listener_crew.py for that setup.

By default, event-listener mode also creates LLM spans from CrewAI's LLMCall* events. That is useful when the listener is your only source of LLM visibility. If you already instrument the underlying LLM client separately, or if you want tests that focus only on crew/agent/tool structure (avoiding provider- and retry-driven variability in LLM span counts), disable them with:

CrewAIInstrumentor().instrument(
    tracer_provider=trace_provider,
    use_event_listener=True,
    create_llm_spans=False,
)
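To illustrate what the flag controls, here is a toy sketch of the listener idea (not CrewAI's real event bus or this library's implementation): handlers receive named events, and LLMCall* events only become spans when the flag is on.

```python
class ToySpanRecorder:
    """Toy illustration of listener-mode span creation, gated by create_llm_spans."""

    def __init__(self, create_llm_spans: bool = True):
        self.create_llm_spans = create_llm_spans
        self.spans: list[str] = []

    def handle(self, event_name: str) -> None:
        # LLMCall* events are skipped when create_llm_spans is disabled;
        # crew/agent/tool events are always recorded.
        if event_name.startswith("LLMCall") and not self.create_llm_spans:
            return
        self.spans.append(event_name)
```

With create_llm_spans=False, crew, agent, and tool spans are unaffected; only the LLM-level spans are suppressed.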


