
Python SDK for developing AI agent evals and observability

Project description

AI agents suck. We’re fixing that.


🐦 Twitter   •   📢 Discord   •   🖇️ AgentOps   •   📙 Documentation

AgentOps 🖇️


AgentOps helps developers build, evaluate, and monitor AI agents, with tools to take an agent from prototype to production.

  • 📊 Replay Analytics and Debugging: step-by-step agent execution graphs
  • 💸 LLM Cost Management: track spend with LLM foundation model providers
  • 🧪 Agent Benchmarking: test your agents against 1,000+ evals
  • 🔐 Compliance and Security: detect common prompt injection and data exfiltration exploits
  • 🤝 Framework Integrations: native integrations with CrewAI, AutoGen, and LangChain

Quick Start ⌨️

pip install agentops

Session replays in 3 lines of code

Initialize the AgentOps client and automatically get analytics on every LLM call.

import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

...
# (optional: record specific functions)
@agentops.record_function('sample function being recorded')
def sample_function(...):
    ...

# End of program
agentops.end_session('Success')
# Woohoo You're done 🎉

All your sessions are available on the AgentOps dashboard. Refer to our API documentation for detailed instructions.

Agent Dashboard
Session Analytics
Session Replays

Integrations 🦾

CrewAI 🛶

Build Crew agents with observability in only 2 lines of code. Simply set an AGENTOPS_API_KEY in your environment, and your crews get automatic monitoring on the AgentOps dashboard.

AgentOps is integrated with CrewAI on a pre-release fork. Install CrewAI with:

pip install git+https://github.com/AgentOps-AI/crewAI.git@main

AutoGen 🤖

With only two lines of code, add full observability and monitoring to AutoGen agents. Set an AGENTOPS_API_KEY in your environment and call agentops.init().

Langchain 🦜🔗

AgentOps works seamlessly with applications built using Langchain. To use the handler, install Langchain as an optional dependency:

Installation
pip install agentops[langchain]

Then import and set up the handler:

import os
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent, AgentType
from agentops.langchain_callback_handler import LangchainCallbackHandler

AGENTOPS_API_KEY = os.environ['AGENTOPS_API_KEY']
OPENAI_API_KEY = os.environ['OPENAI_API_KEY']
handler = LangchainCallbackHandler(api_key=AGENTOPS_API_KEY, tags=['Langchain Example'])

llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY,
                 callbacks=[handler],
                 model='gpt-3.5-turbo')

# Note: `tools` is your list of Langchain tools, defined elsewhere
agent = initialize_agent(tools,
                         llm,
                         agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
                         verbose=True,
                         callbacks=[handler],  # You must pass in a callback handler to record your agent
                         handle_parsing_errors=True)

Check out the Langchain Examples Notebook for more details including Async handlers.

Cohere ⌨️

First-class support for Cohere (>=5.4.0). This is a living integration; should you need any added functionality, please message us on Discord!

Installation
pip install cohere
import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)
co = cohere.Client()

chat = co.chat(
    message="Is it pronounced ceaux-hear or co-hehray?"
)

print(chat)

agentops.end_session('Success')

Streaming is supported as well:

import cohere
import agentops

# Beginning of program's code (i.e. main.py, __init__.py)
agentops.init(<INSERT YOUR API KEY HERE>)

co = cohere.Client()

stream = co.chat_stream(
    message="Write me a haiku about the synergies between Cohere and AgentOps"
)

for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')

agentops.end_session('Success')

LiteLLM

AgentOps provides support for LiteLLM (>=1.3.1), allowing you to call 100+ LLMs using the same input/output format.

Installation
pip install litellm
# Do not use LiteLLM like this
# from litellm import completion
# ...
# response = completion(model="claude-3", messages=messages)

# Use LiteLLM like this
import litellm
...
response = litellm.completion(model="claude-3", messages=messages)
# or
response = await litellm.acompletion(model="claude-3", messages=messages)

LlamaIndex 🦙

(Coming Soon)

Time travel debugging 🔮

(coming soon!)

Agent Arena 🥊

(coming soon!)

Evaluations Roadmap 🧭

| Platform | Dashboard | Evals |
| --- | --- | --- |
| ✅ Python SDK | ✅ Multi-session and cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking | 🔜 Agent scorecards |
| Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |

Debugging Roadmap 🧭

| Performance testing | Environments | LLM testing | Reasoning and execution testing |
| --- | --- | --- | --- |
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | ✅ Honeypot and prompt injection detection (PromptArmor) | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | 🔜 CI/CD integration checks | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |

Why AgentOps? 🤔

Without the right tools, AI agents are slow, expensive, and unreliable. Our mission is to bring your agent from prototype to production. Here's why AgentOps stands out:

  • Comprehensive Observability: Track your AI agents' performance, user interactions, and API usage.
  • Real-Time Monitoring: Get instant insights with session replays, metrics, and live monitoring tools.
  • Cost Control: Monitor and manage your spend on LLM and API calls.
  • Failure Detection: Quickly identify and respond to agent failures and multi-agent interaction issues.
  • Tool Usage Statistics: Understand how your agents utilize external tools with detailed analytics.
  • Session-Wide Metrics: Gain a holistic view of your agents' sessions with comprehensive statistics.

AgentOps is designed to make agent observability, testing, and monitoring easy.

Star History

Check out our growth in the community:


Project details

Download files

Source distribution: agentops-0.2.6.tar.gz (37.9 kB)

Built distribution: agentops-0.2.6-py3-none-any.whl (37.5 kB)
