OpenAI Agents SDK

The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows.

Image of the Agents Tracing UI

Core concepts:

  1. Agents: LLMs configured with instructions, tools, guardrails, and handoffs
  2. Handoffs: A specialized tool call used by the Agents SDK for transferring control between agents
  3. Guardrails: Configurable safety checks for input and output validation
  4. Tracing: Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows

Explore the examples directory to see the SDK in action, and read our documentation for more details.

Notably, our SDK is compatible with any model provider that supports the OpenAI Chat Completions API format.
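Concretely, "Chat Completions API format" means the provider accepts request bodies shaped like the sketch below. The model name and tool schema here are illustrative placeholders, not recommendations:

```python
# A minimal Chat Completions-style request body. Any provider that accepts
# this shape (including the function-calling "tools" field) can serve as a
# backend for the SDK.
request_body = {
    "model": "some-provider-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "What's the weather in Tokyo?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(sorted(request_body))  # ['messages', 'model', 'tools']
```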

Get started

  1. Set up your Python environment:
python -m venv env
source env/bin/activate
  2. Install the Agents SDK:
pip install openai-agents

For voice support, install with the optional voice group: pip install 'openai-agents[voice]' (quoted so that shells like zsh don't expand the brackets).

Hello world example

from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.

(If running this, ensure you set the OPENAI_API_KEY environment variable)

(For Jupyter notebook users, see hello_world_jupyter.py)

Handoffs example

from agents import Agent, Runner
import asyncio

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English.",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)


async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)
    # ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?


if __name__ == "__main__":
    asyncio.run(main())

Functions example

import asyncio

from agents import Agent, Runner, function_tool


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."


agent = Agent(
    name="Hello world",
    instructions="You are a helpful agent.",
    tools=[get_weather],
)


async def main():
    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)
    # The weather in Tokyo is sunny.


if __name__ == "__main__":
    asyncio.run(main())

The agent loop

When you call Runner.run(), we run a loop until we get a final output.

  1. We call the LLM, using the model and settings on the agent, and the message history.
  2. The LLM returns a response, which may include tool calls.
  3. If the response has a final output (see below for more on this), we return it and end the loop.
  4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
  5. We process the tool calls (if any), append the tool response messages, and go back to step 1.

There is a max_turns parameter that you can use to limit the number of times the loop executes.
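The loop above can be sketched in plain Python. Everything here (the dataclasses, the fake model) is illustrative scaffolding, not the SDK's internal types:

```python
from dataclasses import dataclass, field

# Illustrative types only -- these are not the SDK's internals.
@dataclass
class ModelResponse:
    content: str = ""
    tool_calls: list = field(default_factory=list)  # list of (tool_name, kwargs)
    handoff: object = None                          # an Agent, or None

@dataclass
class Agent:
    name: str
    model: object                                   # callable: history -> ModelResponse
    tools: dict = field(default_factory=dict)       # tool name -> callable

def run(agent, user_input, max_turns=10):
    history = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):                      # max_turns caps the loop
        response = agent.model(history)             # steps 1-2: call the LLM
        if response.handoff is not None:            # step 4: switch agents
            agent = response.handoff
            continue
        if not response.tool_calls:                 # step 3: final output
            return response.content
        for name, kwargs in response.tool_calls:    # step 5: run the tools and
            result = agent.tools[name](**kwargs)    # append their responses
            history.append({"role": "tool", "content": result})
    raise RuntimeError("max_turns exceeded")

# Fake model: asks for a tool on the first turn, answers on the second.
def fake_model(history):
    if any(m["role"] == "tool" for m in history):
        return ModelResponse(content="The weather in Tokyo is sunny.")
    return ModelResponse(tool_calls=[("get_weather", {"city": "Tokyo"})])

demo = Agent(name="demo", model=fake_model,
             tools={"get_weather": lambda city: f"{city}: sunny"})
print(run(demo, "What's the weather in Tokyo?"))  # The weather in Tokyo is sunny.
```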

Final output

Final output is the last thing the agent produces in the loop.

  1. If you set an output_type on the agent, the loop ends when the LLM returns structured output of that type. We use structured outputs for this.
  2. If there's no output_type (i.e. plain text responses), the first LLM response without any tool calls or handoffs is considered the final output.

As a result, the mental model for the agent loop is:

  1. If the current agent has an output_type, the loop runs until the agent produces structured output matching that type.
  2. If the current agent does not have an output_type, the loop runs until the current agent produces a message without any tool calls/handoffs.
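That mental model can be written as a single predicate. This is a sketch: the real SDK validates output_type via structured outputs, not an isinstance check:

```python
def is_final_output(content, tool_calls, handoff, output_type=None):
    """Sketch of the stopping condition for one model response."""
    if tool_calls or handoff:
        return False          # there is still work to do: keep looping
    if output_type is None:
        return True           # plain text: first tool/handoff-free reply is final
    return isinstance(content, output_type)  # structured: must match the type

# A tool-free plain-text reply ends the loop...
assert is_final_output("All done!", tool_calls=[], handoff=None)
# ...but not when the agent expects structured output of a different type.
assert not is_final_output("All done!", [], None, output_type=dict)
```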

Common agent patterns

The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows including deterministic flows, iterative loops, and more. See examples in examples/agent_patterns.

Tracing

The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including Logfire, AgentOps, Braintrust, Scorecard, and Keywords AI. For more details about how to customize or disable tracing, see Tracing, which also includes a larger list of external tracing processors.
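To illustrate the shape of a custom tracing destination, here is a hypothetical span collector. The class and method names are invented for this sketch; the SDK's actual processor interface is documented under Tracing:

```python
# Hypothetical span collector -- names invented for illustration, not the
# SDK's real tracing-processor interface.
class CollectingSpanProcessor:
    def __init__(self):
        self.finished_spans = []

    def on_span_end(self, span):
        # A real exporter would forward the span to Logfire, Braintrust, etc.
        self.finished_spans.append(span)

processor = CollectingSpanProcessor()
processor.on_span_end({"name": "agent_run", "duration_ms": 12})
print(len(processor.finished_spans))  # 1
```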

Development (only needed if you need to edit the SDK/examples)

  1. Ensure you have uv installed.
uv --version
  2. Install dependencies
make sync
  3. (After making changes) lint/test
make tests  # run tests
make mypy   # run typechecker
make lint   # run linter

Acknowledgements

We'd like to acknowledge the excellent work of the open-source community.

We're committed to continuing to build the Agents SDK as an open source framework so others in the community can expand on our approach.

Project details

Download files

Download the file for your platform.

Source distribution

openai_agents-0.0.6.tar.gz (671.7 kB)

  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.12.9

Hashes for openai_agents-0.0.6.tar.gz:

  Algorithm    Hash digest
  SHA256       34b7c25f74d6f31e43a12ec7b2de64527714746dd15ca245bfc41dc8e92dbe2b
  MD5          ee828e9cb9382bda540fb9e4836f0cae
  BLAKE2b-256  62d4a3c6763990b808ac5848ed0520c36f5e9b4651b540d6990b763c90d40e10

Built distribution

openai_agents-0.0.6-py3-none-any.whl (98.9 kB, Python 3)

Hashes for openai_agents-0.0.6-py3-none-any.whl:

  Algorithm    Hash digest
  SHA256       b5d6ff2909205ee75e2860114648432d66113afee2dadb199b09b292d892ac7e
  MD5          1043aa30fabf903c0e5c84b716a9457b
  BLAKE2b-256  deb9f62eb52b859b4d0c9004b440e0283800ab2d54aabd6fcf881b3fdc40cff6

Provenance

Attestation bundles were made for both files by the publisher publish.yml on openai/openai-agents-python. Attestation values reflect the state when the release was signed and may no longer be current.
