
An open source framework for voice (and multimodal) assistants


🎙️ Pipecat: Real-Time Voice & Multimodal AI Agents

Pipecat is an open-source Python framework for building real-time voice and multimodal conversational agents. Orchestrate audio and video, AI services, different transports, and conversation pipelines effortlessly—so you can focus on what makes your agent unique.

Want to dive right in? Try the quickstart.
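
To give a feel for how the pieces fit together, here is a rough sketch of a voice bot pipeline. Treat it as illustrative only: exact import paths, service constructor arguments, and transport parameters change between releases, and the Daily/Deepgram/OpenAI/Cartesia choices, environment variable names, and system prompt below are placeholders — follow the quickstart for a version that is known to run.

import asyncio
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.cartesia.tts import CartesiaTTSService
from pipecat.services.deepgram.stt import DeepgramSTTService
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.transports.services.daily import DailyParams, DailyTransport


async def main():
    # Transport: moves audio between the user and the bot (Daily WebRTC here;
    # WebSocket and local transports are drop-in alternatives).
    transport = DailyTransport(
        os.environ["DAILY_ROOM_URL"],
        None,  # room token, if your room requires one
        "Pipecat bot",
        DailyParams(audio_in_enabled=True, audio_out_enabled=True),
    )

    # Any STT/LLM/TTS service from the table below can be swapped in here.
    stt = DeepgramSTTService(api_key=os.environ["DEEPGRAM_API_KEY"])
    llm = OpenAILLMService(api_key=os.environ["OPENAI_API_KEY"])
    tts = CartesiaTTSService(
        api_key=os.environ["CARTESIA_API_KEY"],
        voice_id=os.environ["CARTESIA_VOICE_ID"],
    )

    # The context aggregator maintains the running conversation the LLM sees.
    context = OpenAILLMContext(
        [{"role": "system", "content": "You are a helpful voice assistant."}]
    )
    context_aggregator = llm.create_context_aggregator(context)

    # Frames flow through the processors in order:
    # audio in -> transcription -> LLM response -> speech -> audio out.
    pipeline = Pipeline(
        [
            transport.input(),
            stt,
            context_aggregator.user(),
            llm,
            tts,
            transport.output(),
            context_aggregator.assistant(),
        ]
    )

    await PipelineRunner().run(PipelineTask(pipeline))


if __name__ == "__main__":
    asyncio.run(main())

Running a script of this shape requires the matching optional extras (see the getting started steps below) plus API keys for whichever services you pick.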

🚀 What You Can Build

  • Voice Assistants – natural, streaming conversations with AI
  • AI Companions – coaches, meeting assistants, characters
  • Multimodal Interfaces – voice, video, images, and more
  • Interactive Storytelling – creative tools with generative media
  • Business Agents – customer intake, support bots, guided flows
  • Complex Dialog Systems – design logic with structured conversations

🧭 Looking to build structured conversations? Check out Pipecat Flows for managing complex conversational states and transitions.

🧠 Why Pipecat?

  • Voice-first: Integrates speech recognition, text-to-speech, and conversation handling
  • Pluggable: Supports many AI services and tools
  • Composable Pipelines: Build complex behavior from modular components
  • Real-Time: Ultra-low latency interaction with different transports (e.g. WebSockets or WebRTC)

📱 Client SDKs

You can connect to Pipecat from any platform using our official SDKs:

JavaScript, React, React Native, Swift, Kotlin, and C++

🎬 See it in action

 
 

🧩 Available services

Category Services
Speech-to-Text AssemblyAI, AWS, Azure, Cartesia, Deepgram, Fal Wizper, Gladia, Google, Groq (Whisper), NVIDIA Riva, OpenAI (Whisper), SambaNova (Whisper), Soniox, Speechmatics, Ultravox, Whisper
LLMs Anthropic, AWS, Azure, Cerebras, DeepSeek, Fireworks AI, Gemini, Grok, Groq, Mistral, NVIDIA NIM, Ollama, OpenAI, OpenRouter, Perplexity, Qwen, SambaNova, Together AI
Text-to-Speech Async, AWS, Azure, Cartesia, Deepgram, ElevenLabs, Fish, Google, Groq, Inworld, LMNT, MiniMax, Neuphonic, NVIDIA Riva, OpenAI, Piper, PlayHT, Rime, Sarvam, XTTS
Speech-to-Speech AWS Nova Sonic, Gemini Multimodal Live, OpenAI Realtime
Transport Daily (WebRTC), FastAPI Websocket, SmallWebRTCTransport, WebSocket Server, Local
Serializers Plivo, Twilio, Telnyx
Video HeyGen, Tavus, Simli
Memory mem0
Vision & Image fal, Google Imagen, Moondream
Audio Processing Silero VAD, Krisp, Koala, ai-coustics
Analytics & Metrics OpenTelemetry, Sentry

📚 View full services documentation →

⚡ Getting started

You can get started with Pipecat running on your local machine, then move your agent processes to the cloud when you're ready.

  1. Install uv

    curl -LsSf https://astral.sh/uv/install.sh | sh
    

    Need help? Refer to the uv install documentation.

  2. Install the module

    # For new projects
    uv init my-pipecat-app
    cd my-pipecat-app
    uv add pipecat-ai
    
    # Or for existing projects
    uv add pipecat-ai
    
  3. Set up your environment

    cp env.example .env
    
  4. To keep things lightweight, only the core framework is included by default. If you need support for third-party AI services, you can add the necessary dependencies with:

    uv add "pipecat-ai[option,...]"
    

Using pip? You can still use pip install pipecat-ai and pip install "pipecat-ai[option,...]" to get set up.

🧪 Code examples

  • Foundational — small snippets that build on each other, introducing one or two concepts at a time
  • Example apps — complete applications that you can use as starting points for development

🛠️ Contributing to the framework

Prerequisites

  • Minimum Python version: 3.10
  • Recommended Python version: 3.12

Setup Steps

  1. Clone the repository and navigate to it:

    git clone https://github.com/pipecat-ai/pipecat.git
    cd pipecat
    
  2. Install development and testing dependencies:

    uv sync --group dev --all-extras \
      --no-extra gstreamer \
      --no-extra krisp \
      --no-extra local \
      --no-extra ultravox # (ultravox not fully supported on macOS)
    
  3. Install the git pre-commit hooks:

    uv run pre-commit install
    

Note: Some extras (local, gstreamer) require system dependencies. See the documentation if you encounter build errors.

Running tests

To run all tests, from the root directory:

uv run pytest

Run a specific test suite:

uv run pytest tests/test_name.py

Setting up your editor

This project uses strict PEP 8 formatting via Ruff.
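
If you prefer the command line, uv run ruff format and uv run ruff check --select I --fix (run from the dev environment installed above) should apply the same formatting and import sorting that the editor integrations below use.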

Emacs

You can use use-package to install the lazy-ruff package and configure the ruff arguments:

(use-package lazy-ruff
  :ensure t
  :hook ((python-mode . lazy-ruff-mode))
  :config
  (setq lazy-ruff-format-command "ruff format")
  (setq lazy-ruff-check-command "ruff check --select I"))

Since ruff was installed in the virtual environment created during setup, you can use pyvenv-auto to load that environment automatically inside Emacs:

(use-package pyvenv-auto
  :ensure t
  :defer t
  :hook ((python-mode . pyvenv-auto-run)))

Visual Studio Code

Install the Ruff extension, then edit your user settings (Ctrl-Shift-P, "Open User Settings (JSON)") to set Ruff as the default Python formatter and enable formatting on save:

"[python]": {
    "editor.defaultFormatter": "charliermarsh.ruff",
    "editor.formatOnSave": true
}

PyCharm

ruff was installed in the virtual environment described above. To enable autoformatting on save, go to File -> Settings -> Tools -> File Watchers and add a new watcher with the following settings:

  1. Name: Ruff formatter
  2. File type: Python
  3. Working directory: $ContentRoot$
  4. Arguments: format $FilePath$
  5. Program: $PyInterpreterDirectory$/ruff

🤝 Contributing

We welcome contributions from the community! Whether you're fixing bugs, improving documentation, or adding new features, here's how you can help:

  • Found a bug? Open an issue
  • Have a feature idea? Start a discussion
  • Want to contribute code? Check our CONTRIBUTING.md guide
  • Documentation improvements? Docs PRs are always welcome

Before submitting a pull request, please check existing issues and PRs to avoid duplicates.

We aim to review all contributions promptly and provide constructive feedback to help get your changes merged.

🛟 Getting help

➡️ Join our Discord

➡️ Read the docs

➡️ Reach us on X


