
vocode

Build voice-based LLM apps in minutes

Vocode is an open source library that makes it easy to build voice-based LLM apps. Using Vocode, you can build real-time streaming conversations with LLMs and deploy them to phone calls, Zoom meetings, and more. You can also build personal assistants or apps like voice-based chess. Vocode provides easy abstractions and integrations so that everything you need is in a single library.

We're actively looking for community maintainers, so please reach out if interested!

⭐️ Features

Check out our React SDK!

🫂 Contribution and Roadmap

We're an open source project and are extremely open to contributors adding new features, integrations, and documentation! Please don't hesitate to reach out and get started building with us.

For more information on contributing, see our Contribution Guide.

And check out our Roadmap.

We'd love to talk to you on Discord about new ideas and contributing!

🚀 Quickstart (Self-hosted)

pip install 'vocode'

import asyncio
import logging
import signal
from vocode.streaming.streaming_conversation import StreamingConversation
from vocode.helpers import create_streaming_microphone_input_and_speaker_output
from vocode.streaming.transcriber import *
from vocode.streaming.agent import *
from vocode.streaming.synthesizer import *
from vocode.streaming.models.transcriber import *
from vocode.streaming.models.agent import *
from vocode.streaming.models.synthesizer import *
from vocode.streaming.models.message import BaseMessage
import vocode

# these can also be set as environment variables
vocode.setenv(
    OPENAI_API_KEY="<your OpenAI key>",
    DEEPGRAM_API_KEY="<your Deepgram key>",
    AZURE_SPEECH_KEY="<your Azure key>",
    AZURE_SPEECH_REGION="<your Azure region>",
)


logging.basicConfig()
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)


async def main():
    (
        microphone_input,
        speaker_output,
    ) = create_streaming_microphone_input_and_speaker_output(
        use_default_devices=False,
        logger=logger,
    )

    # wire the transcriber, agent, and synthesizer into a single streaming conversation
    conversation = StreamingConversation(
        output_device=speaker_output,
        transcriber=DeepgramTranscriber(
            DeepgramTranscriberConfig.from_input_device(
                microphone_input,
                endpointing_config=PunctuationEndpointingConfig(),
            )
        ),
        agent=ChatGPTAgent(
            ChatGPTAgentConfig(
                initial_message=BaseMessage(text="What up"),
                prompt_preamble="""The AI is having a pleasant conversation about life""",
            )
        ),
        synthesizer=AzureSynthesizer(
            AzureSynthesizerConfig.from_output_device(speaker_output)
        ),
        logger=logger,
    )
    await conversation.start()
    print("Conversation started, press Ctrl+C to end")
    signal.signal(signal.SIGINT, lambda _0, _1: conversation.terminate())
    # stream microphone audio into the conversation until it is terminated (Ctrl+C)
    while conversation.is_active():
        chunk = await microphone_input.get_audio()
        conversation.receive_audio(chunk)


if __name__ == "__main__":
    asyncio.run(main())
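
As the comment above the vocode.setenv(...) call notes, the keys can also be supplied as environment variables. A minimal sketch, assuming the variable names match the keyword arguments passed to setenv and that vocode falls back to the environment for any key not set explicitly:

import os

# set the same keys in the environment before building the conversation
# (assumption: vocode reads os.environ for keys not passed to vocode.setenv)
os.environ["OPENAI_API_KEY"] = "<your OpenAI key>"
os.environ["DEEPGRAM_API_KEY"] = "<your Deepgram key>"
os.environ["AZURE_SPEECH_KEY"] = "<your Azure key>"
os.environ["AZURE_SPEECH_REGION"] = "<your Azure region>"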

📞 Phone call quickstarts
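
The phone call quickstarts are covered in the hosted documentation (see docs.vocode.dev below).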

🌱 Documentation

docs.vocode.dev

Recording audio input (human speech):

    ...
    await conversation.start()

    from vocode.streaming.pubsub.base_pubsub import AudioFileWriterSubscriber

    subscriber = AudioFileWriterSubscriber(
        "AudioFileWriterSubscriber", sampling_rate=44100 # 8000 for Twilio
    )
    from vocode import pubsub

    pubsub.subscribe(subscriber=subscriber, topic="human_audio_streams")
    audio_sub_task = asyncio.create_task(subscriber._run_loop())

    def handle_sigint(_signum, _frame):
        # register a single SIGINT handler: stacking multiple signal.signal()
        # calls would overwrite each other, leaving only the last one active
        conversation.terminate()
        subscriber.terminate()
        audio_sub_task.cancel()

    signal.signal(signal.SIGINT, handle_sigint)
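
As the class and topic names suggest (an inference from the snippet, not verified against the library internals), the subscriber listens on the human_audio_streams topic and writes the incoming human speech to an audio file at the configured sampling rate; the SIGINT handler then shuts down the conversation, the subscriber, and the writer task together.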
