Project description

Moonshine Voice Python Package

A fast, accurate, on-device AI library for building interactive voice applications. Join our Discord to get help and support.

Installation

pip install moonshine-voice

Quick Start

# Listens to the microphone, logging to the console when there are 
# speech updates.
python -m moonshine_voice.mic_transcriber

Example

"""Transcribes live audio from the default microphone"""
import time
from moonshine_voice import (
    MicTranscriber,
    TranscriptEventListener,
    get_model_for_language,
)

# This will download the model files and cache them.
model_path, model_arch = get_model_for_language("en")

# MicTranscriber handles connecting to the microphone, capturing
# the audio data, detecting voice activity, breaking the speech
# up into segments, transcribing the speech, and sending events
# as the results are updated over time.
mic_transcriber = MicTranscriber(
    model_path=model_path, model_arch=model_arch)

# We use an event-driven interface to respond in real time
# as speech is detected.
class TestListener(TranscriptEventListener):
    def on_line_started(self, event):
        print(f"Line started: {event.line.text}")

    def on_line_text_changed(self, event):
        print(f"Line text changed: {event.line.text}")

    def on_line_completed(self, event):
        print(f"Line completed: {event.line.text}")

listener = TestListener()
mic_transcriber.add_listener(listener)
mic_transcriber.start()
print("Listening to the microphone, press Ctrl+C to stop...")

try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    mic_transcriber.stop()
    mic_transcriber.close()

Other Sources

If you're capturing audio from a different source, you can supply it directly to a Transcriber.

"""Transcribes live audio from an arbitrary audio source."""
from moonshine_voice import (
    Transcriber,
    TranscriptEventListener,
    get_model_for_language,
    load_wav_file,
    get_assets_path,
)
import os
from typing import Iterator, Tuple


def audio_chunk_generator(
    wav_file_path: str, chunk_duration: float = 0.1
) -> Iterator[Tuple[list, int]]:
    """
    Example function that loads a WAV file and yields audio chunks.

    This demonstrates how you can integrate your own proprietary
    audio data capture sources. Replace this function with your own
    implementation that yields (audio_chunk, sample_rate) tuples.

    Args:
        wav_file_path: Path to the WAV file to load
        chunk_duration: Duration of each chunk in seconds

    Yields:
        Tuple of (audio_chunk, sample_rate) where:
        - audio_chunk: List of float audio samples
        - sample_rate: Sample rate in Hz
    """
    audio_data, sample_rate = load_wav_file(wav_file_path)
    chunk_size = int(chunk_duration * sample_rate)

    for i in range(0, len(audio_data), chunk_size):
        chunk = audio_data[i: i + chunk_size]
        yield (chunk, sample_rate)


model_path, model_arch = get_model_for_language("en")

transcriber = Transcriber(
    model_path=model_path, model_arch=model_arch)

stream = transcriber.create_stream(update_interval=0.5)
stream.start()


class TestListener(TranscriptEventListener):
    def on_line_started(self, event):
        print(f"{event.line.start_time:.2f}s: Line started: {event.line.text}")

    def on_line_text_changed(self, event):
        print(
            f"{event.line.start_time:.2f}s: Line text changed: {event.line.text}")

    def on_line_completed(self, event):
        print(f"{event.line.start_time:.2f}s: Line completed: {event.line.text}")


listener = TestListener()
stream.add_listener(listener)

# Feed audio chunks from the generator into the stream.
wav_file_path = os.path.join(get_assets_path(), "two_cities.wav")
for chunk, sample_rate in audio_chunk_generator(wav_file_path):
    stream.add_audio(chunk, sample_rate)

stream.stop()
stream.close()

Voice Commands

We also provide voice command recognition through the IntentRecognizer module. It consumes completed transcript lines from a MicTranscriber and invokes the callback functions that match your registered intents. Because it relies on an embedding model, start with the helper function that downloads and loads one:

import sys
import time

from moonshine_voice import (
    MicTranscriber,
    IntentRecognizer,
    get_embedding_model,
    get_model_for_language,
)

# Download and load the embedding model for intent recognition
embedding_model_path, embedding_model_arch = get_embedding_model()

Next, create a recognizer and register your intent callbacks:

intent_recognizer = IntentRecognizer(
    model_path=embedding_model_path,
    model_arch=embedding_model_arch
)

def on_lights_on(trigger: str, utterance: str, similarity: float):
    """Handler for turning lights on."""
    print(f"\n💡 LIGHTS ON! (matched '{trigger}' with {similarity:.0%} confidence)")

def on_lights_off(trigger: str, utterance: str, similarity: float):
    """Handler for turning lights off."""
    print(f"\n🌑 LIGHTS OFF! (matched '{trigger}' with {similarity:.0%} confidence)")

intent_recognizer.register_intent("turn on the lights", on_lights_on)
intent_recognizer.register_intent("turn off the lights", on_lights_off)

Finally, create a MicTranscriber, connect it to your IntentRecognizer, and start the audio stream:

# Get the transcription model and initialize a MicTranscriber
model_path, model_arch = get_model_for_language("en")
mic_transcriber = MicTranscriber(model_path=model_path, model_arch=model_arch)

# The intent recognizer will process completed transcript lines and invoke trigger handlers
mic_transcriber.add_listener(intent_recognizer)

mic_transcriber.start()
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    print("\n\nStopping...", file=sys.stderr)
finally:
    intent_recognizer.close()
    mic_transcriber.stop()
    mic_transcriber.close()

Multiple Languages

The framework currently supports English, Spanish, Mandarin, Japanese, Korean, Vietnamese, Arabic, and Ukrainian. We are working on wider language support; you can see which languages your version supports by calling supported_languages(). To use a language, request it with get_model_for_language(), passing in the two-letter language code. For example, get_model_for_language("es") will download the Spanish models and return the information you need to create Transcriber objects that use them.
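
As a minimal sketch of working with a non-English model (this assumes supported_languages() is importable from the top-level moonshine_voice package, which the text above implies but does not show):

"""Lists supported languages and loads the Spanish model."""
from moonshine_voice import (
    Transcriber,
    get_model_for_language,
    supported_languages,
)

# Print the two-letter codes supported by this version of the package.
# (Assumption: supported_languages() is exported at the package level.)
print("Supported languages:", supported_languages())

# Download and cache the Spanish model files, then construct a
# Transcriber with the returned path and architecture.
model_path, model_arch = get_model_for_language("es")
transcriber = Transcriber(model_path=model_path, model_arch=model_arch)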

Documentation

For more information, see the main Moonshine Voice documentation.

License

The code and English-language models are released under the MIT License - see the main project repository for details. The models used for other languages are released under the Moonshine Community License.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

moonshine_voice-0.0.42-py3-none-win_amd64.whl (55.5 MB)

Uploaded: Python 3, Windows x86-64

moonshine_voice-0.0.42-py3-none-manylinux_2_39_x86_64.whl (84.9 MB)

Uploaded: Python 3, manylinux: glibc 2.39+ x86-64

moonshine_voice-0.0.42-py3-none-manylinux_2_39_aarch64.whl (83.8 MB)

Uploaded: Python 3, manylinux: glibc 2.39+ ARM64

moonshine_voice-0.0.42-py3-none-manylinux_2_31_aarch64.whl (57.6 MB)

Uploaded: Python 3, manylinux: glibc 2.31+ ARM64

File details

Details for the file moonshine_voice-0.0.42-py3-none-win_amd64.whl.

File metadata

File hashes

Hashes for moonshine_voice-0.0.42-py3-none-win_amd64.whl
SHA256: fa022289225e85b7d43c20cdb9a93512f51c006cf4f1ea9bc5c0a2cbd60b11ea
MD5: a0aeb58ca6a8d5f9f224e4f4c82b4af8
BLAKE2b-256: ce92809fe0ff3b7ab60260813639db4eea351fa7c5fe5987b121cf5c5eb22d5a

See more details on using hashes here.

File details

Details for the file moonshine_voice-0.0.42-py3-none-manylinux_2_39_x86_64.whl.

File metadata

File hashes

Hashes for moonshine_voice-0.0.42-py3-none-manylinux_2_39_x86_64.whl
SHA256: ac206e3ca249e5cb15c54ed74765b7562df6a835a07a2b5901b711901d96be20
MD5: 518ef2282c3ef856d8add2d52f24701d
BLAKE2b-256: e7ab2f8203cb209449097139ef941c8440754d280f50c0bd00029fe6feef3aa4

See more details on using hashes here.

File details

Details for the file moonshine_voice-0.0.42-py3-none-manylinux_2_39_aarch64.whl.

File metadata

File hashes

Hashes for moonshine_voice-0.0.42-py3-none-manylinux_2_39_aarch64.whl
SHA256: 401e2f3029d254dbce3c56c3cb2af0a28878ed15faa4f80c41b59e40a8f02d78
MD5: ece80192b065740992e657351e073012
BLAKE2b-256: 897d7e546f8e5d6b16bf120bd06b51fdb31a5496b44164b760e303a573fd1cd5

See more details on using hashes here.

File details

Details for the file moonshine_voice-0.0.42-py3-none-manylinux_2_31_aarch64.whl.

File metadata

File hashes

Hashes for moonshine_voice-0.0.42-py3-none-manylinux_2_31_aarch64.whl
SHA256: 74df7133848400c693d1160e71acee94aead82dcb277d0b82a098870c2b31eff
MD5: a16e6cd316dc520b570a7825d45d1305
BLAKE2b-256: fb07f4de35e23c2005f6239f72ebc2e627c7048d65eaa0ccafd8bfcd737d9192

See more details on using hashes here.
