
Moonshine Voice Python Package

A fast, accurate, on-device AI library for building interactive voice applications. Join our Discord to get help and support.

Installation

pip install moonshine-voice

Quick Start

# Listens to the microphone, logging to the console when there are 
# speech updates.
python -m moonshine_voice.mic_transcriber

Example

"""Transcribes live audio from the default microphone"""
import time
from moonshine_voice import (
    MicTranscriber,
    TranscriptEventListener,
    get_model_for_language,
)

# This will download the model files and cache them.
model_path, model_arch = get_model_for_language("en")

# MicTranscriber handles connecting to the microphone, capturing
# the audio data, detecting voice activity, breaking the speech
# up into segments, transcribing the speech, and sending events
# as the results are updated over time.
mic_transcriber = MicTranscriber(
    model_path=model_path, model_arch=model_arch)

# We use an event-driven interface to respond in real time
# as speech is detected.
class TestListener(TranscriptEventListener):
    def on_line_started(self, event):
        print(f"Line started: {event.line.text}")

    def on_line_text_changed(self, event):
        print(f"Line text changed: {event.line.text}")

    def on_line_completed(self, event):
        print(f"Line completed: {event.line.text}")

listener = TestListener()
mic_transcriber.add_listener(listener)
mic_transcriber.start()
print("Listening to the microphone, press Ctrl+C to stop...")

try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    mic_transcriber.stop()
    mic_transcriber.close()
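The listener pattern above decouples transcription from your application logic. Here is a minimal, library-independent sketch of the same idea; the Dispatcher and Collector names are illustrative, not part of moonshine_voice:

```python
# A minimal event-listener dispatcher, illustrating the pattern the
# MicTranscriber example uses. Names here are illustrative only.

class Listener:
    def on_line_completed(self, text):
        pass

class Dispatcher:
    def __init__(self):
        self.listeners = []

    def add_listener(self, listener):
        self.listeners.append(listener)

    def emit_line_completed(self, text):
        # Notify every registered listener in order.
        for listener in self.listeners:
            listener.on_line_completed(text)

class Collector(Listener):
    """A listener that simply accumulates completed lines."""
    def __init__(self):
        self.lines = []

    def on_line_completed(self, text):
        self.lines.append(text)

dispatcher = Dispatcher()
collector = Collector()
dispatcher.add_listener(collector)
dispatcher.emit_line_completed("hello world")
print(collector.lines)  # ['hello world']
```

Because the transcriber calls listeners as events occur, your handlers should return quickly and hand long-running work off to another thread or queue.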

Other Sources

If you're capturing audio from a different source, you can supply it directly to a transcriber.

"""Transcribes live audio from an arbitrary audio source."""
from moonshine_voice import (
    Transcriber,
    TranscriptEventListener,
    get_model_for_language,
    load_wav_file,
    get_assets_path,
)
import os
from typing import Iterator, Tuple


def audio_chunk_generator(
    wav_file_path: str, chunk_duration: float = 0.1
) -> Iterator[Tuple[list, int]]:
    """
    Example function that loads a WAV file and yields audio chunks.

    This demonstrates how you can integrate your own proprietary
    audio data capture sources. Replace this function with your own
    implementation that yields (audio_chunk, sample_rate) tuples.

    Args:
        wav_file_path: Path to the WAV file to load
        chunk_duration: Duration of each chunk in seconds

    Yields:
        Tuple of (audio_chunk, sample_rate) where:
        - audio_chunk: List of float audio samples
        - sample_rate: Sample rate in Hz
    """
    audio_data, sample_rate = load_wav_file(wav_file_path)
    chunk_size = int(chunk_duration * sample_rate)

    for i in range(0, len(audio_data), chunk_size):
        chunk = audio_data[i: i + chunk_size]
        yield (chunk, sample_rate)


model_path, model_arch = get_model_for_language("en")

transcriber = Transcriber(
    model_path=model_path, model_arch=model_arch)

stream = transcriber.create_stream(update_interval=0.5)
stream.start()


class TestListener(TranscriptEventListener):
    def on_line_started(self, event):
        print(f"{event.line.start_time:.2f}s: Line started: {event.line.text}")

    def on_line_text_changed(self, event):
        print(
            f"{event.line.start_time:.2f}s: Line text changed: {event.line.text}")

    def on_line_completed(self, event):
        print(f"{event.line.start_time:.2f}s: Line completed: {event.line.text}")


listener = TestListener()
stream.add_listener(listener)

# Feed audio chunks from the generator into the stream.
wav_file_path = os.path.join(get_assets_path(), "two_cities.wav")
for chunk, sample_rate in audio_chunk_generator(wav_file_path):
    stream.add_audio(chunk, sample_rate)

stream.stop()
stream.close()
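The chunking arithmetic in audio_chunk_generator can be checked in isolation with synthetic samples; the durations and sample rate below are illustrative:

```python
# Verify chunk sizing with synthetic audio: 0.25 s of a 16 kHz
# signal split into 0.1 s chunks should yield chunks of 1600,
# 1600, and 800 samples (the final chunk is a remainder).
sample_rate = 16000
chunk_duration = 0.1
audio_data = [0.0] * int(0.25 * sample_rate)  # 4000 samples

chunk_size = int(chunk_duration * sample_rate)  # 1600
chunks = [audio_data[i:i + chunk_size]
          for i in range(0, len(audio_data), chunk_size)]

print([len(c) for c in chunks])  # [1600, 1600, 800]
```

The final short chunk is normal: a streaming transcriber accepts variable-length chunks, so there is no need to pad the tail.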

Voice Commands

We also provide voice command recognition through the IntentRecognizer module. It consumes transcribed audio from a MicTranscriber and invokes the callback functions that match your registered intents. Because it relies on an embedding model, a helper function will download one to get you started:

import sys
import time

from moonshine_voice import (
    MicTranscriber,
    IntentRecognizer,
    get_embedding_model,
    get_model_for_language,
)

# Download and load the embedding model for intent recognition
embedding_model_path, embedding_model_arch = get_embedding_model()

Next, create a recognizer and register your intent callbacks:

intent_recognizer = IntentRecognizer(
    model_path=embedding_model_path,
    model_arch=embedding_model_arch
)

def on_lights_on(trigger: str, utterance: str, similarity: float):
    """Handler for turning lights on."""
    print(f"\n💡 LIGHTS ON! (matched '{trigger}' with {similarity:.0%} confidence)")

def on_lights_off(trigger: str, utterance: str, similarity: float):
    """Handler for turning lights off."""
    print(f"\n🌑 LIGHTS OFF! (matched '{trigger}' with {similarity:.0%} confidence)")

intent_recognizer.register_intent("turn on the lights", on_lights_on)
intent_recognizer.register_intent("turn off the lights", on_lights_off)

Finally, create a MicTranscriber, connect it to your IntentRecognizer, and start the audio stream:

# Get the transcription model and initialize a MicTranscriber
model_path, model_arch = get_model_for_language("en")
mic_transcriber = MicTranscriber(model_path=model_path, model_arch=model_arch)

# The intent recognizer will process completed transcript lines and invoke trigger handlers
mic_transcriber.add_listener(intent_recognizer)

mic_transcriber.start()
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    print("\n\nStopping...", file=sys.stderr)
finally:
    intent_recognizer.close()
    mic_transcriber.stop()
    mic_transcriber.close()
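Intent recognizers of this kind typically embed the utterance and each registered trigger phrase, then pick the trigger with the highest cosine similarity. A minimal pure-Python sketch of that matching step follows; the toy embeddings and threshold are illustrative, and this is not the library's actual implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_intent(utterance_vec, trigger_vecs, threshold=0.7):
    """Return the trigger whose embedding is most similar to the
    utterance, or None if nothing clears the threshold."""
    best, best_score = None, threshold
    for trigger, vec in trigger_vecs.items():
        score = cosine_similarity(utterance_vec, vec)
        if score >= best_score:
            best, best_score = trigger, score
    return best

# Toy 3-dimensional "embeddings" standing in for a real model's output.
triggers = {
    "turn on the lights": [0.9, 0.1, 0.0],
    "turn off the lights": [0.1, 0.9, 0.0],
}
utterance = [0.85, 0.2, 0.05]  # pretend embedding of a spoken phrase
print(best_intent(utterance, triggers))  # turn on the lights
```

The threshold keeps unrelated speech from triggering the nearest intent by default; tune it against real utterances for your commands.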

Multiple Languages

The framework currently supports English, Spanish, Mandarin, Japanese, Korean, Vietnamese, Arabic, and Ukrainian. We are working on wider language support; you can see which languages your installed version supports by calling supported_languages(). To use a language, request it with get_model_for_language(), passing the two-letter language code. For example, get_model_for_language("es") downloads the Spanish models and returns the information you need to create Transcriber objects with them.
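The two-letter codes follow ISO 639-1. For quick reference, the codes for the languages listed above are collected below; since support may change between releases, treat supported_languages() in your installed version as authoritative:

```python
# ISO 639-1 codes for the languages currently listed as supported.
LANGUAGE_CODES = {
    "English": "en",
    "Spanish": "es",
    "Mandarin": "zh",
    "Japanese": "ja",
    "Korean": "ko",
    "Vietnamese": "vi",
    "Arabic": "ar",
    "Ukrainian": "uk",
}

print(LANGUAGE_CODES["Spanish"])  # es
```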

Documentation

For more information, see the main Moonshine Voice documentation.

License

The code and English-language models are released under the MIT License - see the main project repository for details. The models used for other languages are released under the Moonshine Community License.


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

moonshine_voice-0.0.49-py3-none-win_amd64.whl (55.5 MB): Python 3, Windows x86-64

moonshine_voice-0.0.49-py3-none-manylinux_2_34_x86_64.whl (84.9 MB): Python 3, manylinux (glibc 2.34+), x86-64

moonshine_voice-0.0.49-py3-none-manylinux_2_34_aarch64.whl (83.9 MB): Python 3, manylinux (glibc 2.34+), ARM64

moonshine_voice-0.0.49-py3-none-manylinux_2_31_aarch64.manylinux_2_34_aarch64.whl (57.7 MB): Python 3, manylinux (glibc 2.31+/2.34+), ARM64

moonshine_voice-0.0.49-py3-none-macosx_15_0_universal2.whl (76.8 MB): Python 3, macOS 15.0+ universal2 (ARM64, x86-64)
