
Moonshine Voice Python Package

A fast, accurate, on-device AI library for building interactive voice applications. Join our Discord to get help and support.

Installation

pip install moonshine-voice

Quick Start

# Listens to the microphone, logging to the console when there are 
# speech updates.
python -m moonshine_voice.mic_transcriber

Example

"""Transcribes live audio from the default microphone"""
import time
from moonshine_voice import (
    MicTranscriber,
    TranscriptEventListener,
    get_model_for_language,
)

# This will download the model files and cache them.
model_path, model_arch = get_model_for_language("en")

# MicTranscriber handles connecting to the microphone, capturing
# the audio data, detecting voice activity, breaking the speech
# up into segments, transcribing the speech, and sending events
# as the results are updated over time.
mic_transcriber = MicTranscriber(
    model_path=model_path, model_arch=model_arch)

# We use an event-driven interface to respond in real time
# as speech is detected.
class TestListener(TranscriptEventListener):
    def on_line_started(self, event):
        print(f"Line started: {event.line.text}")

    def on_line_text_changed(self, event):
        print(f"Line text changed: {event.line.text}")

    def on_line_completed(self, event):
        print(f"Line completed: {event.line.text}")

listener = TestListener()
mic_transcriber.add_listener(listener)
mic_transcriber.start()
print("Listening to the microphone, press Ctrl+C to stop...")

try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    mic_transcriber.stop()
    mic_transcriber.close()

Other Sources

If you're capturing audio from a different source, you can supply it directly to a Transcriber.

"""Transcribes live audio from an arbitrary audio source."""
from moonshine_voice import (
    Transcriber,
    TranscriptEventListener,
    get_model_for_language,
    load_wav_file,
    get_assets_path,
)
import os
from typing import Iterator, Tuple


def audio_chunk_generator(
    wav_file_path: str, chunk_duration: float = 0.1
) -> Iterator[Tuple[list, int]]:
    """
    Example function that loads a WAV file and yields audio chunks.

    This demonstrates how you can integrate your own proprietary
    audio data capture sources. Replace this function with your own
    implementation that yields (audio_chunk, sample_rate) tuples.

    Args:
        wav_file_path: Path to the WAV file to load
        chunk_duration: Duration of each chunk in seconds

    Yields:
        Tuple of (audio_chunk, sample_rate) where:
        - audio_chunk: List of float audio samples
        - sample_rate: Sample rate in Hz
    """
    audio_data, sample_rate = load_wav_file(wav_file_path)
    chunk_size = int(chunk_duration * sample_rate)

    for i in range(0, len(audio_data), chunk_size):
        chunk = audio_data[i: i + chunk_size]
        yield (chunk, sample_rate)


model_path, model_arch = get_model_for_language("en")

transcriber = Transcriber(
    model_path=model_path, model_arch=model_arch)

stream = transcriber.create_stream(update_interval=0.5)
stream.start()


class TestListener(TranscriptEventListener):
    def on_line_started(self, event):
        print(f"{event.line.start_time:.2f}s: Line started: {event.line.text}")

    def on_line_text_changed(self, event):
        print(
            f"{event.line.start_time:.2f}s: Line text changed: {event.line.text}")

    def on_line_completed(self, event):
        print(f"{event.line.start_time:.2f}s: Line completed: {event.line.text}")


listener = TestListener()
stream.add_listener(listener)

# Feed audio chunks from the generator into the stream.
wav_file_path = os.path.join(get_assets_path(), "two_cities.wav")
for chunk, sample_rate in audio_chunk_generator(wav_file_path):
    stream.add_audio(chunk, sample_rate)

stream.stop()
stream.close()
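The chunking arithmetic in audio_chunk_generator can be sketched without the library. This pure-Python version (the chunk_samples helper name and the 16 kHz rate are illustrative, not part of the moonshine_voice API) shows how chunk_duration and sample_rate determine chunk sizes, and that the final chunk may be shorter:

```python
def chunk_samples(audio, sample_rate, chunk_duration=0.1):
    """Split a list of samples into fixed-duration chunks.

    The final chunk may be shorter when the audio length is not an
    exact multiple of the chunk size, mirroring audio_chunk_generator.
    """
    chunk_size = int(chunk_duration * sample_rate)
    return [audio[i:i + chunk_size] for i in range(0, len(audio), chunk_size)]

# One second of silence at 16 kHz: ten 0.1 s chunks of 1600 samples each.
chunks = chunk_samples([0.0] * 16000, 16000)
print(len(chunks), len(chunks[0]))         # → 10 1600

# 0.25 s of audio yields two full chunks plus an 800-sample tail.
print([len(c) for c in chunk_samples([0.0] * 4000, 16000)])  # → [1600, 1600, 800]
```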

Voice Commands

We also provide voice command recognition through the IntentRecognizer module. It consumes transcribed audio from a MicTranscriber and invokes the callback functions that match your registered intents. Because it relies on an embedding model, there's a helper function to download one and get started:

import sys
import time

from moonshine_voice import (
    MicTranscriber,
    IntentRecognizer,
    get_embedding_model,
    get_model_for_language,
)

# Download and load the embedding model for intent recognition
embedding_model_path, embedding_model_arch = get_embedding_model()

Next, create a recognizer and register your intent callbacks:

intent_recognizer = IntentRecognizer(
    model_path=embedding_model_path,
    model_arch=embedding_model_arch
)

def on_lights_on(trigger: str, utterance: str, similarity: float):
    """Handler for turning lights on."""
    print(f"\n💡 LIGHTS ON! (matched '{trigger}' with {similarity:.0%} confidence)")

def on_lights_off(trigger: str, utterance: str, similarity: float):
    """Handler for turning lights off."""
    print(f"\n🌑 LIGHTS OFF! (matched '{trigger}' with {similarity:.0%} confidence)")

intent_recognizer.register_intent("turn on the lights", on_lights_on)
intent_recognizer.register_intent("turn off the lights", on_lights_off)

Finally, create a MicTranscriber, connect it to your IntentRecognizer, and start the audio stream:

# Get the transcription model and initialize a MicTranscriber
model_path, model_arch = get_model_for_language("en")
mic_transcriber = MicTranscriber(model_path=model_path, model_arch=model_arch)

# The intent recognizer will process completed transcript lines and invoke trigger handlers
mic_transcriber.add_listener(intent_recognizer)

mic_transcriber.start()
try:
    while True:
        time.sleep(0.1)
except KeyboardInterrupt:
    print("\n\nStopping...", file=sys.stderr)
finally:
    intent_recognizer.close()
    mic_transcriber.stop()
    mic_transcriber.close()
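The similarity value passed to each handler comes from comparing embedding vectors. The exact scoring IntentRecognizer uses isn't documented here, but cosine similarity is the standard choice for this kind of matching; a minimal sketch (the function and the toy 3-dimensional vectors are illustrative, not part of the moonshine_voice API):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": similar utterances point in similar directions.
trigger = [0.9, 0.1, 0.0]
close_utterance = [0.8, 0.2, 0.1]
unrelated = [0.0, 0.1, 0.9]

print(f"{cosine_similarity(trigger, close_utterance):.2f}")  # high, near 1.0
print(f"{cosine_similarity(trigger, unrelated):.2f}")        # low, near 0.0
```

A real recognizer would embed each utterance with the embedding model, score it against every registered trigger phrase, and fire the best match above some threshold.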

Multiple Languages

The framework currently supports English, Spanish, Mandarin, Japanese, Korean, Vietnamese, Arabic, and Ukrainian. We are working on wider language support, and you can see which languages your installed version supports by calling supported_languages(). To use a language, request it with get_model_for_language(), passing in the two-letter language code. For example, get_model_for_language("es") will download the Spanish models and return the information you need to create Transcriber objects that use them.
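A small sketch of validating a language code before requesting a model. The hard-coded set below mirrors the languages listed above under their usual ISO 639-1 codes, but the exact codes are an assumption here; in a real program, query supported_languages() at runtime instead (the validate_language helper is illustrative, not part of the moonshine_voice API):

```python
# Two-letter codes for the languages listed above; in practice, use
# moonshine_voice.supported_languages() rather than hard-coding this set.
SUPPORTED = {"en", "es", "zh", "ja", "ko", "vi", "ar", "uk"}

def validate_language(code: str) -> str:
    """Return a normalized two-letter code, or raise for unsupported ones."""
    code = code.strip().lower()
    if code not in SUPPORTED:
        raise ValueError(
            f"Unsupported language {code!r}; choose one of {sorted(SUPPORTED)}")
    return code

print(validate_language("ES"))  # → es
```

The validated code can then be passed straight to get_model_for_language().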

Documentation

For more information, see the main Moonshine Voice documentation.

License

The code and English-language models are released under the MIT License - see the main project repository for details. The models used for other languages are released under the Moonshine Community License.
