SONATA: SOund and Narrative Advanced Transcription Assistant

Project description

SONATA 🎵🔊

SOund and Narrative Advanced Transcription Assistant

SONATA (SOund and Narrative Advanced Transcription Assistant) is an advanced ASR system that captures human expression beyond plain speech, including emotive sounds and non-verbal cues.

✨ Features

  • 🎙️ High-accuracy speech-to-text transcription using WhisperX
  • 😀 Recognition of 523+ emotive sounds and non-verbal cues
  • 🌍 Multi-language support with 10 languages
  • 👥 SOTA speaker diarization using Silero VAD and WavLM embeddings
  • ⏱️ Rich timestamp information at the word level
  • 🔄 Audio preprocessing capabilities

📚 See detailed features documentation

🚀 Installation

Install the package from PyPI:

pip install sonata-asr

Or install from source:

git clone https://github.com/hwk06023/SONATA.git
cd SONATA
pip install -e .

📖 Quick Start

Basic Transcription

from sonata.core.transcriber import IntegratedTranscriber

# Initialize the transcriber
transcriber = IntegratedTranscriber(asr_model="large-v3", device="cpu")

# Transcribe an audio file
result = transcriber.process_audio("path/to/audio.wav", language="en")
print(result["integrated_transcript"]["plain_text"])
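The returned result can be persisted for later use. A minimal sketch of doing so, assuming only the `integrated_transcript.plain_text` key shown above (the `result` dict here is a stand-in for the real output of `process_audio`):

```python
import json

# Stand-in for the dict returned by process_audio(); only the
# "integrated_transcript" -> "plain_text" path is documented above.
result = {"integrated_transcript": {"plain_text": "Hello [laughter] world."}}

# Save the full result as JSON for downstream tooling.
with open("transcript.json", "w", encoding="utf-8") as f:
    json.dump(result, f, ensure_ascii=False, indent=2)

# Save just the plain-text transcript.
with open("transcript.txt", "w", encoding="utf-8") as f:
    f.write(result["integrated_transcript"]["plain_text"])
```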

CLI Usage

# Basic usage
sonata-asr path/to/audio.wav

# With speaker diarization
sonata-asr path/to/audio.wav --diarize

# Set number of speakers if known
sonata-asr path/to/audio.wav --diarize --num-speakers 3

Common CLI Options:

General:
  -o, --output FILE           Save transcript to specified JSON file
  -l, --language LANG         Language code (en, ko, zh, ja, fr, de, es, it, pt, ru)
  -m, --model NAME            WhisperX model size (tiny, small, medium, large-v3, etc.)
  -d, --device DEVICE         Device to run models on (cpu, cuda)
  --text-output               Save transcript to text file (defaults to input_name.txt)
  --preprocess                Preprocess audio (convert format and trim silence)

Diarization:
  --diarize                   Enable SOTA speaker diarization using Silero VAD and WavLM
  --num-speakers NUM          Set exact number of speakers (optional)

Audio Events:
  --threshold VALUE           Threshold for audio event detection (0.0-1.0)
  --custom-thresholds FILE    Path to JSON file with custom audio event thresholds
  --deep-detect               Enable multi-scale audio event detection for better accuracy
  --deep-detect-scales NUM    Number of scales for deep detection (1-3, default: 3)
  --deep-detect-window-sizes  Custom window sizes for deep detection (comma-separated)
  --deep-detect-hop-sizes     Custom hop sizes for deep detection (comma-separated)
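The `--custom-thresholds` flag expects a JSON file mapping event names to per-event detection thresholds. A hypothetical file could be generated like this (the event names below are illustrative, not taken from SONATA's actual label set):

```python
import json

# Hypothetical per-event thresholds; higher values make detection stricter.
# Event names are illustrative -- consult SONATA's label set for the real
# identifiers.
thresholds = {
    "laughter": 0.35,
    "applause": 0.50,
    "music": 0.60,
}

with open("thresholds.json", "w", encoding="utf-8") as f:
    json.dump(thresholds, f, indent=2)
```

The file is then passed on the command line: `sonata-asr path/to/audio.wav --custom-thresholds thresholds.json`.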

📚 See full usage documentation
⌨️ See complete CLI documentation

🗣️ Supported Languages

SONATA supports 10 languages: English, Korean, Chinese, Japanese, French, German, Spanish, Italian, Portuguese, and Russian.

🌐 See languages documentation

🔊 Audio Event Detection

SONATA can detect over 500 different audio events, from laughter and applause to ambient sounds and music. The customizable event detection thresholds allow you to fine-tune sensitivity for specific audio events to match your unique use cases, such as podcast analysis, meeting transcription, or nature recording analysis.
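Per-event thresholding amounts to confidence filtering over detections. A toy sketch of the idea, assuming detections arrive as `(label, score)` pairs (this tuple format is illustrative, not SONATA's actual output schema):

```python
def filter_events(events, default_threshold=0.3, custom_thresholds=None):
    """Keep (label, score) pairs whose score meets the per-label
    threshold, falling back to a default threshold otherwise."""
    custom_thresholds = custom_thresholds or {}
    kept = []
    for label, score in events:
        if score >= custom_thresholds.get(label, default_threshold):
            kept.append((label, score))
    return kept

# Illustrative detections, not real SONATA output.
events = [("laughter", 0.8), ("applause", 0.2), ("music", 0.5)]
print(filter_events(events, custom_thresholds={"music": 0.6}))
# -> [('laughter', 0.8)]
```

Raising the threshold for noisy labels (here "music") suppresses spurious hits without affecting other events.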

🎵 See audio events documentation

👥 Speaker Diarization

SONATA provides state-of-the-art speaker diarization to identify and separate different speakers in recordings. The system uses Silero VAD for speech detection and WavLM embeddings for speaker identification, making it ideal for transcribing multi-speaker content like meetings, interviews, and podcasts.
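The VAD-then-embed pipeline ultimately clusters per-segment speaker embeddings. A self-contained toy version using cosine similarity and greedy assignment (real WavLM embeddings are high-dimensional and SONATA's actual clustering may differ; the 2-D vectors and the 0.9 threshold here are illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def greedy_diarize(embeddings, threshold=0.9):
    """Assign each segment embedding to the most similar existing
    speaker (represented by that speaker's first embedding) if the
    similarity clears the threshold; otherwise start a new speaker."""
    references, labels = [], []
    for emb in embeddings:
        best, best_sim = None, threshold
        for idx, ref in enumerate(references):
            sim = cosine(emb, ref)
            if sim >= best_sim:
                best, best_sim = idx, sim
        if best is None:
            references.append(list(emb))
            labels.append(len(references) - 1)
        else:
            labels.append(best)
    return labels

# Two toy "speakers": embeddings pointing in two distinct directions.
segments = [(1.0, 0.0), (0.98, 0.05), (0.0, 1.0), (0.05, 0.99)]
print(greedy_diarize(segments))  # -> [0, 0, 1, 1]
```

When `--num-speakers` is known in advance, a real system can replace the greedy threshold with a fixed-cluster method such as k-means or agglomerative clustering.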

🎙️ See speaker diarization documentation

🚀 Next Steps

  • 🧠 Advanced ASR model diversity
  • 😢 Improved emotive detection
  • 🔊 Better speaker diarization
  • ⚡ Performance optimization
  • 🛠️ Fix parallel processing issues in deep detection mode for improved reliability

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📝 See contribution guidelines

📄 License

This project is licensed under the GNU General Public License v3.0.

🙏 Acknowledgements

Download files

Source Distribution

sonata_asr-0.1.0.tar.gz (74.6 kB)

Built Distribution

sonata_asr-0.1.0-py3-none-any.whl (91.6 kB)

File details

Details for the file sonata_asr-0.1.0.tar.gz.

File metadata

  • Download URL: sonata_asr-0.1.0.tar.gz
  • Upload date:
  • Size: 74.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for sonata_asr-0.1.0.tar.gz
Algorithm Hash digest
SHA256 accaa552ca09dc7cfd01ca5f15c2eefd965a6b848ec467bdf9cbaf9df4dfea8b
MD5 7fb9dbed631fdd93ecdf4b7ddd827111
BLAKE2b-256 2bfcd764c8e72db8ead198e7deb33377c609657cace875df2450b789457bade5

File details

Details for the file sonata_asr-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: sonata_asr-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 91.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.11

File hashes

Hashes for sonata_asr-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 96b4049e742546903cf1b72892d3a77f4fa7629a61c181d5ad43b4936e5b1bc1
MD5 e6215e80b32951fd923b012232c72648
BLAKE2b-256 32028dac382ccace1b3bd6b8e5d0e8a83527332121048de6081bd4d92433442c
