SONATA 🎵🔊
SOund and Narrative Advanced Transcription Assistant
SONATA (SOund and Narrative Advanced Transcription Assistant) is an advanced ASR system that captures human expressions, including emotive sounds and non-verbal cues.
✨ Features
- 🎙️ High-accuracy speech-to-text transcription using WhisperX
- 😀 Recognition of 523+ emotive sounds and non-verbal cues
- 🌍 Multi-language support with 10 languages
- 👥 SOTA speaker diarization using Silero VAD and WavLM embeddings
- ⏱️ Rich timestamp information at the word level
- 🔄 Audio preprocessing capabilities
📚 See detailed features documentation
🚀 Installation
Install the package from PyPI:
pip install sonata-asr
Or install from source:
git clone https://github.com/hwk06023/SONATA.git
cd SONATA
pip install -e .
📖 Quick Start
Basic Transcription
from sonata.core.transcriber import IntegratedTranscriber
# Initialize the transcriber
transcriber = IntegratedTranscriber(asr_model="large-v3", device="cpu")
# Transcribe an audio file
result = transcriber.process_audio("path/to/audio.wav", language="en")
print(result["integrated_transcript"]["plain_text"])
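The result dictionary also carries word-level timing information (see Features). A minimal sketch of pulling it out, using a hand-made sample dict in place of real `process_audio` output; note that only `result["integrated_transcript"]["plain_text"]` is shown above, so the `segments`/`words` layout here is a hypothetical illustration and may differ from SONATA's actual schema:

```python
def format_word_timings(result):
    """Return 'word [start-end]' strings for every word in the transcript."""
    lines = []
    for segment in result["integrated_transcript"].get("segments", []):
        for word in segment.get("words", []):
            lines.append(f'{word["word"]} [{word["start"]:.2f}-{word["end"]:.2f}]')
    return lines

# Tiny hand-made sample standing in for transcriber.process_audio(...) output
sample = {
    "integrated_transcript": {
        "plain_text": "hello world",
        "segments": [
            {"words": [
                {"word": "hello", "start": 0.12, "end": 0.48},
                {"word": "world", "start": 0.55, "end": 0.97},
            ]}
        ],
    }
}

for line in format_word_timings(sample):
    print(line)
```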
CLI Usage
# Basic usage
sonata-asr path/to/audio.wav
# With speaker diarization
sonata-asr path/to/audio.wav --diarize
# Set number of speakers if known
sonata-asr path/to/audio.wav --diarize --num-speakers 3
Common CLI Options:
General:
-o, --output FILE Save transcript to specified JSON file
-l, --language LANG Language code (en, ko, zh, ja, fr, de, es, it, pt, ru)
-m, --model NAME WhisperX model size (tiny, small, medium, large-v3, etc.)
-d, --device DEVICE Device to run models on (cpu, cuda)
--text-output Save transcript to text file (defaults to input_name.txt)
--preprocess Preprocess audio (convert format and trim silence)
Diarization:
--diarize Enable SOTA speaker diarization using Silero VAD and WavLM
--num-speakers NUM Set exact number of speakers (optional)
Audio Events:
--threshold VALUE Threshold for audio event detection (0.0-1.0)
--custom-thresholds FILE Path to JSON file with custom audio event thresholds
--deep-detect Enable multi-scale audio event detection for better accuracy
--deep-detect-scales NUM Number of scales for deep detection (1-3, default: 3)
--deep-detect-window-sizes Custom window sizes for deep detection (comma-separated)
--deep-detect-hop-sizes Custom hop sizes for deep detection (comma-separated)
📚 See full usage documentation
⌨️ See complete CLI documentation
🗣️ Supported Languages
SONATA supports 10 languages including English, Korean, Chinese, Japanese, French, German, Spanish, Italian, Portuguese, and Russian.
🔊 Audio Event Detection
SONATA can detect over 500 different audio events, from laughter and applause to ambient sounds and music. The customizable event detection thresholds allow you to fine-tune sensitivity for specific audio events to match your unique use cases, such as podcast analysis, meeting transcription, or nature recording analysis.
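A `--custom-thresholds` file maps event labels to detection thresholds between 0.0 and 1.0, where a lower value makes detection more sensitive. A hypothetical example, assuming the label names follow the AudioSet class names and the file is a flat label-to-threshold JSON object (the exact schema is not documented here):

```json
{
  "Laughter": 0.35,
  "Applause": 0.5,
  "Music": 0.6,
  "Speech": 0.8
}
```

Pass the file on the command line with `sonata-asr path/to/audio.wav --custom-thresholds thresholds.json`.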
🎵 See audio events documentation
👥 Speaker Diarization
SONATA provides state-of-the-art speaker diarization to identify and separate different speakers in recordings. The system uses Silero VAD for speech detection and WavLM embeddings for speaker identification, making it ideal for transcribing multi-speaker content like meetings, interviews, and podcasts.
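With diarization enabled, each transcript segment carries a speaker label. A sketch of a small downstream analysis, summing talk time per speaker, assuming a hypothetical list of segments with `speaker`/`start`/`end` fields standing in for SONATA's `--diarize` output (the real output schema may differ):

```python
from collections import defaultdict

def speaking_time_by_speaker(segments):
    """Sum up talk time (in seconds) per speaker label."""
    totals = defaultdict(float)
    for seg in segments:
        totals[seg["speaker"]] += seg["end"] - seg["start"]
    return dict(totals)

# Hypothetical diarized segments, standing in for real SONATA output
segments = [
    {"speaker": "SPEAKER_00", "start": 0.0, "end": 4.2, "text": "Welcome, everyone."},
    {"speaker": "SPEAKER_01", "start": 4.5, "end": 7.0, "text": "Thanks for having me."},
    {"speaker": "SPEAKER_00", "start": 7.2, "end": 9.0, "text": "Let's begin."},
]

print(speaking_time_by_speaker(segments))
```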
🎙️ See speaker diarization documentation
🚀 Next Steps
- 🧠 Support for a broader range of ASR models
- 😢 Improved detection of emotive sounds
- 🔊 Better speaker diarization
- ⚡ Performance optimization
- 🛠️ Fix parallel processing issues in deep detection mode for improved reliability
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
📄 License
This project is licensed under the GNU General Public License v3.0.
🙏 Acknowledgements
- WhisperX - Fast speech recognition
- AudioSet AST - Audio event detection
- MIT/ast-finetuned-audioset-10-10-0.4593 - Pretrained model for audio event classification
- Silero VAD - Voice activity detection for speaker diarization
- WavLM - Microsoft's advanced audio understanding model
- microsoft/wavlm-base-plus-sv - Speaker verification model for speaker embeddings
- SpeechBrain - Speaker diarization and embedding extraction
- PyAnnote - Advanced speaker diarization toolkit
- pyannote/segmentation - Speaker change detection
- pyannote/clustering - Speaker clustering
- HuggingFace Transformers - NLP tools and transformer models
File details
Details for the file sonata_asr-0.1.0.tar.gz.
File metadata
- Download URL: sonata_asr-0.1.0.tar.gz
- Upload date:
- Size: 74.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | accaa552ca09dc7cfd01ca5f15c2eefd965a6b848ec467bdf9cbaf9df4dfea8b |
| MD5 | 7fb9dbed631fdd93ecdf4b7ddd827111 |
| BLAKE2b-256 | 2bfcd764c8e72db8ead198e7deb33377c609657cace875df2450b789457bade5 |
File details
Details for the file sonata_asr-0.1.0-py3-none-any.whl.
File metadata
- Download URL: sonata_asr-0.1.0-py3-none-any.whl
- Upload date:
- Size: 91.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.11.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 96b4049e742546903cf1b72892d3a77f4fa7629a61c181d5ad43b4936e5b1bc1 |
| MD5 | e6215e80b32951fd923b012232c72648 |
| BLAKE2b-256 | 32028dac382ccace1b3bd6b8e5d0e8a83527332121048de6081bd4d92433442c |