
audio-transcript-mcp

Real-time audio transcription MCP server for Claude Code.

Captures microphone + system audio (WASAPI loopback on Windows) and transcribes via Deepgram (cloud) or faster-whisper (local, GPU/CPU).

Features

  • Dual audio capture: mic + system sound simultaneously
  • Two STT backends switchable at runtime (Deepgram nova-3 / faster-whisper)
  • Stereo opus recording: each session saves a stereo opus file (L=mic, R=system audio)
  • Per-session directories: transcript + audio saved to ~/.audio-transcript-mcp/transcripts/<timestamp>/
  • Chunk overlap with text deduplication (no cut words at boundaries)
  • Native float32 audio pipeline for whisper (no lossy int16 round-trip)
  • High-quality stateful resampling via soxr (no boundary artifacts)
  • Whisper hallucination filter (no_speech_prob + avg_logprob thresholds)
  • Transcript buffer with time-based queries
  • Auto-reconnect for Deepgram WebSocket
  • GPU model unload/reload on stop/start (CUDA memory management)
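
The chunk-overlap deduplication listed above can be sketched roughly as follows. This is an illustrative helper, not the project's actual implementation: it drops from a new chunk's text the longest run of words that repeats the tail of the previous chunk.

```python
def dedup_overlap(prev_text: str, new_text: str, max_overlap_words: int = 8) -> str:
    """Strip from new_text the longest word-suffix of prev_text that new_text starts with.

    With overlapping audio chunks, the last words of one transcription often
    reappear at the start of the next; this removes that repetition.
    """
    prev_words = prev_text.split()
    new_words = new_text.split()
    for n in range(min(max_overlap_words, len(prev_words), len(new_words)), 0, -1):
        if prev_words[-n:] == new_words[:n]:
            return " ".join(new_words[n:])
    return new_text

# dedup_overlap("hello can you hear", "you hear me now") -> "me now"
```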

Architecture

┌──────────────┐     ┌──────────┐     ┌─────────────────┐
│  Mic (int16) ├────►│          │     │  STT Backend    │
│  WASAPI      │     │  Worker  ├────►│  whisper / DG   ├──► Transcript buffer
└──────────────┘     │  Thread  │     └─────────────────┘
                     │          ├────►┌─────────────────┐
┌──────────────┐     │          │     │ StereoOpusRec   │
│ System audio ├────►│          │     │ L=me R=others   ├──► audio.opus
│ Loopback f32 │     └──────────┘     └─────────────────┘
└──────────────┘

Audio pipeline: native capture → stereo→mono → soxr resample → backend/opus

Each audio source runs in its own worker thread. Audio is captured in the device's native format (float32 for loopback, int16 for mic), converted to mono, and routed to both the STT backend and the stereo opus recorder.
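
The two format conversions named above can be sketched like this (an illustrative sketch, not the project's actual audio_utils code; it assumes interleaved stereo frames):

```python
import numpy as np

def int16_to_float32(pcm: np.ndarray) -> np.ndarray:
    # Scale int16 samples into [-1.0, 1.0) as float32 for the whisper pipeline.
    return pcm.astype(np.float32) / 32768.0

def stereo_to_mono(frames: np.ndarray) -> np.ndarray:
    # frames: shape (n, 2) stereo; average the two channels into mono float32.
    return frames.mean(axis=1, dtype=np.float32)
```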

Requirements

  • Python 3.10+
  • Windows (WASAPI loopback for system audio capture); mic-only on macOS/Linux
  • NVIDIA GPU recommended for local whisper backend

Installation

From PyPI (recommended)

pip install audio-transcript-mcp

Or run without installing via uvx:

uvx audio-transcript-mcp

From source

git clone https://github.com/llilakoblock/audio-transcript-mcp.git
cd audio-transcript-mcp
pip install -e .

MCP Configuration

Add to your mcp.json (Claude Code settings):

Using PyPI install

{
  "audio-transcript": {
    "type": "stdio",
    "command": "audio-transcript-mcp",
    "env": {
      "STT_BACKEND": "local",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key",
      "DEEPGRAM_LANGUAGE": "ru",
      "DEEPGRAM_MODEL": "nova-3",
      "DEEPGRAM_UTTERANCE_END_MS": "2500",
      "DEEPGRAM_ENDPOINTING": "500",
      "WHISPER_MODEL": "large-v3",
      "WHISPER_DEVICE": "cuda",
      "WHISPER_LANGUAGE": "ru",
      "WHISPER_CHUNK_SEC": "10",
      "WHISPER_OVERLAP_SEC": "2",
      "TRANSCRIPT_MAX_AGE": "3600"
    }
  }
}

Using uvx (no install needed)

{
  "audio-transcript": {
    "type": "stdio",
    "command": "uvx",
    "args": ["audio-transcript-mcp"],
    "env": {
      "STT_BACKEND": "deepgram",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key"
    }
  }
}

Note: System audio capture (loopback) uses WASAPI and is Windows-only. On macOS/Linux only microphone input works out of the box.

Environment Variables

All configuration is done via environment variables in the env block of your MCP config.

General

  • STT_BACKEND (default: deepgram): Which speech-to-text engine to use: "deepgram" for cloud (fast, needs an API key) or "local" for offline faster-whisper (GPU recommended). Switchable at runtime via the set_backend tool.
  • TRANSCRIPT_MAX_AGE (default: 3600): How long, in seconds, to keep transcript entries in the in-memory buffer. Older entries are pruned automatically.
  • TRANSCRIPT_DIR (default: ~/.audio-transcript-mcp/transcripts): Root directory for session output. Each session creates a timestamped subdirectory containing transcript.txt and audio.opus.

Deepgram (cloud STT)

Used when STT_BACKEND=deepgram. Streams audio over a WebSocket and returns results in real time.

  • DEEPGRAM_API_KEY (required): Get one at console.deepgram.com.
  • DEEPGRAM_LANGUAGE (default: ru): Language code. Use "multi" for automatic multi-language detection (requires nova-3).
  • DEEPGRAM_MODEL (default: nova-3): Deepgram model. nova-3 is the latest and supports the "multi" language setting; nova-2 is older but cheaper.
  • DEEPGRAM_UTTERANCE_END_MS (default: 2500): How long to wait, in milliseconds, after speech ends before finalizing the utterance. Higher values mean fewer splits during long pauses. Requires interim_results=true (set automatically).
  • DEEPGRAM_ENDPOINTING (default: 500): Endpointing sensitivity in milliseconds. Lower values respond faster but may split mid-sentence; higher values wait longer before deciding speech has ended.
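
These variables map onto query parameters of Deepgram's /v1/listen streaming endpoint. A minimal sketch of how the WebSocket URL might be assembled (parameter names follow Deepgram's public API; this is not the project's actual backend code):

```python
import os
from urllib.parse import urlencode

def deepgram_listen_url() -> str:
    """Build a Deepgram streaming URL from the env vars above (illustrative sketch)."""
    params = {
        "model": os.getenv("DEEPGRAM_MODEL", "nova-3"),
        "language": os.getenv("DEEPGRAM_LANGUAGE", "ru"),
        "utterance_end_ms": os.getenv("DEEPGRAM_UTTERANCE_END_MS", "2500"),
        "endpointing": os.getenv("DEEPGRAM_ENDPOINTING", "500"),
        "interim_results": "true",  # needed for utterance-end events
    }
    return "wss://api.deepgram.com/v1/listen?" + urlencode(params)
```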

Whisper (local STT)

Used when STT_BACKEND=local. Runs faster-whisper on your GPU/CPU, fully offline.

  • WHISPER_MODEL (default: large-v3): Model size. Options: tiny, base, small, medium, large-v3. Larger models are more accurate but slower and need more VRAM; large-v3 needs roughly 4 GB of VRAM.
  • WHISPER_DEVICE (default: cuda): "cuda" for an NVIDIA GPU (recommended) or "cpu" (much slower).
  • WHISPER_LANGUAGE (default: empty): Language hint (e.g. "ru", "en"). Empty means auto-detect; setting a language improves accuracy and speed.
  • WHISPER_CHUNK_SEC (default: 5): Duration, in seconds, of each audio chunk sent to whisper for transcription. Longer chunks give more context but higher latency.
  • WHISPER_OVERLAP_SEC (default: 2): Overlap between consecutive chunks, in seconds. Prevents words from being cut at chunk boundaries; text deduplication removes the repeated words automatically.
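
The interaction of WHISPER_CHUNK_SEC and WHISPER_OVERLAP_SEC can be illustrated by computing chunk start times: each chunk begins OVERLAP seconds before the previous one ends, so boundary words appear in both chunks. This is a hypothetical helper, not the project's scheduler:

```python
def chunk_starts(total_sec: float, chunk_sec: float = 5.0, overlap_sec: float = 2.0) -> list[float]:
    """Start times of overlapping transcription chunks over total_sec of audio."""
    step = chunk_sec - overlap_sec  # hop between consecutive chunk starts
    if step <= 0:
        raise ValueError("overlap must be shorter than the chunk")
    starts, t = [], 0.0
    while t < total_sec:
        starts.append(t)
        t += step
    return starts

# chunk_starts(10, chunk_sec=5, overlap_sec=2) -> [0.0, 3.0, 6.0, 9.0]
```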

Full example

{
  "audio-transcript": {
    "type": "stdio",
    "command": "audio-transcript-mcp",
    "env": {
      "STT_BACKEND": "local",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key",
      "DEEPGRAM_LANGUAGE": "multi",
      "DEEPGRAM_MODEL": "nova-3",
      "DEEPGRAM_UTTERANCE_END_MS": "2500",
      "DEEPGRAM_ENDPOINTING": "500",
      "WHISPER_MODEL": "large-v3",
      "WHISPER_DEVICE": "cuda",
      "WHISPER_LANGUAGE": "ru",
      "WHISPER_CHUNK_SEC": "15",
      "WHISPER_OVERLAP_SEC": "3",
      "TRANSCRIPT_MAX_AGE": "3600",
      "TRANSCRIPT_DIR": "C:/Users/you/.audio-transcript-mcp/transcripts"
    }
  }
}

You only need to set the variables for the backend you're using. Deepgram vars are ignored when STT_BACKEND=local and vice versa.

Session Output

Each recording session creates a timestamped directory:

~/.audio-transcript-mcp/transcripts/
  2026-03-06_23-24-48/
    transcript.txt    # Plain text transcript
    audio.opus        # Stereo opus (L=mic, R=system)
    debug.log         # Whisper debug data (local backend only)

The transcript is plain text:

[23:24:50] me — Hello, can you hear me?

[23:24:52] others — Yes, I can hear you fine.

[23:24:55] system — [STARTED: Microphone, 44100Hz, 2ch]

MCP Tools

  • start_listening: Start capturing mic + system audio and transcribing
  • stop_listening: Stop capture, save transcript and opus recording
  • is_listening: Check if capture is active
  • get_transcript: Get transcript for the last N seconds (default 60)
  • get_full_transcript: Get entire transcript buffer
  • get_transcript_since: Get transcript since a Unix timestamp
  • clear_transcript: Clear the transcript buffer
  • get_backend: Show current STT backend
  • set_backend: Switch backend ("deepgram" / "local") at runtime

Project Structure

src/audio_transcript_mcp/
  __init__.py            # Package version
  __main__.py            # python -m entry point
  server.py              # MCP tools (thin wrapper)
  engine.py              # AudioEngine, Buffer, config
  audio_utils.py         # Format conversion (float32↔int16, stereo→mono)
  backends/
    __init__.py          # Backend factory
    whisper.py           # Local faster-whisper STT
    deepgram.py          # Deepgram WebSocket STT
  recorder/
    __init__.py
    opus.py              # StereoOpusRecorder (PyOgg)

Releasing

Releases are automated via GitHub Actions:

# Update version in src/audio_transcript_mcp/__init__.py
git tag v0.1.0
git push origin v0.1.0
# CI automatically builds, publishes to PyPI, and creates a GitHub Release

License

MIT
