
audio-transcript-mcp

Real-time audio transcription MCP server for Claude Code.

Captures microphone + system audio (WASAPI loopback on Windows) and transcribes via Deepgram (cloud) or faster-whisper (local, GPU/CPU).

Features

  • Dual audio capture: mic + system sound simultaneously
  • Two STT backends switchable at runtime (Deepgram nova-3 / faster-whisper)
  • Stereo opus recording: each session saves a stereo opus file (L=mic, R=system audio)
  • Per-session directories: transcript + audio saved to ~/.audio-transcript-mcp/transcripts/<timestamp>/
  • Chunk overlap with text deduplication (no cut words at boundaries)
  • Native float32 audio pipeline for whisper (no lossy int16 round-trip)
  • High-quality stateful resampling via soxr (no boundary artifacts)
  • Whisper hallucination filter (no_speech_prob + avg_logprob thresholds)
  • Transcript buffer with time-based queries
  • Auto-reconnect for Deepgram WebSocket
  • GPU model unload/reload on stop/start (CUDA memory management)
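The chunk-overlap deduplication mentioned above can be sketched as follows. This is a simplified illustration, not the package's actual implementation: find the longest word-level suffix of the previous chunk's text that the new chunk repeats as a prefix, and drop it.

```python
def dedup_overlap(prev_text: str, new_text: str) -> str:
    """Drop the longest word-level suffix of prev_text that new_text
    repeats as a prefix (overlap deduplication sketch)."""
    prev_words = prev_text.lower().split()
    new_words = new_text.split()
    new_lower = [w.lower() for w in new_words]
    # Try the longest possible overlap first, then shrink.
    for n in range(min(len(prev_words), len(new_words)), 0, -1):
        if prev_words[-n:] == new_lower[:n]:
            return " ".join(new_words[n:])
    return new_text
```

For example, if chunk 1 ends "...hello there how are" and chunk 2 starts "how are you today", the repeated "how are" is removed and only "you today" is appended.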

Architecture

┌──────────────┐     ┌──────────┐     ┌─────────────────┐
│ Mic (int16)  ├────►│          │     │  STT Backend    │
│ WASAPI       │     │  Worker  ├────►│  whisper / DG   ├──► Transcript buffer
└──────────────┘     │  Thread  │     └─────────────────┘
                     │          │     ┌─────────────────┐
┌──────────────┐     │          ├────►│ StereoOpusRec   │
│ System audio ├────►│          │     │ L=me R=others   ├──► audio.opus
│ Loopback f32 │     └──────────┘     └─────────────────┘
└──────────────┘

Audio pipeline: native capture → stereo→mono → soxr resample → backend/opus

Each audio source runs in its own worker thread. Audio is captured in the device's native format (float32 for loopback, int16 for mic), converted to mono, and routed to both the STT backend and the stereo opus recorder.
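The conversion steps described above can be sketched with numpy. This is illustrative only; the package's audio_utils.py may differ in detail, and the sample values below are made up:

```python
import numpy as np

def int16_to_float32(samples):
    """Scale int16 PCM samples to float32 in [-1.0, 1.0)."""
    return samples.astype(np.float32) / 32768.0

def stereo_to_mono(frames):
    """Average interleaved stereo frames (shape [n, 2]) down to mono."""
    return frames.mean(axis=1)

# A mic might deliver int16 stereo frames like these:
mic = np.array([[32767, -32768], [0, 16384]], dtype=np.int16)
mono = stereo_to_mono(int16_to_float32(mic))  # float32, one value per frame
```

Converting to float32 before mixing avoids the lossy int16 round-trip called out in the feature list: averaging is done in floating point and only quantized (if at all) at the very end.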

Requirements

  • Python 3.11+
  • Windows (WASAPI loopback for system audio capture); mic-only on macOS/Linux
  • NVIDIA GPU recommended for local whisper backend

Installation

From PyPI (recommended)

pip install audio-transcript-mcp

Or run without installing via uvx:

uvx audio-transcript-mcp

From source

git clone https://github.com/llilakoblock/audio-transcript-mcp.git
cd audio-transcript-mcp
pip install -e .

MCP Configuration

Add to your mcp.json (Claude Code settings):

Using PyPI install

{
  "audio-transcript": {
    "type": "stdio",
    "command": "audio-transcript-mcp",
    "env": {
      "STT_BACKEND": "local",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key",
      "DEEPGRAM_LANGUAGE": "ru",
      "DEEPGRAM_MODEL": "nova-3",
      "DEEPGRAM_UTTERANCE_END_MS": "2500",
      "DEEPGRAM_ENDPOINTING": "500",
      "WHISPER_MODEL": "large-v3",
      "WHISPER_DEVICE": "cuda",
      "WHISPER_LANGUAGE": "ru",
      "WHISPER_CHUNK_SEC": "15",
      "WHISPER_OVERLAP_SEC": "3",
      "TRANSCRIPT_MAX_AGE": "3600"
    }
  }
}

Using uvx (no install needed)

{
  "audio-transcript": {
    "type": "stdio",
    "command": "uvx",
    "args": ["audio-transcript-mcp"],
    "env": {
      "STT_BACKEND": "deepgram",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key"
    }
  }
}

Note: System audio capture (loopback) uses WASAPI and is Windows-only. On macOS/Linux only microphone input works out of the box.

Environment Variables

Configuration is done via environment variables in the env block of your MCP config. Most parameters can also be changed at runtime via the set_config MCP tool without restarting the server.

General

STT_BACKEND (default: local)
  Which speech-to-text engine to use: "deepgram" for cloud (fast, needs API key) or "local" for offline faster-whisper (needs GPU). Switchable at runtime via the set_backend tool.

TRANSCRIPT_MAX_AGE (default: 3600)
  How long, in seconds, to keep transcript entries in the in-memory buffer. Older entries are pruned automatically.

TRANSCRIPT_DIR (default: ~/.audio-transcript-mcp/transcripts)
  Root directory for session output. Each session creates a timestamped subdirectory with transcript.txt and audio.opus.

Deepgram (cloud STT)

Used when STT_BACKEND=deepgram. Streams audio over a WebSocket and returns results in real time.

DEEPGRAM_API_KEY (required)
  Your Deepgram API key. Get one at console.deepgram.com.

DEEPGRAM_LANGUAGE (default: ru)
  Language code. Use "multi" for automatic multi-language detection (requires nova-3).

DEEPGRAM_MODEL (default: nova-3)
  Deepgram model. nova-3 is the latest and supports the "multi" language setting; nova-2 is older but cheaper.

DEEPGRAM_UTTERANCE_END_MS (default: 2500)
  How long to wait, in milliseconds, after speech ends before finalizing the utterance. Higher values mean fewer splits during long pauses. Requires interim_results=true (set automatically).

DEEPGRAM_ENDPOINTING (default: 500)
  Endpointing sensitivity in ms. Lower = faster response but may split mid-sentence; higher = waits longer before deciding speech has ended.
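These settings correspond to query parameters on Deepgram's live-streaming endpoint (wss://api.deepgram.com/v1/listen). A sketch of assembling such a URL; the encoding and sample_rate values here are illustrative assumptions, not necessarily the project's actual choices:

```python
from urllib.parse import urlencode

def build_deepgram_url(model="nova-3", language="ru",
                       utterance_end_ms=2500, endpointing=500):
    """Assemble a Deepgram live-streaming WebSocket URL (sketch)."""
    params = {
        "model": model,
        "language": language,
        "utterance_end_ms": utterance_end_ms,  # needs interim_results=true
        "interim_results": "true",
        "endpointing": endpointing,
        "encoding": "linear16",  # raw int16 PCM (assumed here)
        "sample_rate": 16000,    # assumed target rate
    }
    return "wss://api.deepgram.com/v1/listen?" + urlencode(params)
```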

Whisper (local STT)

Used when STT_BACKEND=local. Runs faster-whisper on your GPU/CPU, fully offline.

WHISPER_MODEL (default: large-v3)
  Model size: tiny, base, small, medium, or large-v3. Larger models are more accurate but slower and need more VRAM; large-v3 needs ~4 GB.

WHISPER_DEVICE (default: cuda)
  "cuda" for an NVIDIA GPU (recommended) or "cpu" (much slower).

WHISPER_LANGUAGE (default: ru)
  Language hint (e.g. "ru", "en"). Empty = auto-detect. Setting a language improves accuracy and speed.

WHISPER_CHUNK_SEC (default: 15)
  Duration, in seconds, of each audio chunk sent to whisper for transcription. Longer chunks give more context but higher latency.

WHISPER_OVERLAP_SEC (default: 3)
  Overlap between consecutive chunks. Prevents words from being cut at chunk boundaries; text deduplication removes repeated words automatically.
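The hallucination filter from the feature list relies on per-segment confidence fields that faster-whisper exposes (no_speech_prob and avg_logprob). A minimal sketch, with a stand-in Segment type and threshold values chosen for illustration rather than taken from the package:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """Mimics the faster-whisper segment fields used by the filter."""
    text: str
    no_speech_prob: float
    avg_logprob: float

def filter_hallucinations(segments, max_no_speech=0.6, min_avg_logprob=-1.0):
    """Drop segments that whisper itself marks as likely non-speech or
    very low confidence (thresholds here are illustrative)."""
    return [s for s in segments
            if s.no_speech_prob <= max_no_speech
            and s.avg_logprob >= min_avg_logprob]
```

This targets whisper's well-known failure mode of emitting filler like "thanks for watching" over silence: such segments tend to have a high no_speech_prob and a poor avg_logprob.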

Full example

{
  "audio-transcript": {
    "type": "stdio",
    "command": "audio-transcript-mcp",
    "env": {
      "STT_BACKEND": "local",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key",
      "DEEPGRAM_LANGUAGE": "multi",
      "DEEPGRAM_MODEL": "nova-3",
      "DEEPGRAM_UTTERANCE_END_MS": "2500",
      "DEEPGRAM_ENDPOINTING": "500",
      "WHISPER_MODEL": "large-v3",
      "WHISPER_DEVICE": "cuda",
      "WHISPER_LANGUAGE": "ru",
      "WHISPER_CHUNK_SEC": "15",
      "WHISPER_OVERLAP_SEC": "3",
      "TRANSCRIPT_MAX_AGE": "3600",
      "TRANSCRIPT_DIR": "C:/Users/you/.audio-transcript-mcp/transcripts"
    }
  }
}

You only need to set the variables for the backend you're using. Deepgram vars are ignored when STT_BACKEND=local and vice versa.

Session Output

Each recording session creates a timestamped directory:

~/.audio-transcript-mcp/transcripts/
  2026-03-06_23-24-48/
    transcript.txt    # Plain text transcript
    audio.opus        # Stereo opus (L=mic, R=system)
    debug.log         # Whisper debug data (local backend only)

The transcript is plain text:

[23:24:50] me — Hello, can you hear me?

[23:24:52] others — Yes, I can hear you fine.

[23:24:55] system — [STARTED: Microphone, 44100Hz, 2ch]
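Because the format is a fixed "[HH:MM:SS] speaker — text" pattern, it is easy to post-process. A small parser (a hypothetical helper, not part of the package):

```python
import re

# Matches lines like "[23:24:50] me — Hello, can you hear me?"
LINE_RE = re.compile(r"^\[(\d{2}:\d{2}:\d{2})\] (\w+) — (.*)$")

def parse_line(line):
    """Return (time, speaker, text) for a transcript line, or None."""
    m = LINE_RE.match(line.strip())
    return m.groups() if m else None
```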

MCP Tools

Tool                    Description
start_listening         Start capturing mic + system audio and transcribing
stop_listening          Stop capture; save the transcript and opus recording
is_listening            Check whether capture is active
get_transcript          Get the transcript for the last N seconds (default 60)
get_full_transcript     Get the entire transcript buffer
get_transcript_since    Get the transcript since a Unix timestamp
clear_transcript        Clear the transcript buffer
get_backend             Show the current STT backend
set_backend             Switch backend ("deepgram" / "local") at runtime
get_config              Show all configuration values (marks dynamic vs static)
set_config              Change a dynamic config parameter at runtime (e.g. language, chunk size)
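The time-based query tools suggest a buffer keyed by capture timestamps. A minimal sketch of such a buffer with TRANSCRIPT_MAX_AGE-style pruning; the class and method names are hypothetical, not the package's actual API:

```python
import time
from collections import deque

class TranscriptBuffer:
    """In-memory transcript entries with time-based queries and pruning
    (a sketch; the package's real buffer class is not shown here)."""

    def __init__(self, max_age=3600.0):
        self.max_age = max_age   # seconds, like TRANSCRIPT_MAX_AGE
        self._entries = deque()  # (unix_ts, speaker, text)

    def add(self, speaker, text, ts=None):
        now = time.time() if ts is None else ts
        self._entries.append((now, speaker, text))
        # Prune entries older than max_age relative to the newest entry.
        cutoff = now - self.max_age
        while self._entries and self._entries[0][0] < cutoff:
            self._entries.popleft()

    def since(self, ts):
        """Entries at or after a Unix timestamp (cf. get_transcript_since)."""
        return [e for e in self._entries if e[0] >= ts]
```

get_transcript (last N seconds) then reduces to `buf.since(time.time() - n)`, and clear_transcript to emptying the deque.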

Project Structure

src/audio_transcript_mcp/
  __init__.py            # Package version
  __main__.py            # python -m entry point
  server.py              # MCP tools (thin wrapper)
  engine.py              # AudioEngine, Buffer, config
  audio_utils.py         # Format conversion (float32↔int16, stereo→mono)
  backends/
    __init__.py          # Backend factory
    whisper.py           # Local faster-whisper STT
    deepgram.py          # Deepgram WebSocket STT
  recorder/
    __init__.py
    opus.py              # StereoOpusRecorder (PyOgg)

Releasing

Releases are automated via GitHub Actions:

# Update version in src/audio_transcript_mcp/__init__.py
git tag v0.1.0
git push origin v0.1.0
# CI automatically builds, publishes to PyPI, and creates a GitHub Release

License

MIT
