
Meeting Helper

AI meeting assistant for macOS. Automatically detects when a meeting starts, records and transcribes in real-time, and lets you ask Claude questions about what's being discussed — using your codebase and knowledge base as context.

Features

  • Auto-detection — starts recording automatically when any app activates your microphone (Zoom, Meet, Teams, etc.)
  • Real-time transcription — local Whisper (free, no API key needed) or Deepgram Nova-3 streaming
  • Invisible overlay — native macOS window hidden from screen share via NSWindow.setSharingType_(0)
  • Ask Claude — hotkey or type a question during the meeting; Claude answers using the transcript + your repos + knowledge docs
  • Per-meeting context — select which repos and knowledge folders to load per meeting, so Claude only gets relevant context
  • Post-meeting — full transcription + structured summary saved alongside the MP3 recording

Requirements

  • macOS 13+ (Ventura or later)
  • Python 3.11+
  • Microphone + Screen Recording + Accessibility permissions (prompted on first run)

Installation

Option A — pipx (recommended)

# Install pipx if you don't have it
brew install pipx && pipx ensurepath

# Install meeting-helper
pipx install meeting-helper

Option B — from source

git clone <repo-url> meeting-helper
cd meeting-helper
python3 -m venv .venv
source .venv/bin/activate
pip install -e .

First Run Setup

1. Download the Whisper model

Whisper is used for local transcription (no API key needed). The model needs to be downloaded once before your first meeting:

python3 -c "from faster_whisper import WhisperModel; WhisperModel('tiny', device='cpu', compute_type='int8')"

This downloads ~75MB to ~/.cache/huggingface/. It only happens once.

Optional — better accuracy: Use the small model (~244MB) instead of tiny:

python3 -c "from faster_whisper import WhisperModel; WhisperModel('small', device='cpu', compute_type='int8')"

Then in meeting-helper setup, set the Whisper model to small.

2. Run the setup wizard

meeting-helper setup

You'll be prompted for:

  1. Anthropic API key — get one at console.anthropic.com/settings/keys
  2. Deepgram API key (optional) — get one at console.deepgram.com for lower-latency streaming transcription. Leave blank to use local Whisper.
  3. Recordings directory — where to save MP3s and transcripts (default: ~/meetings)
  4. Repos (optional) — paths to your code repos for Claude context
  5. Knowledge folder (optional) — a folder of .md/.txt files organized by topic

3. Grant macOS permissions

On first run, macOS will prompt for:

  • Microphone — to capture your voice
  • Screen Recording — to capture system audio (meeting participants' voices)
  • Accessibility — for global hotkeys (2×Cmd, 2×ESC)

After granting each permission, run meeting-helper start again if it stops.


Usage

meeting-helper start          # start the background daemon
meeting-helper gui            # open the main window
meeting-helper stop           # stop the daemon
meeting-helper status         # show daemon and meeting status
meeting-helper logs           # tail the daemon log
meeting-helper version        # check for updates
meeting-helper fix-recording      # recover Screen Recording without rebooting
meeting-helper diagnose-recording # dump SCK / TCC diagnostics for paste-back

fix-recording — when system audio goes silent

On managed/corporate laptops the Screen Recording permission sometimes silently breaks: the OS still reports it as granted, but ScreenCaptureKit stops producing samples and only mic audio is captured. A reboot is the only known certain fix. Before rebooting, try:

meeting-helper fix-recording

It (1) restarts the daemon to clear any stale capture session inside our process, then (2) optionally restarts tccd and coreaudiod, the macOS daemons that own permission state and audio routing (this step needs your sudo password). That often un-wedges the OS-side cause without a reboot.

Flags: --no-sudo skips the sudo nudge; --open-settings opens System Settings → Privacy & Security → Screen Recording at the end.

Once the daemon is running:

  1. Start a Zoom/Meet/Teams call → recording starts automatically
  2. Speak → transcript appears in the overlay and GUI dashboard
  3. Ask Claude → press 2×Cmd (hotkey) or type in the "Ask Claude" box in the GUI
  4. End the call → recording stops, MP3 + live transcript saved

Hotkeys

Combo                  Action
2× Cmd                 Trigger Claude query (uses last 5 transcript sentences)
2× ESC                 Toggle overlay show/hide
ESC + → / ESC + ←      Scroll response down/up
ESC + ] / ESC + [      Navigate to previous/next response
Cmd + ↑ / Cmd + ↓      Font size up/down in overlay
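The double-press hotkeys (2× Cmd, 2× ESC) boil down to a small timing check: two presses of the same key within a short window. A minimal illustrative sketch of that logic (not the daemon's actual implementation; the class name and window length are assumptions):

```python
import time

class DoubleTapDetector:
    """Report True when the same key is pressed twice within `window` seconds."""

    def __init__(self, window=0.4):
        self.window = window   # max gap between taps, in seconds
        self._last = {}        # key -> timestamp of the previous press

    def press(self, key, now=None):
        now = time.monotonic() if now is None else now
        prev = self._last.get(key)
        self._last[key] = now
        if prev is not None and (now - prev) <= self.window:
            self._last[key] = None   # consume, so a third tap starts a new pair
            return True
        return False
```

Passing `now` explicitly makes the detector easy to unit-test; in live use it falls back to `time.monotonic()`.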

local-todo Integration

If you use local-todo as your task tracker (via its MCP server), meeting-helper can ground its answers and pre-meeting briefs in your real ticket data.

Setup

In meeting-helper setup, you'll be prompted for the local-todo MCP command (default: npx @local-todo/mcp). Adjust if your install differs. The daemon spawns this command lazily on first use and keeps it alive for its lifetime.

Generate Standup Brief

In the GUI dashboard, click Generate Standup Brief. The agent pulls your current sprint's active tasks from local-todo and produces a first-person script (3-6 bullets) for you to read at standup. Source ticket cards appear below the script so you can verify before reading.

Hotkey-grounded answers

When you press 2× Cmd during a meeting, meeting-helper now searches your local-todo for tickets and docs matching the recent transcript. Matches are passed to Claude as context AND rendered as cards beneath the answer in the overlay and dashboard, so you can see exactly which tickets the answer is grounded in.

If local-todo isn't configured (or the MCP fails), queries fall back to the existing repos/knowledge context — no other behavior changes.
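The fallback behavior can be sketched as a simple try/except around the MCP lookup; the function and parameter names below are illustrative, not the real internals:

```python
def build_query_context(mcp_search, fallback_context, transcript_tail):
    """Ground the query in local-todo tickets when the MCP is available;
    on any MCP failure, fall back to the repos/knowledge context alone."""
    try:
        tickets = mcp_search(transcript_tail)   # local-todo MCP lookup
    except Exception:
        tickets = []                            # MCP unconfigured or failed
    return {"tickets": tickets, "extra": fallback_context}
```

Either way the repos/knowledge context is passed through unchanged, matching the "no other behavior changes" guarantee.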

Live topic tracking (TopicWorker)

While a meeting is active, a background TopicWorker keeps track of what you are currently discussing — based on your microphone speech only (system audio is excluded). Every ~5 seconds it:

  1. Reads the rolling self-buffer (your last ~25 sentences).
  2. Extracts the topic, ticket IDs, a search query, and your most recent declarative statement using your configured AI provider (Claude / Gemini / local Ollama — same one you set in Settings).
  3. Pre-fetches matching tickets from local-todo into a warm cache.
  4. Broadcasts the result to the overlay and dashboard so you can see what the agent thinks you're working on.

When you press 2× Cmd, the answer is built from this warm cache instantly — no per-query LLM extraction or local-todo search. The result: faster answers, and the AI gets explicit ## Topic just discussed and ## What I just said sections in its prompt, which resolve pronouns and vague follow-ups (e.g. "what's the difference now?" after you mentioned a ticket).
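The loop above can be sketched as a background thread that periodically refreshes a warm cache. This is an illustrative stand-in: `extract_topic` and `search_tickets` represent the real LLM-extraction and local-todo calls, and the class internals are assumptions:

```python
import threading

class TopicWorker(threading.Thread):
    """Every `interval` seconds: read the rolling self-buffer, extract the
    current topic, and pre-fetch matching tickets into a warm cache."""

    def __init__(self, self_buffer, extract_topic, search_tickets, interval=5.0):
        super().__init__(daemon=True)
        self.self_buffer = self_buffer
        self.extract_topic = extract_topic      # LLM call in the real daemon
        self.search_tickets = search_tickets    # local-todo lookup
        self.interval = interval
        self.warm_cache = {"topic": None, "tickets": []}
        self._stop = threading.Event()

    def run(self):
        while not self._stop.wait(self.interval):
            self.tick()

    def tick(self):
        recent = self.self_buffer[-25:]         # your last ~25 sentences only
        if not recent:
            return
        topic = self.extract_topic(recent)
        self.warm_cache = {"topic": topic, "tickets": self.search_tickets(topic)}

    def stop(self):
        self._stop.set()
```

A 2× Cmd query then reads `warm_cache` directly instead of doing the extraction and search on the hot path.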

Two-stream audio (mic vs system)

Microphone and system audio (call participants) are now transcribed into separate buffers — self_buffer for what you say, other_buffer for what others say. The TopicWorker reads only self_buffer so the topic is grounded in your speech and not contaminated by what others say. The MP3 recording still mixes both for archival.

Required: wear headphones during meetings. Without headphones the laptop speaker feeds the mic, the call audio gets attributed to your buffer, and the topic detection becomes unreliable.
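The two-buffer split is conceptually tiny — route each transcribed sentence by speaker role. A minimal sketch (names mirror the buffers described above, but the class is illustrative):

```python
class TranscriptBuffers:
    """Keep mic and system-audio transcripts apart, so topic tracking
    reads only your own speech."""

    def __init__(self):
        self.self_buffer = []    # sentences you said (microphone)
        self.other_buffer = []   # sentences others said (system audio)

    def append(self, role, text):
        target = self.self_buffer if role == "self" else self.other_buffer
        target.append(text)

    def recent_self(self, n=25):
        # the window the TopicWorker consumes: your speech only
        return self.self_buffer[-n:]
```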


Per-Meeting Context

By default, no repos or knowledge docs are loaded into Claude's context — loading everything would dilute accuracy. Instead, use the Context Selector in the Dashboard during a meeting:

  1. Chips show your configured repos — click to select
  2. Tree shows your knowledge base folders — check what's relevant
  3. Click Load Context — Claude now has access to only what you selected
  4. Ask questions — Claude answers using that specific context
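The Load Context step amounts to walking only the selected paths and concatenating their text under a size budget. A hypothetical sketch — the function name, file-type filter, and byte budget are all assumptions, not the real loader:

```python
from pathlib import Path

def load_context(selected_repos, selected_knowledge_dirs, max_bytes=200_000):
    """Gather text from only the repos/folders the user selected, stopping
    at a budget so the prompt stays focused rather than diluted."""
    chunks, used = [], 0
    for root in map(Path, [*selected_repos, *selected_knowledge_dirs]):
        for path in sorted(root.rglob("*")):
            if not path.is_file() or path.suffix not in {".md", ".txt", ".py"}:
                continue
            text = path.read_text(errors="ignore")
            if used + len(text) > max_bytes:
                return "\n\n".join(chunks)   # budget hit: stop adding files
            chunks.append(f"## {path}\n{text}")
            used += len(text)
    return "\n\n".join(chunks)
```

Unselected repos and folders are simply never walked, which is why Claude "only gets relevant context."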

Post-Meeting

Open the Recordings screen in the GUI to:

  • Play back the MP3
  • Run full transcription (faster-whisper, timestamped)
  • Generate a structured summary with Claude (key points, decisions, action items)

Configuration

Config is saved to ~/.config/meeting-helper/config.json. Run meeting-helper setup to reconfigure.
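Reading that file back is straightforward; a sketch of a loader with defaults (the key names here are illustrative, not the exact schema):

```python
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".config" / "meeting-helper" / "config.json"

def load_config(path=CONFIG_PATH):
    """Read the saved config; fall back to defaults if setup hasn't run yet."""
    defaults = {
        "whisper_model": "tiny",                        # real-time default
        "recordings_dir": str(Path.home() / "meetings"),
    }
    try:
        defaults.update(json.loads(Path(path).read_text()))
    except FileNotFoundError:
        pass   # no config yet: run `meeting-helper setup`
    return defaults
```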

Whisper model quality vs speed

Model   Size    Speed     Accuracy
tiny    75MB    Fastest   Good for real-time
base    142MB   Fast      Balanced
small   244MB   Slower    Best accuracy
Default is tiny for real-time transcription. Post-meeting transcription always uses small for better quality.


Check for Updates

meeting-helper version

Checks PyPI and upgrades automatically via pipx if a newer version is available.


Troubleshooting

Daemon not detecting meetings

  • Make sure Screen Recording permission is granted in System Settings → Privacy & Security
  • Try meeting-helper logs to see what's happening

No transcription appearing

  • If using Deepgram: check your API key with meeting-helper status
  • If using local Whisper: ensure the model was downloaded (see First Run Setup above)
  • Check meeting-helper logs for errors

Overlay not visible

  • Press 2×ESC to show/hide
  • The overlay is intentionally invisible to screen share — check your physical display, not a recording

Permission denied errors

  • Go to System Settings → Privacy & Security → Microphone / Screen Recording / Accessibility
  • Remove and re-add the terminal app or meeting-helper entry

Testing without real audio

For testing the topic/query flow without a microphone (e.g. you're on a plane, or just want to script a scenario), use:

curl -X POST localhost:7891/debug/say -d '{"role":"self","text":"I am working on ENGT-4212 — frontend contract layer."}'
sleep 6  # let TopicWorker debounce + LLM call finish
curl localhost:7891/current_topic | jq
curl -X POST localhost:7891/query -d '{"text":"what does the feature flag do?"}'

The /debug/say endpoint appends a sentence to self_buffer (or to other_buffer if "role":"other"), exactly as the transcribers do for real speech. The TopicWorker, the topic broadcasts, and the /query handler all behave identically to the real-audio path.
