# Meeting Helper

AI meeting assistant for macOS. It automatically detects when a meeting starts, records and transcribes in real time, and lets you ask Claude questions about what's being discussed — using your codebase and knowledge base as context.
## Features
- Auto-detection — starts recording automatically when any app activates your microphone (Zoom, Meet, Teams, etc.)
- Real-time transcription — local Whisper (free, no API key needed) or Deepgram Nova-3 streaming
- Invisible overlay — native macOS window hidden from screen share via `NSWindow.setSharingType_(0)`
- Ask Claude — hotkey or type a question during the meeting; Claude answers using the transcript + your repos + knowledge docs
- Per-meeting context — select which repos and knowledge folders to load per meeting, so Claude only gets relevant context
- Post-meeting — full transcription + structured summary saved alongside the MP3 recording
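
The overlay-hiding trick is a one-liner once you have the window handle. Below is a minimal sketch of the pattern, assuming a PyObjC-driven AppKit window; the `hide_from_screen_share` helper is illustrative, not meeting-helper's actual code:

```python
# NSWindowSharingNone is AppKit's "exclude from capture" sharing type; its
# raw value is 0, which is why setSharingType_(0) works without the constant.
NS_WINDOW_SHARING_NONE = 0

def hide_from_screen_share(window) -> None:
    """Exclude the window's contents from screen recording and screen share.

    `window` is expected to be an AppKit NSWindow bridged via PyObjC.
    """
    window.setSharingType_(NS_WINDOW_SHARING_NONE)
```

The window stays visible on the physical display; capture APIs (and therefore Zoom/Meet screen share) simply see nothing in its place.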
## Requirements
- macOS 13+ (Ventura or later)
- Python 3.11+
- Microphone + Screen Recording + Accessibility permissions (prompted on first run)
## Installation

### Option A — pipx (recommended)

```bash
# Install pipx if you don't have it
brew install pipx && pipx ensurepath

# Install meeting-helper
pipx install meeting-helper
```

### Option B — from source

```bash
git clone <repo-url> meeting-helper
cd meeting-helper
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```
## First Run Setup

### 1. Download the Whisper model

Whisper is used for local transcription (no API key needed). The model must be downloaded once before your first meeting:

```bash
python3 -c "from faster_whisper import WhisperModel; WhisperModel('tiny', device='cpu', compute_type='int8')"
```

This downloads ~75MB to `~/.cache/huggingface/`. It only happens once.

Optional — better accuracy: use the `small` model (~244MB) instead of `tiny`:

```bash
python3 -c "from faster_whisper import WhisperModel; WhisperModel('small', device='cpu', compute_type='int8')"
```

Then in `meeting-helper setup`, set the Whisper model to `small`.
### 2. Run the setup wizard

```bash
meeting-helper setup
```

You'll be prompted for:

- Anthropic API key — get one at console.anthropic.com/settings/keys
- Deepgram API key (optional) — get one at console.deepgram.com for lower-latency streaming transcription. Leave blank to use local Whisper.
- Recordings directory — where to save MP3s and transcripts (default: `~/meetings`)
- Repos (optional) — paths to your code repos for Claude context
- Knowledge folder (optional) — a folder of `.md`/`.txt` files organized by topic
### 3. Grant macOS permissions

On first run, macOS will prompt for:

- Microphone — to capture your voice
- Screen Recording — to capture system audio (meeting participants' voices)
- Accessibility — for global hotkeys (2×Cmd, 2×ESC)

After granting each permission, run `meeting-helper start` again if it stops.
## Usage

```bash
meeting-helper start               # start the background daemon
meeting-helper gui                 # open the main window
meeting-helper stop                # stop the daemon
meeting-helper status              # show daemon and meeting status
meeting-helper logs                # tail the daemon log
meeting-helper version             # check for updates
meeting-helper fix-recording       # recover Screen Recording without rebooting
meeting-helper diagnose-recording  # dump SCK / TCC diagnostics for paste-back
```
### fix-recording — when system audio goes silent

On managed/corporate laptops the Screen Recording permission sometimes silently breaks: the OS still reports it as granted, but ScreenCaptureKit stops producing samples and only mic audio is captured. A reboot is the only known certain fix. Before rebooting, try:

```bash
meeting-helper fix-recording
```

It (1) restarts the daemon to clear any stale capture session inside our process, then (2) optionally restarts tccd and coreaudiod (the macOS daemons that own permission state and audio routing) — this needs your sudo password — which often un-wedges the OS-side cause without a reboot.

Flags: `--no-sudo` skips the sudo nudge; `--open-settings` opens System Settings → Privacy & Security → Screen Recording at the end.
Once the daemon is running:
- Start a Zoom/Meet/Teams call → recording starts automatically
- Speak → transcript appears in the overlay and GUI dashboard
- Ask Claude → press 2×Cmd (hotkey) or type in the "Ask Claude" box in the GUI
- End the call → recording stops, MP3 + live transcript saved
## Hotkeys

| Combo | Action |
|---|---|
| 2× Cmd | Trigger Claude query (uses last 5 transcript sentences) |
| 2× ESC | Toggle overlay show/hide |
| ESC + → / ESC + ← | Scroll response down/up |
| ESC + ] / ESC + [ | Navigate previous/next response |
| Cmd + ↑ / Cmd + ↓ | Font size up/down in overlay |
## local-todo Integration
If you use local-todo as your task tracker (via its MCP server), meeting-helper can ground its answers and pre-meeting briefs in your real ticket data.
### Setup

In `meeting-helper setup`, you'll be prompted for the local-todo MCP command (default: `npx @local-todo/mcp`). Adjust if your install differs. The daemon spawns this command lazily on first use and keeps it alive for its lifetime.
### Generate Standup Brief
In the GUI dashboard, click Generate Standup Brief. The agent pulls your current sprint's active tasks from local-todo and produces a first-person script (3-6 bullets) for you to read at standup. Source ticket cards appear below the script so you can verify before reading.
### Hotkey-grounded answers
When you press 2× Cmd during a meeting, meeting-helper now searches your local-todo for tickets and docs matching the recent transcript. Matches are passed to Claude as context AND rendered as cards beneath the answer in the overlay and dashboard, so you can see exactly which tickets the answer is grounded in.
If local-todo isn't configured (or the MCP fails), queries fall back to the existing repos/knowledge context — no other behavior changes.
## Live topic tracking (TopicWorker)
While a meeting is active, a background TopicWorker keeps track of what you are currently discussing — based on your microphone speech only (system audio is excluded). Every ~5 seconds it:
- Reads the rolling self-buffer (your last ~25 sentences).
- Extracts the topic, ticket IDs, a search query, and your most recent declarative statement using your configured AI provider (Claude / Gemini / local Ollama — same one you set in Settings).
- Pre-fetches matching tickets from local-todo into a warm cache.
- Broadcasts the result to the overlay and dashboard so you can see what the agent thinks you're working on.
When you press 2× Cmd, the answer is built using this warm cache instantly — no per-query LLM extraction or local-todo search. Result: faster answers, and the AI gets explicit ## Topic just discussed and ## What I just said sections in its prompt that resolve pronouns and vague follow-ups (e.g. "what's the difference now?" after you mentioned a ticket).
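
Assembling the prompt from the warm cache might look like the sketch below. The section headers match the text above, but the exact section set and function signature are assumptions for illustration:

```python
def build_query_prompt(question: str, topic: str, last_statement: str,
                       transcript_tail: list[str]) -> str:
    """Assemble a query prompt from the TopicWorker's warm cache.

    `topic` and `last_statement` come from the background extraction,
    so the hotkey path does no per-query LLM work of its own.
    """
    sections = [
        "## Topic just discussed\n" + topic,
        "## What I just said\n" + last_statement,
        "## Recent transcript\n" + "\n".join(transcript_tail),
        "## Question\n" + question,
    ]
    return "\n\n".join(sections)
```

Because the topic and last statement are spelled out explicitly, a vague follow-up like "what's the difference now?" resolves against them instead of dangling.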
## Two-stream audio (mic vs system)
Microphone and system audio (call participants) are now transcribed into separate buffers — self_buffer for what you say, other_buffer for what others say. The TopicWorker reads only self_buffer so the topic is grounded in your speech and not contaminated by what others say. The MP3 recording still mixes both for archival.
Required: wear headphones during meetings. Without headphones the laptop speaker feeds the mic, the call audio gets attributed to your buffer, and the topic detection becomes unreliable.
## Per-Meeting Context
By default, no repos or knowledge docs are loaded into Claude's context — loading everything would dilute accuracy. Instead, use the Context Selector in the Dashboard during a meeting:
- Chips show your configured repos — click to select
- Tree shows your knowledge base folders — check what's relevant
- Click Load Context — Claude now has access to only what you selected
- Ask questions — Claude answers using that specific context
## Post-Meeting
Open the Recordings screen in the GUI to:
- Play back the MP3
- Run full transcription (faster-whisper, timestamped)
- Generate a structured summary with Claude (key points, decisions, action items)
## Configuration

Config is saved to `~/.config/meeting-helper/config.json`. Run `meeting-helper setup` to reconfigure.
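
For orientation, a config might look roughly like this. The key names below are assumptions inferred from the setup prompts, not the actual schema — run `meeting-helper setup` rather than editing by hand:

```json
{
  "anthropic_api_key": "sk-ant-...",
  "deepgram_api_key": "",
  "recordings_dir": "~/meetings",
  "repos": ["~/code/my-app"],
  "knowledge_dir": "~/notes/knowledge",
  "whisper_model": "tiny"
}
```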
### Whisper model quality vs speed

| Model | Size | Speed | Accuracy |
|---|---|---|---|
| tiny | 75MB | Fastest | Good for real-time |
| small | 244MB | Fast | Better accuracy |
| base | 142MB | Medium | Balanced |

The default is tiny for real-time transcription. Post-meeting transcription always uses small for better quality.
## Check for Updates

```bash
meeting-helper version
```

Checks PyPI and upgrades automatically via pipx if a newer version is available.
## Troubleshooting

### Daemon not detecting meetings

- Make sure Screen Recording permission is granted in System Settings → Privacy & Security
- Run `meeting-helper logs` to see what's happening

### No transcription appearing

- If using Deepgram: check your API key with `meeting-helper status`
- If using local Whisper: ensure the model was downloaded (see First Run Setup above)
- Check `meeting-helper logs` for errors

### Overlay not visible

- Press 2×ESC to show/hide
- The overlay is intentionally invisible to screen share — check your physical display, not a recording

### Permission denied errors

- Go to System Settings → Privacy & Security → Microphone / Screen Recording / Accessibility
- Remove and re-add the terminal app or meeting-helper entry
### Testing without real audio

To test the topic/query flow without a microphone (e.g. you're on a plane, or just want to script a scenario):

```bash
curl -X POST localhost:7891/debug/say -d '{"role":"self","text":"I am working on ENGT-4212 — frontend contract layer."}'
sleep 6   # let TopicWorker debounce + LLM call finish
curl localhost:7891/current_topic | jq
curl -X POST localhost:7891/query -d '{"text":"what does the feature flag do?"}'
```

The `/debug/say` endpoint appends a sentence to `self_buffer` (or to `other_buffer` if `"role":"other"`), exactly as the transcribers do for real speech. The TopicWorker, the overlay/dashboard broadcasts, and the `/query` handler all behave identically to the real-audio path.
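
The buffer routing behind `/debug/say` can be sketched as follows (the buffer names match the text above; the standalone function is illustrative, not the daemon's actual handler):

```python
import json

# Rolling transcript buffers: what you say vs what others say.
self_buffer: list[str] = []
other_buffer: list[str] = []

def debug_say(payload: str) -> None:
    """Append one sentence to the buffer chosen by "role",
    mirroring what the real transcribers do for live speech."""
    msg = json.loads(payload)
    buffer = self_buffer if msg.get("role", "self") == "self" else other_buffer
    buffer.append(msg["text"])
```

Since the TopicWorker reads only `self_buffer`, a scripted `"role":"self"` sentence is enough to drive topic extraction end to end.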