Local-first meeting intelligence: ingests Google Meet (Gemini) and Granola transcripts, exposes them via an MCP server.
Meeting Memory
Your meeting transcripts, searchable and queryable through AI. Meeting Memory syncs transcripts from Google Meet (via "Notes by Gemini") and Granola, organizes them into a local archive, and lets you ask questions about your meetings using Claude Code or Cursor.
What you can do once set up:
- Ask Claude "what did I discuss with Polina last week?"
- Search across all your meeting transcripts at once
- Track action items and decisions across meetings
- Build up a knowledge base about people and projects that grows over time
- Compile your knowledge into a browsable wiki with cross-linked person profiles, project pages, and topic articles
Getting Started
You need Python 3.11+, uv, and the gcloud CLI (`brew install --cask google-cloud-sdk`).
```shell
# Clone and install
git clone https://github.com/asundiev-devrev/meeting-intelligence-system.git && cd meeting-intelligence-system
uv sync --all-extras

# Authenticate with Google Drive (one-time, opens browser)
gcloud auth login --enable-gdrive-access

# Guided setup (verifies auth, checks folder, writes .env)
uv run meeting-memory setup

# Sync your meetings
uv run meeting-memory sync

# Start auto-sync every 6 hours
uv run meeting-memory daemon
```
The setup command walks you through everything interactively. Authentication uses your existing gcloud session -- no GCP project, service account, or API keys needed.
Quick start for Granola users
If you use Granola for meeting transcripts, you can sync from its local cache instead of Google Drive:
```shell
git clone https://github.com/asundiev-devrev/meeting-intelligence-system.git && cd meeting-intelligence-system
uv sync --all-extras
uv run meeting-memory sync --source granola
```
Granola stores transcripts at `~/Library/Application Support/Granola/cache-v4.json`. No Google auth setup needed.
Granola limitations:
- Speaker attribution -- Only distinguishes "microphone" (you) vs "system" (remote audio). In 1:1s both speakers are identified correctly; in group meetings all remote speakers are labeled "Participant".
- Transcript retention -- Granola deletes transcripts after ~72 hours. Sync regularly or use the daemon (`uv run meeting-memory daemon`).
- For best data quality (per-speaker diarization, unlimited retention), use Gemini Notes.
Connecting to Claude Code
This is where it gets useful. Add Meeting Memory as an MCP server so Claude can read your meetings.
From the repo directory, run:

```shell
claude mcp add meeting-memory -- uv run --directory "$(pwd)" meeting-memory-mcp
```
Restart Claude Code, then just ask:
- "List my recent meetings"
- "What was discussed in the standup on March 5?"
- "Search my meetings for anything about the mobile redesign"
- "Summarize what I know about Shubham from our 1:1s"
- "Track action items from my last meeting with Polina"
Claude reads your meetings, extracts insights, and can write them back to a persistent knowledge store. Each conversation deepens what the system knows about your people, projects, and topics.
Skills (slash commands)
The repo ships with built-in skills that package common workflows into single commands:
| Command | What it does |
|---|---|
| `/meeting-prep <name>` | Prepare for a meeting — reviews past meetings, knowledge, and open action items for a person |
| `/meeting-digest [daily\|weekly]` | Summarize your recent meetings with decisions, action items, and themes |
| `/extract-insights [date]` | Read meetings and extract knowledge about people, projects, and topics into the knowledge store |
| `/action-items [review\|find-new]` | Dashboard of tracked action items, or scan meetings for new ones |
| `/index-slack <#channel>` | Index a Slack channel or thread into the knowledge store |
| `/index-notion <page title>` | Index a Notion page into the knowledge store |
| `/index-figma <file name>` | Index Figma file comments into the knowledge store |
Skills are auto-discovered when you open the project in Claude Code. Type `/` to see them in autocomplete. After `/meeting-prep` and `/meeting-digest` produce their output, you'll get interactive buttons to save insights to the knowledge base, share in Slack, or continue chatting.
The indexing skills (`/index-slack`, `/index-notion`, `/index-figma`) read content on-demand via MCP tools — no bulk scraping or separate API keys needed. Each insight is saved with source provenance (e.g., `slack: #ads-core-team`) so the wiki compiler can trace where knowledge came from.
Available MCP tools
| Tool | What it does |
|---|---|
| `list_meetings` | Browse meetings, filter by participant or date range |
| `get_meeting` | Read full meeting content |
| `search_meetings` | Full-text search across all transcripts |
| `get_knowledge` | Recall what's been learned about a person or topic |
| `save_insight` | Store an observation with source provenance (meeting, Slack, Notion, Figma) |
| `list_action_items` | See tracked action items (filter by status or owner) |
| `save_action_item` | Track a new action item |
| `update_action_item` | Mark an action item as done |
| `list_topics` | See all topics that have come up across meetings |
Meeting Companion (Google Meet)
A live companion that joins a Google Meet call, listens via captions, and
answers @Computer-prefixed questions in the Meet chat using DevRev's
Computer agent.
One-time setup
- Create a dedicated Gmail account for the bot (no 2FA, or app-password-enabled).
- Export your DevRev PAT:

  ```shell
  export DEVREV_APP_PAT="<your PAT>"
  ```

- Run setup — a Chromium window will open for you to log in:

  ```shell
  uv run meeting-companion setup
  ```

This saves the bot's auth state to `~/.config/meeting-memory/bot-auth.json`.
Running
```shell
uv run meeting-companion join https://meet.google.com/abc-defg-hij
```
- The bot opens Chromium, joins the Meet, and waits to be admitted from the lobby.
- Captions must be on in the Meet for the companion to follow the conversation.
- Type `@Computer <question>` in the Meet chat; the bot replies in chat.
- Ctrl+C in the terminal to leave.

Session data is written to `output/companion-sessions/<session-id>.jsonl` (per-turn log) and `<session-id>-transcript.txt` (live captions).
Configuration (optional)
Override defaults by creating `~/.config/meeting-memory/companion.toml`:

```toml
trigger = "@Computer"
devrev_agent_id = "don:core:dvrv-us-1:devo/0:ai_agent/198"
devrev_api_base = "https://api.devrev.ai"
classifier_model = "haiku"
computer_timeout_s = 45
```
Troubleshooting
- `meeting-companion doctor` validates config + DevRev connectivity.
- "Host didn't admit bot" → the human host must click Admit in the Meet lobby.
- "Companion selectors missing" → Google changed the Meet DOM. Fix in `src/meeting_memory/companion/selectors.py`.
Meeting Companion Extension (Chrome) + Local Server
A Chrome extension that answers @Computer mentions in Google Meet via DevRev
Computer, plus a local FastAPI server that persists transcripts and runs a
post-meeting pipeline writing action items and insights back to the
KnowledgeStore.
Install
- Server (one-time):

  ```shell
  brew tap asundiev-devrev/tap
  brew install meeting-companion
  brew services start meeting-companion
  ```

  Confirm: `curl http://127.0.0.1:8787/health` → `{"status":"ok",...}`.

- Extension (sideload, one-time):

  ```shell
  cd extension && npm install && npm run build
  ```

  Then in Chrome: `chrome://extensions` → enable "Developer mode" → click "Load unpacked" → pick the `extension/` directory.

- Config:
  - Click the extension icon in Chrome. Paste your DevRev PAT.
  - Export `DEVREV_APP_PAT` in your shell profile so the server also has it.
Use
- Open a Google Meet. The extension activates automatically on `meet.google.com/*` tabs.
- Captions get enabled automatically.
- In the Meet chat, type `@Computer <question>`. The reply appears in a private sidebar on the right. Click "Share in chat" to post it to the Meet chat with a `🤖 Computer:` prefix.
- Toggle "Speak replies out loud" in the sidebar for in-browser TTS.
- When you leave the meeting, the extension flushes the transcript + turn log to the local server. A digest lands in `~/Library/Application Support/meeting-companion/output/companion-digests/`.
Smoke test checklist
After install, verify:
- `curl http://127.0.0.1:8787/health` returns `{"status":"ok"}`.
- Extension popup shows "✓ Server OK" after you paste the PAT.
- Joining a Meet shows the Companion sidebar on the right.
- Captions turn on automatically.
- `@Computer hello` in chat produces a reply in the sidebar within ~15s.
- "Share in chat" posts the reply to the Meet chat.
- After leaving the meeting, a digest markdown file exists at `~/Library/Application Support/meeting-companion/output/companion-digests/<id>.md`.
- The digest references extracted action items if there were clear commitments in the transcript.
- `curl http://127.0.0.1:8787/action-items` returns items written back by the pipeline.
CLI Reference
Syncing meetings
```shell
uv run meeting-memory sync                      # Sync all transcripts
uv run meeting-memory sync --source gemini      # From a specific source
uv run meeting-memory sync --since 2026-01-01   # Only recent meetings
uv run meeting-memory sync --dry-run            # Preview without writing
uv run meeting-memory sync --force              # Reprocess everything
uv run meeting-memory normalize FILE_ID         # Process a single document
uv run meeting-memory backfill --since 2026-01-01
```
Daemon (auto-sync)
```shell
uv run meeting-memory daemon                    # Every 6 hours (default)
uv run meeting-memory daemon --interval-hours 4 # Custom interval
uv run meeting-memory daemon --once --verbose   # Single cycle, then exit
```

Runs in the foreground. Stop with Ctrl+C. Configure the interval via `DAEMON_INTERVAL_HOURS` in `.env`.
Enrichment (optional)
Enrichment adds derived metadata to your meetings after sync.
```shell
uv run meeting-memory enrich --steps meeting_type        # Classify 1:1, standup, etc.
uv run meeting-memory enrich --steps meeting_type,roles  # Also assign manager/report roles
```
| Step | What it does | Config needed |
|---|---|---|
| `meeting_type` | Classifies meetings as 1:1, standup, or empty | None |
| `roles` | Assigns manager/direct_report roles in 1:1s | `OWNER_EMAIL` in `.env` |
Knowledge compiler
Compile your knowledge store into a structured markdown wiki. Uses Claude Code (`claude -p`) for LLM synthesis -- no additional API keys needed.
```shell
uv run meeting-memory compile --dry-run           # Preview what would be compiled
uv run meeting-memory compile                     # Compile all stale entities
uv run meeting-memory compile --entity "Polina"   # Compile a single entity
uv run meeting-memory compile --type project      # Only projects
uv run meeting-memory compile --force             # Recompile everything
uv run meeting-memory compile --model sonnet      # Override LLM model
```
Output goes to `output/wiki/`:
| File | Content |
|---|---|
| `index.md` | Master index linking all pages, grouped by type |
| `timeline.md` | Chronological events by week |
| `people/{slug}.md` | Person profiles with role, activity, action items |
| `projects/{slug}.md` | Project pages with status, stakeholders, timeline |
| `topics/{slug}.md` | Topic articles with key discussions and open questions |
Pages include YAML frontmatter, meeting ID references for provenance, and auto-generated "See Also" cross-links. Incremental by default -- only entities whose insights have changed since last compile are reprocessed.
Inspection
```shell
uv run meeting-memory status    # Manifest summary
uv run meeting-memory verify    # Check corpus integrity
uv run meeting-memory init      # Validate setup
uv run meeting-memory sources   # List available sources
```
All commands
| Command | Purpose |
|---|---|
| `setup` | Interactive guided configuration |
| `init` | Validate setup |
| `sync` | Discover and process transcripts |
| `normalize` | Process a single document by file ID |
| `backfill` | Reprocess transcripts since a date |
| `enrich` | Run enrichment steps |
| `verify` | Check corpus integrity |
| `status` | Show manifest summary |
| `sources` | List available source adapters |
| `compile` | Compile knowledge store into a markdown wiki |
| `daemon` | Auto-sync on a schedule |
| `meeting-memory-mcp` | MCP server (separate entry point) |
Configuration
All settings are optional and have sensible defaults. Configure via .env or environment variables.
| Variable | Default | Description |
|---|---|---|
| `SOURCE_FOLDER_PATH` | `Meet Recordings` | Drive folder containing transcripts |
| `LOCAL_OUTPUT_DIR` | `output` | Where normalized files are stored |
| `MANIFEST_PATH` | `output/manifests/manifest.json` | Processing tracker |
| `SOURCE` | `gemini` | Transcript source adapter |
| `ENRICHMENT_STEPS` | (empty) | Auto-run enrichment (e.g., `meeting_type,roles`) |
| `OWNER_EMAIL` | (empty) | Your email, for the roles step |
| `DAEMON_INTERVAL_HOURS` | `6` | Hours between daemon cycles |
| `LOG_LEVEL` | `INFO` | Logging verbosity |
How It Works
Pipeline
```
Google Drive --> Normalize --> Store --> Enrich --> Compile (wiki) --> Query (MCP)
```
| Stage | What happens | Output |
|---|---|---|
| Source | Fetch raw transcripts from Google Drive via the Drive API | Raw documents |
| Normalize | Parse, extract metadata, generate IDs | Markdown + YAML frontmatter files |
| Store | Write to disk, track in manifest for idempotency | output/normalized/YYYY/MM/*.md |
| Enrich | Classify meeting types, assign roles | Updated meeting metadata |
| Compile | LLM synthesizes knowledge into cross-linked wiki pages | output/wiki/ markdown files |
| Query | MCP server exposes meetings + knowledge to AI assistants | Natural-language answers |
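The Store stage's idempotency comes from the manifest: a record of what has already been processed, so re-running `sync` skips unchanged documents. As an illustration only (the real tracker lives in `manifest.py` and its fields are not shown here), a minimal manifest could map each source file ID to a content hash:

```python
import hashlib


def content_hash(raw: str) -> str:
    """Stable fingerprint of a raw transcript document."""
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()


def needs_processing(manifest: dict[str, str], file_id: str, raw: str) -> bool:
    """True if the document is new or changed since the last sync."""
    return manifest.get(file_id) != content_hash(raw)


def mark_processed(manifest: dict[str, str], file_id: str, raw: str) -> None:
    """Record the document so the next sync can skip it if unchanged."""
    manifest[file_id] = content_hash(raw)
```

Re-running the pipeline over the same inputs then becomes a no-op, which is what makes `sync --force` (ignore the manifest) meaningful.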
What gets extracted
Gemini "Notes by Gemini" Google Docs have two sections:
- Notes -- AI-generated summary with decisions and action items
- Transcript -- timestamped, speaker-labeled transcript
The pipeline extracts metadata (date, title, participants), the transcript, and the AI summary, then writes a normalized Markdown file with YAML frontmatter.
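As an illustration of the shape of a normalized file — the frontmatter field names here are hypothetical, the authoritative schema is in `schema.py`:

```markdown
---
id: 2026-03-06-alice-x-bob-weekly-a1b2c3d4
title: Alice x Bob weekly
date: 2026-03-06
participants: [Alice, Bob]
source: gemini
---

## Summary

AI-generated notes: decisions and action items.

## Transcript

Alice (00:01): ...
```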
Output format
```
output/normalized/YYYY/MM/{meeting_id}.md
```
Meeting IDs are deterministic: `YYYY-MM-DD-slugified-title-hash8` (e.g., `2026-03-06-alice-x-bob-weekly-a1b2c3d4`).
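An ID of this shape can be sketched as below. This is a hypothetical reconstruction — the real logic lives in `meeting_id.py`, and what exactly feeds the hash is an assumption:

```python
import hashlib
import re


def slugify(title: str) -> str:
    """Lowercase the title and collapse non-alphanumerics into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def meeting_id(date: str, title: str) -> str:
    """YYYY-MM-DD-slugified-title-hash8; same inputs always yield the same ID."""
    digest = hashlib.sha256(f"{date}:{title}".encode("utf-8")).hexdigest()[:8]
    return f"{date}-{slugify(title)}-{digest}"
```

Determinism is the point: re-syncing the same Drive document regenerates the same ID, so the manifest can recognize it instead of creating a duplicate.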
Knowledge store
When you discuss meetings with Claude, it can save insights to `output/knowledge/knowledge.json` -- a persistent store of what's been learned about people, projects, topics, and action items. This accumulates over time across conversations.
Insights carry source provenance -- each one records where it came from (`source_type` + `source_ref`), so the wiki compiler can trace knowledge back to its origin across meetings, Slack threads, Notion pages, and Figma comments.
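As a sketch of what a provenance-carrying record could look like — the fields beyond `source_type` and `source_ref` are assumptions for illustration, not the store's actual schema:

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class Insight:
    entity: str       # person, project, or topic the insight is about
    text: str         # the observation itself
    source_type: str  # "meeting", "slack", "notion", or "figma"
    source_ref: str   # e.g. a meeting ID or "slack: #ads-core-team"


insight = Insight(
    entity="mobile redesign",
    text="Launch slipped pending design review.",
    source_type="slack",
    source_ref="slack: #ads-core-team",
)
record = json.dumps(asdict(insight))  # what a JSON-backed store would persist
```

Because every record names its origin, a compiled wiki page can cite the exact meeting, channel, or document behind each claim.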
Sources
| Source | Type | Description |
|---|---|---|
| `gemini` | Sync adapter | Google Meet transcripts via Gemini Notes |
| `granola` | Sync adapter | Granola local meeting transcripts |
| Slack | On-demand (`/index-slack`) | Channel messages and threads via Slack MCP |
| Notion | On-demand (`/index-notion`) | Page content via Notion MCP |
| Figma | On-demand (`/index-figma`) | File comments via Figma MCP |
Sync adapters run via meeting-memory sync. On-demand sources are indexed interactively through skills -- no bulk scraping, no separate API keys.
Architecture
```
src/meeting_memory/
  cli.py            # CLI entry point (Click)
  config.py         # Settings from .env (pydantic-settings)
  models.py         # Data models (Meeting, Participant, Source, etc.)
  schema.py         # Markdown+YAML serialization
  manifest.py       # Idempotent processing tracker
  meeting_id.py     # Deterministic ID generation
  pipeline.py       # Orchestration: discover -> normalize -> store
  knowledge.py      # Knowledge store (people, topics, action items)
  llm.py            # LLM client (shells out to claude -p)
  mcp_server.py     # MCP server for AI assistants
  compiler/
    collector.py    # Aggregate entity data into bundles
    manifest.py     # Incremental compile tracking
    prompts.py      # Prompt templates per entity type
    generators.py   # LLM-powered page generation
    index.py        # Deterministic index + timeline generation
    crosslinks.py   # Backlink insertion across pages
  adapters/
    base.py         # SourceAdapter protocol
    gemini.py       # Google Drive adapter (Gemini Notes)
    granola.py      # Granola local cache adapter
    registry.py     # Adapter registry
  normalization/
    gemini_parser.py # Gemini Notes parser
    normalizer.py    # Source-agnostic normalizer
  enrichment/
    base.py         # EnrichmentStep protocol + runner
    meeting_type.py # 1:1 / standup classifier
    roles.py        # Manager/report role assignment
    registry.py     # Step registry
  storage/
    base.py         # StorageBackend protocol
    local.py        # Local filesystem storage
    drive_storage.py # Google Drive storage (placeholder)
  gws/
    client.py       # Google Drive API client (via gcloud auth)
```
Extending
New source adapter: Create `adapters/new_source.py` implementing the `SourceAdapter` protocol, register it in `adapters/registry.py`, and use it with `--source new-source`.
New enrichment step: Create `enrichment/new_step.py` implementing the `EnrichmentStep` protocol, register it in `enrichment/registry.py`, and use it with `--steps new_step`.
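The adapter contract can be sketched like this. The method names and signatures here are a hypothetical shape inferred from the description — the authoritative protocol is `adapters/base.py`:

```python
from typing import Iterable, Protocol


class SourceAdapter(Protocol):
    """Assumed shape of the adapter contract (see adapters/base.py)."""
    system_name: str

    def discover(self) -> Iterable[str]:
        """Yield IDs of transcripts available from this source."""
        ...

    def fetch(self, doc_id: str) -> str:
        """Return the raw document for one ID."""
        ...

    def parse(self, raw: str) -> dict:
        """Extract title, date, participants, transcript from a raw document."""
        ...


class MySourceAdapter:
    """Skeleton for a new adapter; register it in adapters/registry.py."""
    system_name = "my-source"

    def discover(self) -> Iterable[str]:
        return ["doc-1"]  # e.g. list files in a local folder

    def fetch(self, doc_id: str) -> str:
        return f"raw contents of {doc_id}"

    def parse(self, raw: str) -> dict:
        return {"title": "Example", "transcript": raw}
```

Because the adapter is a structural protocol, the pipeline only needs these three methods plus `system_name`; no base-class inheritance is required.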
Programmatic usage
```python
from meeting_memory.adapters.registry import get_adapter
from meeting_memory.config import Settings
from meeting_memory.normalization.normalizer import normalize
from meeting_memory.schema import meeting_to_markdown

settings = Settings()
adapter = get_adapter("gemini", settings)
raw_doc = adapter.fetch("DRIVE_FILE_ID")
meeting = normalize(raw_doc, source_system=adapter.system_name, parser=adapter.parse)
markdown = meeting_to_markdown(meeting)
```
Current Limitations
- Knowledge compiler requires the Claude Code CLI (`claude -p`) for LLM calls
- No Calendar/Meet metadata enrichment (event IDs, meet links)
- Granola: limited speaker attribution in group meetings (see Granola limitations above)
- Participant emails not resolved -- only names from transcripts
- Knowledge store is JSON-backed -- works for individual use, not concurrent access
Running Tests
```shell
uv run pytest          # ~276 tests
uv run pytest --cov    # with coverage
```
Google Drive Authentication
Meeting Memory uses the Google Drive REST API with gcloud for authentication. This is simpler than the previous rclone-based approach -- no remote configuration or separate OAuth flow needed.
How it works
- `gcloud auth login --enable-gdrive-access` grants Drive access to your Google account (one-time, opens browser).
- The `GoogleDriveClient` calls `gcloud auth print-access-token` to get a bearer token.
- Tokens are cached in memory and automatically refreshed when they expire.
- All Drive operations (list files, export docs) use direct HTTP calls to the Drive v3 REST API.
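In outline, the token flow looks like the following. This is a simplified sketch, not the project's `GoogleDriveClient` (which also caches and refreshes tokens):

```python
import subprocess
import urllib.request


def gcloud_access_token() -> str:
    """Shell out to gcloud for a short-lived OAuth bearer token."""
    result = subprocess.run(
        ["gcloud", "auth", "print-access-token"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def drive_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request against the Drive v3 REST API."""
    return urllib.request.Request(
        f"https://www.googleapis.com/drive/v3/{path}",
        headers={"Authorization": f"Bearer {token}"},
    )
```

A listing call is then just `urllib.request.urlopen(drive_request("files", gcloud_access_token()))` — no OAuth client registration of your own, because gcloud's built-in client issued the token.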
Troubleshooting
If you see authentication errors:
```shell
# Re-authenticate with Drive scope
gcloud auth login --enable-gdrive-access

# Verify it works
gcloud auth print-access-token
```
No GCP project, service account, or API key is required -- gcloud uses its built-in OAuth client.