# getbased MCP Server

MCP server for querying blood work data and a knowledge base from getbased.
An MCP server that exposes blood work data and an optional RAG knowledge base from getbased as tools. Works with any MCP-compatible client (Claude Code, Hermes, Claude Desktop, etc.).
Installing for the first time? The getbased-agent-stack meta-package bundles this MCP with the RAG engine it talks to, plus example configs for Claude Code and Hermes. One command and you're up.
## How it works

```
getbased (browser)
├── your data, your mnemonic
├── generates a read-only token
└── pushes lab context to the sync gateway on every save

Sync Gateway (sync.getbased.health/api/context)
└── stores context text behind token auth

RAG Server (localhost, optional)
├── vector database with embedded chunks
├── embedding model for semantic search
└── your curated health knowledge base

This MCP Server (on your machine)
├── fetches blood work context from the sync gateway
├── queries the RAG server for knowledge base searches (optional)
└── exposes everything as tools to any MCP client
```
Your mnemonic never leaves your browser. The MCP server receives the same lab context text the getbased AI chat uses — not raw data.
## Tools

| Tool | Description |
|---|---|
| `getbased_lab_context` | Full lab summary with biomarkers, context cards, supplements, goals. Pass `profile` to target a specific profile. |
| `getbased_section` | Get a specific section (e.g. hormones, lipids) or list all available sections. |
| `getbased_list_profiles` | List available profiles. |
| `knowledge_search` | Semantic search across the active library of your knowledge base (requires RAG server). Returns relevant passages with source attribution. |
| `knowledge_list_libraries` | List all knowledge base libraries and show which is active. |
| `knowledge_activate_library` | Switch the active library; subsequent searches target the new one until switched again. |
| `knowledge_stats` | Per-source chunk counts for the active library; useful for diagnosing missing results. |
| `getbased_lens_config` | Show the RAG endpoint config for getbased's Knowledge Base (External server). |
### getbased_section

Query-aware context: pull just the section you need instead of the full dump. This saves tokens and allows deeper analysis of specific areas.

```python
# No args: returns a section index with names, updated dates, and line counts
getbased_section()

# With a section name: returns just that section's content
getbased_section(section="hormones")

# With a profile: query a specific profile
getbased_section(section="hormones", profile="mne8m9hf")
```

Section names are matched by prefix, so `hormones` matches `hormones updated:2026-03-13`.
### knowledge_search

What is RAG? Retrieval-Augmented Generation (RAG) is a technique in which an AI assistant's responses are grounded in a specific knowledge base. Instead of relying solely on training data, the assistant first searches a curated collection of documents for relevant passages, then uses those passages to inform its answer. This makes the AI's output more accurate, more specific, and traceable to real sources.

The `knowledge_search` tool searches your knowledge base by semantic similarity: it finds passages that match the meaning of your query, not just keywords. Results include the passage text and source attribution.

```python
# Basic search
knowledge_search(query="blue light DHA mitochondrial damage")

# With a result count (1-10, default 5)
knowledge_search(query="MTHFR methylation folate", n_results=5)
```

Note: this tool requires the RAG server to be running. Without it, all blood work tools still work; the MCP degrades gracefully.
### Multi-library (v0.2+)

The Lens server (getbased-rag 0.2+) supports multiple libraries: keep research papers, clinical guides, and personal notes in separate collections and switch between them. `knowledge_search` always targets the currently active library.

```python
# See what's available and which is active
knowledge_list_libraries()

# Switch. Subsequent knowledge_search calls hit this library until switched again
knowledge_activate_library(library_id="<id-from-list>")

# Confirm what's indexed in the active library
knowledge_stats()
```
### Multi-profile

The gateway stores context per profile ID. To work with multiple profiles:

- Use `getbased_list_profiles` to see available profiles and their IDs
- Pass `profile="id"` to any tool to query a specific profile
- Omit the `profile` param to use the default profile
- Each profile's context is pushed automatically when data is saved or the profile is switched in getbased
## Setup

### 1. Enable messenger access in getbased

Go to Settings > Data > Messenger Access and toggle it on. Copy the read-only token.

### 2. Set up a RAG server (optional; for knowledge_search)

The knowledge base runs as a separate service. You need:

- A vector database (e.g. Qdrant, ChromaDB) loaded with your document chunks and embeddings
- A FastAPI (or similar) server that accepts `POST /query` with `{version: 1, query: "...", top_k: N}` and returns `{chunks: [{text: "...", source: "..."}]}`
- An embedding model (e.g. BGE-M3) for semantic search

The RAG server handles embedding, similarity search, and filtering. This MCP just sends HTTP queries to it; no models are loaded here.
RAG server contract:

| Field | Required | Description |
|---|---|---|
| `POST /query` | Yes | Accepts a JSON body with `version` (int), `query` (string), `top_k` (int) |
| `Authorization` | Recommended | Bearer token auth |
| `GET /health` | Optional | Returns `{"status": "ok", "rag_ready": bool, "chunks": int}` |
| Response | Yes | `{"chunks": [{"text": "...", "source": "..."}]}` |
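The request and response shapes in the contract table can be exercised with two small helpers (a sketch; the field names come from the table above, and the helper names are ours, not part of the MCP's code):

```python
def build_query_payload(query: str, top_k: int = 5) -> dict:
    """Build the JSON body the /query endpoint expects."""
    return {"version": 1, "query": query, "top_k": top_k}

def parse_chunks(response: dict) -> list[tuple[str, str]]:
    """Extract (text, source) pairs from a /query response;
    an empty or chunk-less response yields an empty list."""
    return [(c["text"], c["source"]) for c in response.get("chunks", [])]
```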
### 3. Configure your MCP client

#### Claude Code / Claude Desktop

Add to your MCP config (`~/.claude/claude_desktop_config.json` or similar):

```json
{
  "mcpServers": {
    "getbased": {
      "command": "python3",
      "args": ["/path/to/getbased_mcp.py"],
      "env": {
        "GETBASED_TOKEN": "your-token-here"
      }
    }
  }
}
```
#### Hermes Agent

```shell
hermes mcp add getbased \
  --command python3 \
  --args /path/to/getbased_mcp.py
```

Then set `GETBASED_TOKEN` in `~/.hermes/.env` or in the MCP server's env config in `config.yaml`:

```yaml
mcp_servers:
  getbased:
    command: python3
    args: [/path/to/getbased_mcp.py]
    env:
      GETBASED_TOKEN: your-token-here
```
### 4. Use it

Ask about your labs in any connected conversation:

- "How's my vitamin D?"
- "What markers are out of range?"
- "Summarize my latest blood work"
- "What does the knowledge base say about blue light and DHA?"
## Environment variables

| Variable | Required | Description |
|---|---|---|
| `GETBASED_TOKEN` | Yes | Read-only token from getbased Settings > Data > Messenger Access |
| `GETBASED_GATEWAY` | No | Context gateway URL (default: `https://sync.getbased.health`) |
| `LENS_URL` | No | RAG server URL (default: `http://localhost:8322`). Overrides `LENS_PORT` |
| `LENS_PORT` | No | RAG server port, only used to build the default `LENS_URL` (default: 8322) |
| `LENS_API_KEY_FILE` | No | Path to the RAG API key file. Default: `$XDG_DATA_HOME/getbased/lens/api_key` (getbased-rag's canonical location). If that file doesn't exist but the legacy `~/.hermes/rag/lens_api_key` does, the legacy path is used instead, so upgrades from standalone getbased-mcp ≤ 0.1.0 keep working without config changes. |
| `LENS_MCP_ACTIVITY_LOG` | No | JSONL path where tool-call activity is appended. Default: `$XDG_STATE_HOME/getbased/mcp/activity.jsonl`. Each record: `{ts, tool, duration_ms, ok, error?}`; arguments are never logged (queries may contain sensitive health info). Set to `off` / `false` / `0` to disable. The getbased-dashboard Activity tab tails this file. |
## Custom Knowledge Source (getbased app)

The same RAG server that powers `knowledge_search` for your AI client can also back the in-app AI chat. To connect them:

1. Run `getbased_lens_config`; it returns the endpoint URL, API key, and recommended `top_k`
2. In getbased, go to Settings → AI → Custom Knowledge Source
3. Paste the endpoint URL and API key, and set `top_k` to 5
4. Enable it; the chat-header Lens badge lights up green when active

Every chat question and focus card will now be enriched with RAG-retrieved passages from your knowledge base.
## Troubleshooting

### knowledge_search returns "Lens server not reachable"

The RAG server isn't running. Start it and verify with:

```shell
curl http://localhost:8322/health
```

### knowledge_search returns "Lens API key not found"

getbased-rag generates its API key on first start and writes it to `$XDG_DATA_HOME/getbased/lens/api_key` (e.g. `~/.local/share/getbased/lens/api_key` on Linux). If you're upgrading from the standalone getbased-mcp ≤ 0.1.0 and your key is at `~/.hermes/rag/lens_api_key`, that legacy path is still auto-detected; no config change is needed. If the file is missing entirely, restart the RAG server and it will create a new one.

### knowledge_list_libraries / knowledge_stats return "this lens server doesn't expose library management"

The lens server you're pointed at is older than getbased-rag 0.1.0 and doesn't implement the `/libraries` or `/stats` endpoints. `knowledge_search` still works against older lens servers since `/query` is protocol-stable. To get library management, either upgrade the lens or set `LENS_URL` to a library-capable endpoint.

### Blood work tools work but knowledge_search doesn't

That's expected; they're independent. Blood work tools talk to the sync gateway, while `knowledge_search` talks to the RAG server. The MCP degrades gracefully: if the RAG server is down, all blood work tools continue to work normally.
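The "degrades gracefully" behaviour boils down to catching connection failures from RAG-backed tools and returning a structured error instead of crashing the whole MCP. A sketch of that pattern (ours, not the actual implementation; the error wording mirrors the troubleshooting message above):

```python
def call_with_fallback(tool_fn, *args, **kwargs):
    """Run a RAG-backed tool; on connection failure, return a
    structured error so the blood work tools remain unaffected."""
    try:
        return {"ok": True, "result": tool_fn(*args, **kwargs)}
    except (ConnectionError, OSError) as exc:
        return {"ok": False, "error": f"Lens server not reachable: {exc}"}
```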
## Security

- Read-only: the token grants access to lab context text only; no raw data, no write access
- Self-hosted: the MCP server runs on your own machine
- Revocable: regenerate the token in getbased to revoke access instantly
- No mnemonic exposure: the token is independent of your sync mnemonic
- No models in-process: RAG queries go through the external server, so no embedding models are loaded in the MCP process
## Related projects

- getbased: the health dashboard. This MCP reads the same lab context the in-app AI chat uses, and queries the same Knowledge Source endpoint configured in Settings → AI → Custom Knowledge Source. The endpoint contract is shared; one server backs both the app and this MCP.

## License

GPL-3.0
## File details: getbased_mcp-0.2.3.tar.gz

- Size: 18.1 kB
- Tags: Source
- Uploaded using Trusted Publishing: yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4dbd3405d231d746642cca9bfc44e40283dd082b56c69a41971f3df6f9e45ecb` |
| MD5 | `5f1213cbb04dcb8416a6a70a8354ebfd` |
| BLAKE2b-256 | `d14e9306e5d7695369203d5cb87a7407dd6cb6be1ca1b40b5a438facfcf3a467` |
### Provenance

Attestation for getbased_mcp-0.2.3.tar.gz:

- Publisher: publish.yml on elkimek/getbased-agents
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: getbased_mcp-0.2.3.tar.gz
- Subject digest: 4dbd3405d231d746642cca9bfc44e40283dd082b56c69a41971f3df6f9e45ecb
- Sigstore transparency entry: 1340653872
- Permalink: elkimek/getbased-agents@0646896d2074afbd7004589fc037fa432ccb7fe1
- Branch / Tag: refs/tags/mcp-v0.2.3
- Owner: https://github.com/elkimek
- Access: public
- Token issuer: https://token.actions.githubusercontent.com
- Runner environment: github-hosted
- Publication workflow: publish.yml@0646896d2074afbd7004589fc037fa432ccb7fe1
- Trigger event: push
## File details: getbased_mcp-0.2.3-py3-none-any.whl

- Size: 13.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing: yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `efc8ef172d10c588b8b3501d5db130a736a15b9eb8841cb0049ec09f775b6404` |
| MD5 | `fdb57a1339eb2bf31805d6cb4e7e7c22` |
| BLAKE2b-256 | `697a2b29d7683c4b696053eb9ae8d41ee0e2f6e3e45d598623eee42779a0a29d` |
### Provenance

Attestation for getbased_mcp-0.2.3-py3-none-any.whl:

- Publisher: publish.yml on elkimek/getbased-agents
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: getbased_mcp-0.2.3-py3-none-any.whl
- Subject digest: efc8ef172d10c588b8b3501d5db130a736a15b9eb8841cb0049ec09f775b6404
- Sigstore transparency entry: 1340653879
- Permalink: elkimek/getbased-agents@0646896d2074afbd7004589fc037fa432ccb7fe1
- Branch / Tag: refs/tags/mcp-v0.2.3
- Owner: https://github.com/elkimek
- Access: public
- Token issuer: https://token.actions.githubusercontent.com
- Runner environment: github-hosted
- Publication workflow: publish.yml@0646896d2074afbd7004589fc037fa432ccb7fe1
- Trigger event: push