
biolit

mcp-name: io.github.rachadele/biolit

LLM-assisted biomedical literature screening and structured extraction. Accepts PubMed alert emails and mixed lists of PMIDs, DOIs, and GEO accessions in any combination. Retrieves full text from PMC, Europe PMC, bioRxiv/medRxiv, Unpaywall, and Semantic Scholar. Supports multiple LLM providers and exposes all functionality as an MCP server.

Setup

Requirements: Python 3.8+

Install from PyPI:

pip install biolit

Or install from source for development:

pip install -e .

Copy .env.example to .env and add your API key:

cp .env.example .env
# edit .env and set ANTHROPIC_API_KEY (or OPENAI_API_KEY)

Usage

The tool accepts a PubMed alert email (.eml) or a plain-text file of identifiers, as well as inline identifiers via --ids. Identifiers can be PMIDs, DOIs, or GEO accessions — mixed lists are supported in a single run.

| Input | How to pass | Example |
| --- | --- | --- |
| PubMed alert email | positional .eml file | alert.eml |
| BibTeX file | positional .bib file | refs.bib |
| Identifier file (mixed) | positional plain-text file, one per line | identifiers.txt |
| Inline identifiers | --ids flag, comma-separated | --ids 41795042,GSE53987,10.1101/2025.03.17.25324098 |

Use --default to run with schizophrenia genomics defaults (no prompts):

biolit docs/alert.eml --default
biolit docs/pmids.txt --default
biolit docs/geo_accessions.txt --default
biolit --ids 41795042,41792186,GSE53987 --default
biolit --ids 10.1101/2025.03.17.25324098 --default

Or specify criterion and fields as flags:

biolit identifiers.txt \
  --criterion "Is this about treatment-resistant schizophrenia?" \
  --fields "methodology, sample_size, treatment, outcomes"

Add --markdown (or --md) to also write a prose .md summary alongside the CSV. Each record gets a markdown section with ### field subsections; records that failed or were skipped appear as stub entries:

biolit refs.bib --config my_config.json --markdown
biolit refs.bib --config my_config.json --markdown --markdown-max-tokens 2048
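The per-record markdown layout described above can be sketched as a small rendering helper. This is illustrative only (`render_record_md` and the record-dict shape are assumptions, not biolit's internals); it shows one section per record, `###` subsections per extracted field, and a stub entry for failed or skipped records:

```python
def render_record_md(record):
    """Hypothetical sketch of the per-record markdown layout: a heading per
    record, '###' subsections per extracted field, and a one-line stub for
    records that failed or were skipped."""
    lines = [f"## {record.get('title', record.get('id', 'unknown'))}"]
    if record.get("error"):
        # failed/skipped records appear as stub entries with a failure note
        lines.append(f"_Skipped: {record['error']}_")
        return "\n\n".join(lines)
    for field, value in record.get("fields", {}).items():
        lines.append(f"### {field}")
        lines.append(str(value))
    return "\n\n".join(lines)
```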

Or use a JSON config file to store reusable parameters (CLI flags take precedence). The config can include ids or input_file (path to an .eml, .bib, or identifier list), and "markdown": true to enable markdown output:

biolit alert.eml --config my_config.json
biolit refs.bib --config my_config.json   # DOIs extracted from .bib automatically
biolit --config my_config.json            # ids or input_file supplied by config
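The precedence rule (CLI flags override config values) amounts to a simple dict merge. The helper below is a sketch under that assumption; `load_options` and its argument names are illustrative, not biolit's API:

```python
import json
from pathlib import Path

def load_options(config_path, cli_flags):
    """Sketch of the described precedence: start from the JSON config file,
    then let any CLI flag that was actually set (non-None) override it."""
    options = {}
    if config_path:
        options.update(json.loads(Path(config_path).read_text()))
    options.update({k: v for k, v in cli_flags.items() if v is not None})
    return options
```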

The fields key in a config file can be a comma-separated string or a JSON object mapping field names to extraction descriptions. When a string is used, an extra LLM call converts the field names into descriptions before extraction. When a dict is used, that call is skipped — the descriptions are passed directly to the model:

{
  "fields": {
    "tf_name": "HGNC symbol of the transcription factor perturbed in this experiment",
    "organism": "scientific name of the organism used",
    "platform": "GPL accession of the microarray platform"
  }
}
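Normalizing the two accepted shapes of the fields key might look like the sketch below (an illustrative helper, not biolit's internal function). A comma-separated string yields bare names whose descriptions would then be generated by the extra LLM call (stubbed as None here); a dict passes descriptions through unchanged:

```python
def coerce_fields(fields):
    """Accept either a comma-separated string of field names or a dict
    mapping field names to extraction descriptions."""
    if isinstance(fields, str):
        # descriptions to be filled in by an LLM call before extraction
        return {name.strip(): None for name in fields.split(",") if name.strip()}
    # dict: descriptions are passed directly to the model
    return dict(fields)
```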

Omit --criterion to skip screening (all records are extracted). Omit --fields to use the default fields (methodology, sample_type, causal_claims, summary):

# fetch + extract with defaults (no screening)
biolit alert.eml

# fetch + screen only, then extract with defaults
biolit alert.eml --criterion "Is this about treatment-resistant schizophrenia?"

Single-record screening

Use biolit screen to quickly check one paper or GEO record for relevance without running the full extraction pipeline:

biolit screen --pmid 41627908 --default
biolit screen --accession GSE53987 --default
biolit screen --doi 10.64898/2026.02.16.706214 --default
biolit screen --pmid 41627908 --criterion "Is this about treatment-resistant schizophrenia?"

Output is a single line to stdout:

RELEVANT [abstract] — Paper uses GWAS to investigate schizophrenia risk loci.
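For scripting around biolit screen, the one-line output can be parsed with a small regex. Note the assumptions here: only the RELEVANT example above is shown in this README, so the negative verdict ("NOT RELEVANT") and the exact separator are guesses, and `parse_screen_line` is a hypothetical helper:

```python
import re

def parse_screen_line(line):
    """Parse 'VERDICT [text_source] — reason' into a dict, or return None
    if the line does not match the assumed shape."""
    m = re.match(r"^(RELEVANT|NOT RELEVANT)\s+\[([^\]]+)\]\s+—\s+(.*)$", line.strip())
    if not m:
        return None
    verdict, source, reason = m.groups()
    return {"relevant": verdict == "RELEVANT", "text_source": source, "reason": reason}
```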

Mixed identifier lists

PMIDs, DOIs, and GEO accessions can be freely mixed in a file or via --ids. Each identifier is auto-detected by format:

  • 41795042 → PMID (all digits)
  • 10.1101/2025.03.17.25324098 → DOI (starts with 10.)
  • GSE53987 → GEO accession (starts with GSE, GDS, GSM, or GPL)
biolit --ids 41795042,GSE53987,10.1101/2025.03.17.25324098 --default
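The detection rules above are simple enough to restate as code. This is a sketch of the described behaviour, not biolit's internal function (its actual name and edge-case handling may differ):

```python
import re

def detect_identifier(identifier):
    """Classify an identifier using the rules listed above."""
    s = identifier.strip()
    if s.isdigit():
        return "pmid"      # all digits
    if s.startswith("10."):
        return "doi"       # DOIs start with the 10. prefix
    if re.match(r"(?i)^(GSE|GDS|GSM|GPL)\d+$", s):
        return "geo"       # GEO accession prefixes
    return "unknown"
```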

GEO records additionally include a linked_pmids column. All record types share pmid, doi, and geo_accession columns (null when not applicable).

Full-text retrieval

Full-text retrieval runs automatically for every PMID and DOI (including preprints). For GEO records, the pipeline attempts full-text retrieval via each linked PMID in order, falling back to the GEO record metadata if no linked paper has accessible full text. The pipeline tries each source in order:

  1. PMC JATS XML (open access)
  2. Europe PMC JATS XML (broader open-access coverage)
  3. Preprint XML (bioRxiv / medRxiv)
  4. Unpaywall PDF (requires --unpaywall-email)
  5. Semantic Scholar open-access PDF
  6. Abstract fallback

To enable Unpaywall (step 4), pass your email:

biolit alert.eml --default --unpaywall-email you@example.com

Limit which sections are sent to the LLM:

biolit alert.eml --default --sections methods,results
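The source chain above is a first-hit loop: try each source in order, skip failures, and let the final entry (the abstract) act as the fallback. A minimal sketch of that pattern, with illustrative names:

```python
def first_available(paper, sources):
    """Try each (name, fetch_fn) pair in order; the first non-empty result
    wins. A source that raises simply falls through to the next one."""
    for name, fetch in sources:
        try:
            text = fetch(paper)
        except Exception:
            continue
        if text:
            return name, text
    return None, None
```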

LLM providers

The tool supports Anthropic (default), OpenAI, and local Ollama models:

# OpenAI
biolit pmids.txt --default --provider openai --model gpt-4o

# Ollama (local)
biolit pmids.txt --default --provider ollama --model llama3

You can also set LLM_PROVIDER and LLM_MODEL as environment variables.

Output

Each run creates a timestamped directory (e.g. run_20260313_142000/) containing:

  • results.csv — one row per relevant record
  • results.md — prose markdown summary (written when --markdown or "markdown": true in config)
  • artifacts/<id>/ — per-record folder with the text sent to the LLM, metadata, and any retrieved full-text files

Records that fail at any pipeline stage (fetch error, not found, no content, screening or extraction error) are excluded from the CSV but appear in the markdown as stub entries with a failure note.

With default fields, the CSV columns are:

| Column | Description |
| --- | --- |
| title | Paper title |
| authors | Author list (comma-separated; parsed from PubMed XML, bioRxiv/medRxiv API, or GEO contributors) |
| url | Link to PubMed, GEO, or DOI |
| pmid | PubMed ID (null for unindexed preprints) |
| doi | DOI (null for GEO records) |
| geo_accession | GEO accession (null for non-GEO records) |
| text_source | Where the text came from (abstract, pmc_fulltext, europepmc_fulltext, preprint_fulltext, unpaywall_pdf, s2_pdf, geo_linked_fulltext, geo_linked_abstract, geo_record) |
| citation_count | Citation count from Semantic Scholar (null if not found) |
| methodology | General method (e.g. GWAS, scRNA-seq, proteomics) |
| sample_type | Tissue/sample type and origin |
| causal_claims | Statements about causes of schizophrenia inferred from the data |
| summary | 2-3 sentence plain-language summary for triage |
GEO records additionally include a linked_pmids column listing all associated PubMed IDs.

The CSV can be imported directly into Google Sheets (File → Import).

MCP server

biolit ships an MCP server that exposes the pipeline as tools for any MCP-compatible client (Claude Desktop, Claude CLI, OpenAI Agents SDK, etc.).

Start the server:

biolit-mcp

Or test interactively with the MCP inspector:

mcp dev biolit/mcp_server.py

Configure Claude Desktop

Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "biolit": {
      "command": "biolit-mcp"
    }
  }
}

Restart Claude Desktop. The tools will appear in the tool picker.

Configure Claude CLI

Add a .mcp.json in your project root:

{
  "mcpServers": {
    "biolit": {
      "command": "biolit-mcp"
    }
  }
}

Available tools

Batch pipeline (equivalent to the biolit CLI):

| Tool | Description |
| --- | --- |
| run_pipeline | Fetch, optionally screen, and optionally extract a mixed list of PMIDs, DOIs, and/or GEO accessions; write results CSV (and optionally a .md summary when markdown=True). Accepts ids (comma-separated), bib_path (.bib file), or ids_file (plain-text identifier file). Use max_tokens to cap input text (default 12500), extraction_max_tokens for field extraction output (default 4096), and markdown_max_tokens for markdown rendering (default 1024). Pass 0 for any token param to use the default. All parameters are optional; pass only config_path to drive the entire run from a JSON file. |

Low-level (for custom workflows):

| Tool | Description |
| --- | --- |
| fetch_pubmed_metadata | Fetch PubMed metadata by PMID |
| fetch_geo_record | Fetch and parse a GEO record by accession |
| fetch_fulltext | Retrieve full text for a PMID (6-step chain) |
| fetch_geo_fulltext | Retrieve full text for a GEO accession via its linked PMIDs |
| screen_paper | LLM relevance screen given pre-fetched text |
| extract_fields | Structured field extraction given pre-fetched text |
| resolve_doi | Resolve a DOI to PMID + PMCID via the NCBI ID Converter |
| lookup_s2_pdf | Check whether Semantic Scholar has an open-access PDF for a DOI |
| read_pmids_from_eml | Parse PMIDs from a PubMed alert .eml file |
| get_version | Return the installed biolit package version |

Use as a Python library

The pipeline functions are importable directly:

from biolit.pipeline import run, screen_paper, fetch_record
from biolit.llm import get_llm_client

client = get_llm_client("anthropic")

# Batch pipeline — PMIDs, DOIs, and GEO accessions can be mixed freely
# criterion and fields_description are optional; omit either to skip that step
# markdown=True writes results.md alongside the CSV
# Returns (csv_path, record_count)
csv_path, count = run(client, ids=["41627908", "GSE53987", "10.1101/2025.03.17.25324098"],
    criterion="...", fields_description="methodology, summary", output_path="results.csv",
    markdown=True)

# Fetch + write metadata only (no LLM calls)
csv_path, count = run(client, ids=["41627908", "GSE53987"])

# Fetch a single record (auto-detects PMID / DOI / GEO)
paper = fetch_record("10.1101/2025.03.17.25324098")

# Screen pre-fetched text
result = screen_paper(client, paper, "Is this about schizophrenia genomics?", paper["abstract"])
# {"relevant": True, "reason": "..."}

Custom full-text fetchers

The built-in chain (PMC → Europe PMC → preprint → Unpaywall → Semantic Scholar → abstract) leaves coverage gaps for closed-access or recently published work. You can plug in additional sources of full text (a Zotero library, a flat directory of PDFs, an institutional full-text database) without forking biolit.

Reference fetchers (opt-in via env vars)

Two ship with biolit and self-register on import when the relevant environment variables are set.

Zotero. Searches the user's Zotero library by DOI then PMID, resolves attachment search hits up to their parent items, finds an attached PDF, downloads it, and parses it with biolit's PDF parser. When the Zotero /file API endpoint returns 404 (linked_file attachments, or imported attachments on accounts without sync), falls back to reading the PDF from local Zotero storage at $ZOTERO_DATA_DIR/storage/<key>/<filename> (default data dir ~/Zotero).

export ZOTERO_API_KEY=...
export ZOTERO_USER_ID=...           # or ZOTERO_GROUP_ID for a group library
# Optional:
export ZOTERO_PRIORITY=5.0          # lower = tried earlier (default 5.0)
export ZOTERO_DATA_DIR=~/Zotero     # only needed if Zotero is not at ~/Zotero

Local PDF directory. Looks up papers by DOI in a pre-built JSON index. Filenames are arbitrary — DOIs are extracted from each PDF's /Info metadata dict and (failing that) its first-page text.

Build the index once (and rebuild whenever the directory contents change):

python -m biolit.fetchers.local_pdf --dir ~/Papers
python -m biolit.fetchers.local_pdf --dir ~/Papers --rebuild

Then point biolit at the same directory:

export BIOLIT_LOCAL_PDF_DIR=~/Papers
export BIOLIT_LOCAL_PDF_PRIORITY=3.0  # default 3.0

The fetcher itself never builds the index — it only consults it. PDFs without an extractable DOI are listed in the index's unindexed_sample for visibility.
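Consulting a pre-built index amounts to a single dict lookup. The sketch below is heavily hedged: the index schema shown (a by_doi mapping from lower-cased DOI to filename, stored next to the PDFs) is an assumption for illustration, and biolit's actual index layout may differ:

```python
import json
from pathlib import Path

def lookup_pdf(doi, index_path):
    """Return the path of the PDF indexed for this DOI, or None.
    Assumes an index of the form {"by_doi": {"<doi>": "<filename>"}}."""
    index = json.loads(Path(index_path).read_text())
    filename = index.get("by_doi", {}).get(doi.lower())
    return Path(index_path).parent / filename if filename else None
```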

When configured, the text_source field in CSV/markdown output is zotero_pdf or local_pdf for hits from these sources. The raw bytes are persisted to artifacts/<id>/zotero_pdf or artifacts/<id>/local_pdf, exactly like the built-in PMC/Europe PMC artifacts.

Writing your own fetcher

A fetcher is any callable that takes a FetchContext and returns either a FetchResult (when it found something) or None (when it didn't).

from biolit.fetchers import FetchContext, FetchResult, register_fetcher

def my_internal_db_fetcher(ctx: FetchContext) -> FetchResult | None:
    pmid = ctx.paper.get("pmid")
    if not pmid:
        return None
    text = my_db.lookup_fulltext(pmid)  # whatever you have
    if not text:
        return None
    return FetchResult(text=text, source="internal_db", artifacts={})

register_fetcher(my_internal_db_fetcher, priority=1.0, name="internal_db")

Register before the first call to run / screen_by_* (e.g. at module import time). Registered fetchers are tried before the built-in chain in priority order; the first one to return a non-empty FetchResult.text wins. Exceptions inside a fetcher are logged to stderr and the next fetcher is tried.
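The registration semantics above (lower priority tried first, exceptions logged to stderr, first non-empty text wins) can be sketched as a small registry. This mirrors the README's description but is not biolit's implementation; in particular, a plain dict stands in for FetchResult so the example stays self-contained:

```python
import sys

_fetchers = []

def register_fetcher(fn, priority=10.0, name=None):
    """Register a fetcher; lower priority values are tried earlier."""
    _fetchers.append((priority, name or getattr(fn, "__name__", "fetcher"), fn))
    _fetchers.sort(key=lambda entry: entry[0])

def run_fetchers(ctx):
    """Try fetchers in priority order; log failures and keep going."""
    for _priority, name, fn in _fetchers:
        try:
            result = fn(ctx)
        except Exception as exc:
            print(f"fetcher {name!r} failed: {exc}", file=sys.stderr)
            continue
        if result and result.get("text"):
            return result
    return None
```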

Validation

An independent evaluation of the GEO screening and metadata extraction workflow is available at rachadele/biolit-eval. It uses a bootstrap resampling pipeline to estimate precision, recall, and F1 against a manually curated ground truth of 509 GEO accessions labelled for transcription factor perturbation experiments.

Known Limitations

  • Papers without abstracts or accessible full text are skipped silently.
  • GEO records attempt full-text retrieval via linked PMIDs. text_source will be geo_linked_fulltext, geo_linked_abstract, or geo_record depending on what was accessible.
  • bioRxiv/medRxiv JATS XML is frequently blocked by Cloudflare regardless of headers. The pipeline falls back to the title and abstract from the bioRxiv API (text_source: preprint_abstract).
  • The Semantic Scholar API allows roughly 100 unauthenticated requests per day. Set SEMANTIC_SCHOLAR_API_KEY in .env for higher limits.
