
metaScreener

A plugin-based desktop application for human-in-the-loop systematic literature screening.



Overview

metaScreener is an open-source, cross-platform desktop application that automates citation screening for systematic literature reviews. It combines deterministic heuristic-based filters with large language model (LLM) inference in a sequential, auditable pipeline — all through a graphical interface that requires no programming expertise.

The software is designed around three principles:

  • GUI-first: every function is accessible through a graphical interface built on Python/Tkinter — no command-line interaction, no scripts, no API knowledge required.
  • Bundle pipeline: each plugin stage consumes a ZIP archive produced by the preceding stage and emits a new archive containing the full accumulated state, ensuring that every intermediate decision is preserved and portable.
  • Human-in-the-loop: no record is silently excluded. Records for which automated decisions cannot be grounded in sufficient evidence are routed to an explicit human review queue.
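
The bundle hand-off between stages can be sketched as a copy-and-append operation over ZIP archives. This is a minimal illustration of the principle, not the application's actual API — `run_stage` and the file names are hypothetical:

```python
import json
import zipfile

def run_stage(in_bundle: str, out_bundle: str, stage_name: str, report_rows: str) -> None:
    """Copy every entry from the incoming bundle, then append this stage's report."""
    with zipfile.ZipFile(in_bundle) as src, zipfile.ZipFile(out_bundle, "w") as dst:
        for entry in src.namelist():           # carry the full accumulated state forward
            dst.writestr(entry, src.read(entry))
        dst.writestr(f"reports/{stage_name}.csv", report_rows)  # this stage's new evidence

# Build a minimal input bundle, then run a hypothetical stage over it.
with zipfile.ZipFile("stage0.zip", "w") as z:
    z.writestr("manifest.json", json.dumps({"stage": 0}))
    z.writestr("data/current.csv", "id,title\n1,Example record\n")

run_stage("stage0.zip", "stage1.zip", "EH", "id,decision\n1,KEEP\n")

with zipfile.ZipFile("stage1.zip") as z:
    names = sorted(z.namelist())
print(names)
```

Because each output archive contains everything its predecessor did, any intermediate bundle can be re-opened or shared on its own.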

In a demonstration use case comprising 776 candidate records, the pipeline reduced the corpus to 73 records requiring full human review — a 90.6% reduction — with deterministic pre-filtering accounting for 98.3% of exclusions.


Pipeline architecture

metaScreener organises its screening workflow into seven plugins across four functional groups:

Corpus ingestion

| # | Plugin | Description | Method |
|---|--------|-------------|--------|
| 01 | Citations AI | Extracts citation records from a PRISMA flow diagram image (PDF/PNG) | GPT-4o vision API |
| 02 | References-of-X AI | Resolves and enriches bibliographic references via federated queries | OpenAlex, Crossref, Semantic Scholar |

Criteria structuring

| # | Plugin | Description | Method |
|---|--------|-------------|--------|
| 03 | Criteria Parser | Converts free-text inclusion/exclusion criteria into a structured, machine-executable criteria table (criteria_harmonized.csv) | Rule-based inference + optional LLM refinement |

The Criteria Parser accepts plain-text criteria (e.g., ic_ec_12.txt) and automatically assigns each criterion to the appropriate pipeline stage (EH/IH for deterministic rules, EL/IL for semantic rules) based on six pattern categories: language, year, document type, venue, DOI, and keyword-in-text. An optional LLM refinement pass adjusts the assignments under structural guardrails (row-count and identifier invariance). The harmonized output should always be reviewed by the researcher before proceeding.
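
The stage-routing logic might be sketched as follows. The pattern table and `assign_stage` below are illustrative assumptions, not the parser's actual implementation — only the six category names and the EH/IH vs. EL/IL split come from the description above:

```python
import re

# Hypothetical pattern table: criteria matching one of the six deterministic
# categories are routed to the heuristic stages (EH/IH); everything else
# falls through to the semantic LLM stages (EL/IL).
DETERMINISTIC_PATTERNS = {
    "language":      re.compile(r"\b(english|language)\b", re.I),
    "year":          re.compile(r"\b(19|20)\d{2}\b"),
    "document_type": re.compile(r"\b(review|editorial|thesis|preprint)\b", re.I),
    "venue":         re.compile(r"\b(journal|conference|proceedings)\b", re.I),
    "doi":           re.compile(r"\bdoi\b", re.I),
    "keyword":       re.compile(r"\b(title|abstract) contains\b", re.I),
}

def assign_stage(criterion_id: str, text: str) -> str:
    """Map a criterion to EH/IH (deterministic) or EL/IL (semantic)."""
    exclusion = criterion_id.startswith("EC")
    deterministic = any(p.search(text) for p in DETERMINISTIC_PATTERNS.values())
    if deterministic:
        return "EH" if exclusion else "IH"
    return "EL" if exclusion else "IL"

print(assign_stage("EC-1", "Exclude records published before 2015"))        # year -> EH
print(assign_stage("IC-2", "Study reports measurable training outcomes"))   # semantic -> IL
```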

Deterministic heuristic-based filtering

| # | Plugin | Description | Method |
|---|--------|-------------|--------|
| 04 | EH (Exclusion by Heuristic) | Removes records matching any exclusion criterion at title/abstract level | Keyword / regex matching |
| 05 | IH (Inclusion by Heuristic) | Retains only records matching at least one inclusion criterion | Keyword / regex matching |

These stages execute without LLM inference, incur no token cost, and add negligible latency. They are designed to handle the bulk of exclusions before records reach the LLM stages.
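
A sketch of what keyword/regex exclusion at title/abstract level amounts to — `eh_filter`, the patterns, and the records are hypothetical illustrations, not the plugin's code:

```python
import re

def eh_filter(records, exclusion_patterns):
    """Split records into kept/excluded; any matching exclusion pattern removes the record."""
    kept, excluded = [], []
    for rec in records:
        text = f"{rec['title']} {rec['abstract']}".lower()
        # Record the first criterion that fired, so the decision stays auditable.
        hit = next((cid for cid, pat in exclusion_patterns.items() if pat.search(text)), None)
        (excluded if hit else kept).append({**rec, "matched": hit})
    return kept, excluded

patterns = {
    "EC-1": re.compile(r"\bprotocol\b"),
    "EC-2": re.compile(r"\banimal (study|model)\b"),
}
records = [
    {"title": "VR training outcomes", "abstract": "A workplace trial."},
    {"title": "Study protocol for a VR trial", "abstract": "Protocol paper."},
]
kept, excluded = eh_filter(records, patterns)
print(len(kept), len(excluded))  # 1 1
```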

LLM-assisted filtering

| # | Plugin | Description | Method |
|---|--------|-------------|--------|
| 06 | EL (Exclusion by LLM) | Applies LLM-based eligibility adjudication against exclusion criteria over full record text | OpenAI-compatible endpoint, T=0.0 |
| 07 | IL (Inclusion by LLM) | Applies LLM-based eligibility adjudication against inclusion criteria over full record text | OpenAI-compatible endpoint, T=0.0 |

Both LLM stages implement evidence gating: a screening decision is accepted only when the model provides (1) a confidence score meeting or exceeding a configurable threshold (default 0.6) and (2) a verbatim quotation verifiable as a substring of the source record. Records failing either condition receive a PASS_FLAGGED outcome and are routed to the human review queue. All LLM responses are persisted in a local cache keyed by content hash, enabling exact re-runs without additional API cost.
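
The two gating conditions and the content-hash cache key can be sketched as below. This is a minimal illustration under stated assumptions: `gate`, `cache_key`, and the exact fields hashed into the key are hypothetical, not the application's internals.

```python
import hashlib
import json

CONF_THRESHOLD = 0.6  # configurable; 0.6 is the documented default

def gate(decision: dict, record_text: str) -> str:
    """Accept an LLM verdict only if (1) confidence meets the threshold and
    (2) the quoted evidence is a verbatim substring of the record."""
    confident = decision.get("confidence", 0.0) >= CONF_THRESHOLD
    grounded = bool(decision.get("quote")) and decision["quote"] in record_text
    return decision["verdict"] if confident and grounded else "PASS_FLAGGED"

def cache_key(record_text: str, criteria_hash: str, model: str) -> str:
    """Content-hash key: identical inputs reuse the stored response, so exact
    re-runs cost no additional API calls."""
    payload = json.dumps([record_text, criteria_hash, model], sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

record = "We evaluate head-mounted displays for forklift-operator training."
print(gate({"verdict": "EXCLUDE", "confidence": 0.9,
            "quote": "forklift-operator training"}, record))   # EXCLUDE
print(gate({"verdict": "EXCLUDE", "confidence": 0.9,
            "quote": "a paraphrased claim"}, record))          # PASS_FLAGGED
```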


Bundle format and audit trail

Each plugin produces a bundle ZIP archive containing:

  • manifest.json — pipeline configuration (criteria file hash, prompt version, model ID, UTC timestamp)
  • data/current.csv — the canonical citation table at the current stage
  • criteria/criteria_harmonized.csv — the machine-executable criteria specification
  • reports/ — per-stage decision reports with full evidence trails
  • cache/ — JSONL caches of LLM responses (one file per stage)

Bundles are integrity-verified using SHA-256 hashes at ingestion and export. Any modification to the record set or configuration between stages is detectable.
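
The check amounts to hashing the raw bundle bytes; a minimal sketch (`bundle_digest` and the file name are hypothetical):

```python
import hashlib
from pathlib import Path

def bundle_digest(path: str) -> str:
    """SHA-256 over the raw bundle bytes, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

Path("bundle.zip").write_bytes(b"demo bytes")
before = bundle_digest("bundle.zip")
Path("bundle.zip").write_bytes(b"demo bytes!")  # any modification...
after = bundle_digest("bundle.zip")
print(before != after)  # ...changes the digest, so tampering is detectable
```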


Installation

Prerequisites

  • Python 3.10 or later (with Tkinter — included by default on Windows and macOS; on Linux, install python3-tk)
  • An OpenAI API key (required for Plugins 01, 03, 06, 07; not required for Plugins 02, 04, 05)

Windows

# Clone the repository
git clone https://github.com/lars-ulaval/metaScreener.git
cd metaScreener

# Create and activate a virtual environment
python -m venv .venv
.\.venv\Scripts\Activate.ps1

# Install dependencies
pip install -r requirements.txt

# Configure your API key
copy .env.example .env
# Edit .env and add your OpenAI API key

# Run
python run.py

macOS

# Clone the repository
git clone https://github.com/lars-ulaval/metaScreener.git
cd metaScreener

# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Configure your API key
cp .env.example .env
# Edit .env and add your OpenAI API key

# Run
python run.py

Linux (Ubuntu/Debian)

# Ensure Tkinter is available
sudo apt-get install python3-tk

# Clone the repository
git clone https://github.com/lars-ulaval/metaScreener.git
cd metaScreener

# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Configure your API key
cp .env.example .env
# Edit .env and add your OpenAI API key

# Run
python run.py

Note on Tesseract: Plugin 01 (Citations AI) can optionally use Tesseract OCR for fallback text extraction. If needed, install Tesseract separately for your platform and ensure tesseract is on your PATH.


Quick start

  1. Launch the application with python run.py. You will be prompted for your OpenAI API key.

  2. Prepare your inputs:

    • A criteria file in plain text (see docs_/samples/ic_ec_12.txt for format — one criterion per line with IC-N / EC-N identifiers)
    • A citation corpus as an aggregate CSV (see docs_/samples/20260122_1654_aggregate.csv for the expected schema)
    • Or, if starting from scratch, a PRISMA flow diagram PDF for Plugin 01
  3. Run the pipeline sequentially through the tabs:

    • Tab 1 (Citations AI): supply a PDF, extract references
    • Tab 2 (References-of-X AI): resolve and enrich extracted references
    • Tab 3 (Criteria Parser): load criteria + aggregate CSV, review the harmonized output, export a bundle ZIP
    • Tab 4 (EH): load the bundle, run exclusion by heuristic
    • Tab 5 (IH): load the EH output bundle, run inclusion by heuristic
    • Tab 6 (EL): load the IH output bundle, run LLM exclusion
    • Tab 7 (IL): load the EL output bundle, run LLM inclusion
  4. Review results: the final bundle ZIP contains reports/IL_FULL.csv with every record and its per-criterion decision evidence, and reports/IL_SURVIVORS.csv with the final included set.
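
For reference, the criteria file in step 2 is plain text with one criterion per line. The lines below are an invented illustration of the format, not the contents of docs_/samples/ic_ec_12.txt:

```
IC-1: Study uses a head-mounted display (HMD) for workplace training
IC-2: Study reports measurable training outcomes
EC-1: Published before 2015
EC-2: Not available in English
```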


Sample data

The docs_/samples/ directory contains minimal sample inputs for testing:

| File | Description |
|------|-------------|
| ic_ec_12.txt | Sample inclusion/exclusion criteria (4 IC + 4 EC) for a VR/HMD workplace training review |
| 20260122_1654_aggregate.csv | Sample aggregate citation corpus (776 records) with structured metadata fields |
| ex_ref_2.txt | Sample free-text reference list for Plugin 02 |

Dependencies

| Package | Role | Stage(s) |
|---------|------|----------|
| openai (≥1.40.0) | LLM API client | 01, 03, 06, 07 |
| pymupdf | PDF parsing and image extraction | 01 |
| pillow | Image processing | 01 |
| pytesseract | OCR fallback (optional) | 01 |
| rapidfuzz | Fuzzy title matching for reference resolution | 02 |
| requests | HTTP client for bibliographic API queries | 02 |
| pandas | CSV/XLSX data handling | 02, 03 |
| openpyxl | Excel file support | 03 |
| langdetect | Language detection | 04, 05 |

All dependencies are listed in requirements.txt.


Platform compatibility

| Platform | Status | Notes |
|----------|--------|-------|
| Windows 10+ | ✅ Developed and tested | Primary development platform |
| macOS 12+ | 🔄 In progress | Tkinter is included with Python on macOS; testing underway |
| Linux (Ubuntu 24.04) | 🔄 In progress | Requires python3-tk package; testing underway |

The application is pure Python with no compiled extensions. It is expected to work on any platform supporting Python 3.10+ and Tkinter. Cross-platform validation is currently being conducted and will be documented here upon completion.


Testing

Automated test coverage is currently being developed. The test suite will cover:

  • Criteria Parser: free-text parsing, operator/stage inference, guardrail enforcement
  • EH/IH: deterministic filtering against known input bundles with expected decision outcomes
  • EL/IL: bundle integrity verification, cache key construction, evidence gating logic
  • End-to-end: full pipeline execution on the sample corpus with output comparison against reference bundles

In the meantime, the pipeline can be validated manually by running it on the sample data provided in docs_/samples/ and comparing the resulting bundle contents against the screening funnel documented in the paper.

Status: 🔄 Test suite in development. Contributions welcome.


Configuration

Environment variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| OPENAI_API_KEY | Yes (for LLM stages) | — | Your OpenAI API key |
| SCREENA_EL_MODEL | No | gpt-4o-mini | Model identifier for the EL stage |
| SCREENA_EL_TRUNC_CHARS | No | 1500 | Maximum characters per field sent to the LLM |
| SCREENA_EL_BATCH_SIZE | No | 50 | Number of records per LLM API call |
| SCREENA_EL_USE_CACHE | No | 1 | Enable (1) or disable (0) the persistent decision cache |

Copy .env.example to .env and set your API key. The application will prompt for confirmation on each launch.
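
A minimal .env using the variables above might look like this (the key value is a placeholder):

```
OPENAI_API_KEY=sk-your-key-here
SCREENA_EL_MODEL=gpt-4o-mini
SCREENA_EL_BATCH_SIZE=50
SCREENA_EL_USE_CACHE=1
```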

LLM endpoint compatibility

metaScreener targets any OpenAI-compatible API endpoint. This includes:

  • OpenAI (GPT-4o, GPT-4o-mini, etc.)
  • Azure OpenAI
  • Locally hosted models via compatible inference frameworks (e.g., Ollama, LM Studio, vLLM)

Note: open-weight model compatibility with the evidence gating protocol (which requires models to produce verbatim substring quotations) has not been formally tested. If you test with a local model, we welcome your feedback via the issue tracker.


Project structure

metaScreener/
├── run.py                       # Application entry point
├── prisma_hub/
│   ├── main.py                  # Main window and tab orchestration
│   ├── plugin_api.py            # BasePlugin / PluginMeta contract
│   └── plugin_manager.py        # Dynamic plugin discovery and loading
├── plugins/
│   ├── 01_prisma_citations_ai_v3_1/   # Plugin 01: Citations AI
│   ├── 02_references_of_x/            # Plugin 02: References-of-X AI
│   ├── 03_harmoniser/                 # Plugin 03: Criteria Parser
│   ├── 04_eh/                         # Plugin 04: EH (Exclusion by Heuristic)
│   ├── 05_ih/                         # Plugin 05: IH (Inclusion by Heuristic)
│   ├── 06_el/                         # Plugin 06: EL (Exclusion by LLM)
│   └── 07_il/                         # Plugin 07: IL (Inclusion by LLM)
├── docs_/
│   └── samples/                 # Sample input files
├── requirements.txt
├── .env.example
└── LICENSE                      # MIT License

Extending metaScreener

metaScreener's plugin architecture is designed for extensibility. To create a new plugin:

  1. Create a new directory under plugins/ (e.g., plugins/08_my_plugin/)
  2. Add a plugin.py file that either:
    • Defines a build_tab(parent) function returning a tk.Frame, or
    • Defines a class inheriting from BasePlugin with a build_tab(self, parent) method
  3. Set TAB_TITLE = "My Plugin" at the module level
  4. The plugin manager will automatically discover and load it on the next launch

Plugins communicate exclusively through bundle ZIP files — there is no shared state or database. Each plugin reads a bundle, processes it, and emits a new bundle.
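
Following the contract above, a minimal plugin.py could look like this (the widget contents are illustrative; only `build_tab(parent)` returning a tk.Frame and the module-level `TAB_TITLE` are part of the documented contract):

```python
# plugins/08_my_plugin/plugin.py
import tkinter as tk

TAB_TITLE = "My Plugin"  # label the plugin manager uses for the tab

def build_tab(parent):
    """Return the tk.Frame that the host application embeds as this plugin's tab."""
    frame = tk.Frame(parent)
    tk.Label(frame, text="Hello from My Plugin").pack(padx=8, pady=8)
    return frame
```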


Citation

If you use metaScreener in your research, please cite:

@article{reyesconsuelo2026metascreener,
  author    = {Reyes-Consuelo, Alejandro and Kiss, Jocelyne and Voisin, Julien},
  title     = {metaScreener: A Plugin-Based Desktop Application for Human-in-the-Loop Systematic Literature Screening},
  journal   = {Journal of Open Research Software},
  year      = {2026},
  note      = {Submitted},
  doi       = {10.5281/zenodo.19360125}
}

Contributing

Contributions are welcome. To contribute:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/my-improvement)
  3. Commit your changes
  4. Push to the branch and open a pull request

Please ensure your code follows the existing style. For bug reports and feature requests, use the issue tracker.


License

metaScreener is released under the MIT License.


Acknowledgements

This work is supported by the Center of Interdisciplinary Research in Rehabilitation and Social Integration (CIRRIS), Laval University, Québec, Canada, and the International Observatory on the Societal Impacts of AI and Digital Technologies (OBVIA).


Developed by LARS — Laboratoire d'automatisation des recherches situées, Laval University
