
Converting longitudinal patient data into text for LLM-based event prediction and forecasting.


TwinWeaver is a framework for LLM-based patient digital twins. It serializes longitudinal patient histories—including demographics, labs, treatments, and genetics—into a single, human-readable text prompt, enabling large language models (LLMs) to jointly forecast continuous biomarkers and predict discrete clinical events. The framework was first presented in our preprint "TwinWeaver: An LLM-Based Foundation Model Framework for Pan-Cancer Digital Twins".

This project is a collaboration between Roche and Helmholtz Munich, as part of the Munich School of Data Science (MUDS) program.

⚙️ Installation

Install from PyPI

To install the package:

pip install twinweaver

Requirements

  • Python 3.8 or higher
  • Core dependencies: pandas, numpy, transformers, scikit-learn

The sections below walk through the tutorials and examples, followed by a quick start guide.

📚 Documentation

Full documentation is available at https://mendenlab.github.io/TwinWeaver/.

💡 Tutorials & Examples

The examples/ directory provides comprehensive tutorials to help you get up and running.

🔰 Core Tutorials

These notebooks cover the primary workflows for most users:

  • 0. Raw Data Preprocessing: examples/data_preprocessing/raw_data_preprocessing.ipynb
    • Start here if you have raw clinical data (e.g., EHR exports). Shows how to transform raw data into the three TwinWeaver dataframes (df_events, df_constant, df_constant_description), including handling death events and other time-to-event outcomes.
  • 1. Basics Overview: examples/01_data_preparation_for_training.ipynb
    • Demonstrates how to convert raw patient data (events, constants, genetics) into the instruction-tuning text format used by TwinWeaver. This is the core step for preparing data for fine-tuning.
  • 2. Inference: examples/02_inference_prompt_preparation.ipynb
    • Shows how to run inference using the TwinWeaver framework, including setting up the data manager and generating prompts.
  • 3. End-to-End Workflow: examples/03_end_to_end_llm_finetuning.ipynb
    • A complete guide covering the entire pipeline from data ingestion to LLM fine-tuning.
    • NOTE: install the required packages with pip install twinweaver[fine-tuning-example] (the torch CUDA version may need to be adapted to your system)

🚀 Advanced Usage & Integrations

For users needing custom behavior or specific integrations:

🏗️ Framework Overview

TwinWeaver addresses the challenge of modeling sparse, multi-modal clinical time series by leveraging the generative capabilities of LLMs.

Core Components

  1. Text Serialization: Transforms multi-modal inputs (diagnoses, laboratory measurements, genetic mutation panels) into a structured textual representation of longitudinal patient trajectories.
  2. Unified Task Support:
    • Time-Series Forecasting: Forecasting frequently measured values such as blood biomarkers or vital signs.
    • Landmark Event Prediction: Predicting patient event status (e.g., survival, disease progression) at future time points using a landmarking framework.
  3. Flexible Horizon: Supports sampling split times and prediction horizons to avoid overfitting to specific canonical time points.
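To illustrate the serialization idea, a standalone sketch is shown below: event rows are rendered as chronological, human-readable lines. This is illustrative only; the actual prompt format is defined by TwinWeaver's converters and will differ.

```python
import pandas as pd

# Illustrative only: TwinWeaver's converters define the real prompt format.
def serialize_events(df_events: pd.DataFrame) -> str:
    """Render longitudinal events as chronological, human-readable lines."""
    lines = []
    for _, row in df_events.sort_values("date").iterrows():
        lines.append(
            f"{row['date'].date()}: {row['event_descriptive_name']} = {row['event_value']}"
        )
    return "\n".join(lines)

df_events = pd.DataFrame(
    {
        "patientid": ["p0", "p0"],
        "date": pd.to_datetime(["2020-01-05", "2020-01-01"]),
        "event_descriptive_name": ["Hemoglobin", "Line of therapy"],
        "event_value": ["13.2 g/dL", "1"],
    }
)
# Events appear oldest-first regardless of input order.
print(serialize_events(df_events))
```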

🚀 Quick Start

Here's a minimal example to get you started with TwinWeaver:

import pandas as pd

from twinweaver import (
    DataManager,
    Config,
    DataSplitterForecasting,
    DataSplitterEvents,
    ConverterInstruction,
    DataSplitter,
)

# Initialize config and set up splitting/prediction variables
config = Config()

# <---------------------- CRITICAL CONFIGURATION ---------------------->
# 1. Event category used for data splitting (e.g., split data around Lines of Therapy 'lot')
# Has to be set for all instruction tasks
config.split_event_category = "lot"

# 2. List of event categories we want to forecast (e.g., forecasting 'lab' values)
# Only needs to be set if you want to forecast variables
config.event_category_forecast = ["lab"]

# 3. Mapping of specific time to events to predict (e.g., we want to predict 'death' and 'progression')
# Only needs to be set if you want to do time to event prediction
config.data_splitter_events_variables_category_mapping = {
    "death": "death",
    "progression": "next progression",  # Custom name in prompt: "next progression" instead of "progression"
}

# Load your patient data <----- assuming your data is in df_events, df_constant and df_constant_description
dm = DataManager(config=config)
dm.load_indication_data(df_events=df_events, df_constant=df_constant, df_constant_description=df_constant_description)
dm.process_indication_data()
dm.setup_unique_mapping_of_events()
dm.setup_dataset_splits()
dm.infer_var_types()

# This data splitter handles event prediction tasks
data_splitter_events = DataSplitterEvents(dm, config=config)
data_splitter_events.setup_variables()

# This data splitter handles forecasting tasks
data_splitter_forecasting = DataSplitterForecasting(
    data_manager=dm,
    config=config,
)

# We will also use the easier interface that combines both data splitters
data_splitter = DataSplitter(data_splitter_events, data_splitter_forecasting)

# Set up the converter instruction
converter = ConverterInstruction(
    nr_tokens_budget_total=8192,
    config=config,
    dm=dm,
    variable_stats=data_splitter_forecasting.variable_stats,  # Optional, needed for forecasting QA tasks
)

patient_data = dm.get_patient_data("patient_id_0")  # <--- Set your patient id here

forecasting_splits, events_splits, reference_dates = data_splitter.get_splits_from_patient_with_target(patient_data)

split_idx = 0
training_data = converter.forward_conversion(
    forecasting_splits=forecasting_splits[split_idx],
    event_splits=events_splits[split_idx],
    override_mode_to_select_forecasting="both",
)

# training_data now contains (Input, Target) pairs ready for LLM fine-tuning

For complete tutorials, see the Tutorials & Examples section above, and the full documentation at https://mendenlab.github.io/TwinWeaver/.

📊 Dataset Format

TwinWeaver expects three primary dataframes (or CSV files) as input. Example files can be found in examples/example_data/.

1. Longitudinal Events (events.csv)

Contains time-varying clinical data where each row represents a single event.

  • patientid: Unique identifier for the patient
  • date: Date of the event (processable by pandas.to_datetime)
  • event_descriptive_name: Human-readable name used in the text output
  • event_category: (Optional) Category (e.g., lab, drug), used for determining splits & tasks
  • event_name: (Optional) Specific event identifier
  • event_value: Value associated with the event, used in text output
  • meta_data: (Optional) Additional metadata
  • source: (Optional) Modality of data — defaults to "events", alternatively "genetic"

2. Patient Constants (constant.csv)

Contains static patient information (demographics, baseline characteristics). One row per patient.

  • patientid: Unique identifier for the patient
  • e.g. birthyear: Patient's year of birth
  • e.g. gender: Patient's gender
  • ...: Any other static patient attributes

3. Constant Descriptions (constant_description.csv)

Maps columns in the constant table to human-readable descriptions for the text prompt.

  • variable: Name of the column in the constant table
  • comment: Description of the variable for the text prompt

Generally, we prefer to keep as much as possible in the long events table, and only put attributes into the constant table that cannot go anywhere else.
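As a minimal sketch of the three input tables, using illustrative column values consistent with the schema above:

```python
import pandas as pd

# Longitudinal events: one row per clinical event.
df_events = pd.DataFrame(
    {
        "patientid": ["patient_id_0", "patient_id_0"],
        "date": ["2021-03-01", "2021-03-15"],
        "event_descriptive_name": ["Hemoglobin", "Hemoglobin"],
        "event_category": ["lab", "lab"],
        "event_name": ["hgb", "hgb"],
        "event_value": [13.1, 12.4],
        "meta_data": [None, None],
        "source": ["events", "events"],
    }
)
df_events["date"] = pd.to_datetime(df_events["date"])  # must be parseable dates

# Static patient attributes: one row per patient.
df_constant = pd.DataFrame(
    {"patientid": ["patient_id_0"], "birthyear": [1960], "gender": ["F"]}
)

# Human-readable descriptions for each constant column.
df_constant_description = pd.DataFrame(
    {
        "variable": ["birthyear", "gender"],
        "comment": ["Patient's year of birth", "Patient's gender"],
    }
)
```

These three dataframes are what `DataManager.load_indication_data` expects in the quick start above.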

Further details at https://mendenlab.github.io/TwinWeaver/dataset-format/.

📂 Dataset Types: Instruction vs. Pretraining

TwinWeaver supports two primary data formats, each serving a distinct stage in the model training pipeline:

  1. Pretraining Data:

    • Purpose: Continued Pretraining (CPT) to adapt a general-purpose LLM to the clinical domain.
    • Format: A narrative-style serialization of the entire patient history. It does not contain specific questions or answers but rather presents the patient's chronological journey as a continuous text.
    • Goal: Enables the model to learn medical terminology, clinical relationships, and temporal dynamics in an unsupervised manner (next-token prediction).
    • Converter: twinweaver.pretrain.converter_manual_template.ConverterPretrain
  2. Instruction Data:

    • Purpose: Supervised Fine-Tuning (SFT) to teach the model to perform specific clinical tasks.
    • Format: Structured into "Input" (Prompt) and "Target" (Completion) pairs.
      • Input: Patient history up to a specific time point + a list of specific questions (e.g., "Forecast the next 3 weeks of hemoglobin values").
      • Target: The ground truth answers to those questions.
    • Goal: Optimizes the model for specific downstream applications like forecasting and risk stratification.
    • Converter: twinweaver.instruction.converter_manual_instruction.ConverterInstruction
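Conceptually, an instruction sample is an (Input, Target) pair. The schematic below is illustrative only; the real prompt text is produced by ConverterInstruction and differs in wording and structure.

```python
# Schematic instruction sample; values and phrasing are invented for illustration.
sample = {
    "Input": (
        "Patient history up to 2021-03-15:\n"
        "2021-03-01: Hemoglobin = 13.1\n"
        "2021-03-15: Hemoglobin = 12.4\n"
        "Question: Forecast the next 3 weeks of hemoglobin values."
    ),
    "Target": "2021-03-22: 12.1\n2021-03-29: 11.9\n2021-04-05: 11.8",
}
print(sample["Input"])
```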

Further details at https://mendenlab.github.io/TwinWeaver/framework/.

📝 Paper, Authors & Citation

The paper can be found on arXiv.

The core authors are: Nikita Makarov, Maria Bordukova, Lena Voith von Voithenberg, Estrella Villanueva Pivel, Sabrina Mielke, Jonathan Wickes, Hanchen Wang, Derek Ma, Keunwoo Choi, Kyunghyun Cho, Stephen Ra, Raul Rodriguez-Esteban, Fabian Schmich, Michael Menden

If you use the package, please cite

@misc{makarov2026twinweaver,
      title={TwinWeaver: An LLM-Based Foundation Model Framework for Pan-Cancer Digital Twins},
      author={Nikita Makarov and Maria Bordukova and Lena Voith von Voithenberg and Estrella Pivel-Villanueva and Sabrina Mielke and Jonathan Wickes and Hanchen Wang and Mingyu Derek Ma and Keunwoo Choi and Kyunghyun Cho and Stephen Ra and Raul Rodriguez-Esteban and Fabian Schmich and Michael Menden},
      year={2026},
      eprint={2601.20906},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2601.20906},
}

The logo was generated with Nano Banana Pro.

For questions or issues, please raise a GitHub issue or contact nikita.makarov@roche.com or michael.menden@unimelb.edu.au.

🧞🧞 Genie Digital Twin (GDT)

Note: The specific implementation, training, and evaluation code for the GDT model mentioned in the TwinWeaver paper is located in MendenLab/GDT.

GDT is a pan-cancer model instantiated using TwinWeaver, trained on over 93,000 patients across 20 cancer types.

Performance

GDT significantly reduces forecasting error, achieving a median Mean Absolute Scaled Error (MASE) of 0.87 compared to 0.97 for strong time-series baselines. Furthermore, it improves risk stratification, achieving an average C-index of 0.703 across survival, progression, and therapy switching tasks. GDT also demonstrates capabilities in zero-shot generalization to out-of-distribution clinical trials and supports an interpretable clinical reasoning extension.
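For reference, MASE scales a model's mean absolute forecast error by the in-sample error of a naive last-value forecast, so values below 1 indicate an improvement over that baseline. A minimal sketch (not the paper's evaluation code):

```python
import numpy as np

def mase(y_true, y_pred, y_train):
    """Mean Absolute Scaled Error: MAE of the forecast divided by the
    in-sample MAE of a naive one-step (last-value) forecast."""
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    mae = np.mean(np.abs(y_true - y_pred))
    naive_mae = np.mean(np.abs(np.diff(y_train)))  # error of repeating the last value
    return mae / naive_mae

# A forecast tracking the series more closely than the naive baseline scores < 1.
print(mase([12.0, 11.8], [12.1, 11.9], [13.1, 12.8, 12.4]))
```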

🧪 Testing

To run the test suite:

pip install pytest pytest-cov
pytest tests/

📜 License

TwinWeaver is licensed under the Apache License 2.0. See LICENSE for details.

🤝 Contributing

We welcome contributions to TwinWeaver! Please follow these steps to contribute.

Development Setup

  1. Clone the repository and install dependencies:

    git clone https://github.com/MendenLab/TwinWeaver
    cd TwinWeaver
    pip install -e .
    pip install -r examples/requirements.txt
    pip install pre-commit pytest pytest-cov
    pip install -r docs/requirements.txt
    
  2. Install pre-commit hooks: We use pre-commit to ensure code formatting and quality checks run before you commit.

    pre-commit install
    

Running Tests

We use pytest for testing. To run the full test suite:

pytest tests/

Building Documentation

The documentation is built with mkdocs. To preview it locally:

mkdocs serve

Contribution Workflow

  1. Create a New Branch: Always create a new branch for your feature or fix.
    git checkout -b feature/my-new-feature
    
  2. Make Changes: Implement your feature or fix.
  3. Run Tests & Linting: Ensure your code passes all tests and pre-commit hooks.
  4. Submit a Merge Request:
    • Push your branch to the repository.
    • Open a Merge Request (Pull Request) against the main branch.
    • Describe your changes clearly in the MR description.
