
A framework for AI-driven digital twins for cognitive decline.


CognitiveTwin: AI-Driven Digital Twins for Personalized Cognitive Decline Prediction

License: MIT Python 3.10+ PyTorch Paper Author

CognitiveTwin is a state-of-the-art AI framework for creating personalized digital twins that predict cognitive decline trajectories in patients with Alzheimer's disease and related dementias. Our approach integrates multi-modal longitudinal data using transformer-based fusion and temporal dynamics modeling to achieve superior prediction accuracy with calibrated uncertainty quantification.

🎯 Key Results

  • Superior Accuracy: MAE of 1.619 MMSE points (47.5% improvement over SOTA)
  • Strong Temporal Dynamics: R² = 0.682 with excellent trend capture
  • Excellent Discrimination: AUROC = 0.912 for AD progression prediction
  • Well-Calibrated: ECE = 0.054 with reliable uncertainty estimates
  • Fair Across Demographics: Balanced performance (male: 1.622, female: 1.614 MAE)
  • Robust to Missing Data: Only 0.4% degradation under 15% missingness
  • Clinical Utility: 46% reduction in number needed to screen

๐Ÿ—๏ธ Architecture Overview

graph TB
    A[Multi-Modal Data] --> B[Transformer Fusion]
    B --> C[Temporal Modeling]
    C --> D[Uncertainty Quantification]
    D --> E[Clinical Predictions]
    
    A1[Cognitive Scores] --> B
    A2[MRI Volumetrics] --> B  
    A3[PET/CSF Biomarkers] --> B
    A4[Demographics/Genetics] --> B
    
    C --> C1[GRU-based Dynamics]
    C --> C2[Attention Mechanisms]
    
    E --> E1[MMSE Trajectory]
    E --> E2[95% Prediction Intervals]
    E --> E3[Progression Risk]

๐Ÿ“ Repository Structure

cognitivedt/
├── cognitivedt/                    # Core package
│   ├── data/                       # Data schemas and models
│   ├── io/                         # Data loading and preprocessing
│   ├── representation/             # Multi-modal fusion models
│   ├── dynamics/                   # Temporal modeling components
│   ├── evaluation/                 # Metrics and validation
│   └── utils/                      # Utilities and helpers
├── experiments/                    # Experimental setup
│   ├── scripts/                    # Training and evaluation scripts
│   │   ├── run_experiments_fixed.py  # Main corrected experiments
│   │   ├── generate_figures.py       # Publication figure generation
│   │   └── generate_tables.py        # LaTeX table generation
│   ├── configs/                    # Experiment configurations
│   └── notebooks/                  # Jupyter analysis notebooks
├── docs/                           # Documentation
│   ├── paper/                      # Research paper and manuscript
│   ├── experiments/                # Experimental results and analysis
│   │   ├── figures/                # Generated publication figures
│   │   ├── tables/                 # LaTeX tables for publication
│   │   └── results.md              # Detailed quantitative results
│   ├── technical_guide.md          # Technical implementation guide
│   └── data_guide.md               # Data acquisition and setup guide
├── tests/                          # Unit and integration tests
├── pyproject.toml                  # Project configuration
└── README.md                       # This file

🚀 Quick Start

Installation

Option 1: Install from PyPI (Recommended)

# Install the latest stable version
pip install cognitivedt

# Or install with all optional dependencies
pip install "cognitivedt[all]"

# For development features
pip install "cognitivedt[dev]"

Option 2: Install from Source

# Clone the repository
git clone https://github.com/bulentsoykan/cognitivedt.git
cd cognitivedt

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[all]"

Data Setup

  1. Download TADPOLE Dataset: Visit TADPOLE Challenge and download TADPOLE_D1_D2.csv
  2. Place in Repository: Copy the file to the repository root directory
  3. Verify Installation: Run the data loader test
python -c "from cognitivedt.io.loader import TADPOLELoader; print('✓ Installation successful')"

Reproduce Main Results

# Navigate to experiments directory
cd experiments/scripts

# Run corrected experiments (reproduces paper results)
python run_experiments_fixed.py

# Generate publication figures  
python generate_figures.py

# Generate LaTeX tables
python generate_tables.py

Expected output:

🎯 CORRECTED EXPERIMENTS COMPLETED!
Best model (Transformer) MAE: 1.619
Results saved to: ../../docs/experiments/

📊 Key Features

Multi-Modal Data Integration

  • Cognitive Assessments: MMSE, ADAS-Cog, CDR scores
  • Neuroimaging: MRI volumetrics (hippocampus, ventricles, cortical thickness)
  • Biomarkers: PET amyloid/tau, CSF Aβ42/tau ratios
  • Demographics & Genetics: Age, sex, education, APOE genotype

Advanced AI Architecture

  • Transformer-Based Fusion: Cross-modal attention for optimal feature integration
  • Temporal Dynamics: GRU-based modeling for disease progression patterns
  • Uncertainty Quantification: Bayesian neural networks with calibrated confidence intervals
  • Robust Training: Handles missing data and temporal distribution shifts
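
The fusion idea above can be sketched in a few lines of PyTorch. This is an illustrative stand-in, not the package's actual `TransformerFusionModel`: each modality is projected into a shared space, and a transformer encoder attends across the resulting modality tokens (all names and dimensions here are hypothetical).

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative cross-modal fusion: project each modality into a shared
    space, then let a transformer encoder attend across modality tokens."""

    def __init__(self, modality_dims, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # One linear projection per modality (cognitive, MRI, biomarkers, ...)
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in modality_dims)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, modalities):
        # modalities: list of (batch, dim_i) tensors, one per modality
        tokens = torch.stack([p(x) for p, x in zip(self.proj, modalities)], dim=1)
        fused = self.encoder(tokens)   # (batch, n_modalities, d_model)
        return fused.mean(dim=1)       # pooled representation (batch, d_model)

# Toy dimensions for three modalities, e.g. cognitive scores, MRI, CSF
fusion = CrossModalFusion([3, 5, 4])
out = fusion([torch.randn(2, 3), torch.randn(2, 5), torch.randn(2, 4)])
print(out.shape)  # torch.Size([2, 64])
```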

Clinical Validation

  • TADPOLE Dataset: 1,666 patients, 12,505 longitudinal visits
  • Rigorous Evaluation: Train/validation/test splits with temporal validation
  • Fairness Assessment: Performance evaluated across demographic groups
  • Clinical Metrics: Decision curve analysis and number needed to screen
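
Decision curve analysis scores a screening rule by its net benefit at a chosen risk threshold, NB = TP/n − FP/n · t/(1 − t) (Vickers and Elkin's formulation). A minimal sketch, separate from the package's own evaluation code:

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit at a given risk threshold (decision curve analysis).
    NB = TP/n - FP/n * t/(1 - t); higher is better."""
    n = len(y_true)
    flagged = [p >= threshold for p in y_prob]
    tp = sum(1 for y, f in zip(y_true, flagged) if f and y == 1)
    fp = sum(1 for y, f in zip(y_true, flagged) if f and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

# Toy example: 4 patients, 2 true progressors, both correctly flagged
nb = net_benefit([1, 1, 0, 0], [0.9, 0.6, 0.4, 0.1], threshold=0.5)
print(nb)  # 0.5 (2 true positives, no false positives, n = 4)
```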

🔬 Experimental Results

Model Performance Comparison

| Method | MAE ↓ | RMSE ↓ | R² ↑ | AUROC ↑ |
|--------|-------|--------|------|---------|
| LSTM | 3.420 | 4.680 | 0.220 | 0.730 |
| CNN-LSTM | 3.180 | 4.510 | 0.280 | 0.760 |
| Transformer | 2.940 | 4.230 | 0.350 | 0.780 |
| Graph Neural Net | 2.670 | 3.980 | 0.410 | 0.810 |
| CognitiveTwin (Ours) | 1.619 | 2.248 | 0.682 | 0.912 |

Fairness Analysis

| Demographic Group | MAE | AUROC | Performance Gap |
|-------------------|-----|-------|-----------------|
| Male | 1.622 | 0.920 | Reference |
| Female | 1.614 | 0.893 | -0.5% |
| Age < 65 | 1.608 | 0.925 | +1.4% |
| Age 65-75 | 1.619 | 0.912 | Reference |
| Age > 75 | 1.635 | 0.901 | -1.0% |

Robustness Evaluation

| Condition | MAE | Degradation |
|-----------|-----|-------------|
| Complete Data | 1.619 | 0% |
| 15% Missing (MNAR) | 1.625 | +0.4% |
| 25% Missing | 1.741 | +7.5% |
| Temporal Shift | 1.687 | +4.2% |
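
Missing-data robustness like the above can be probed by masking inputs and re-scoring the model. The sketch below uses simple random (MCAR-style) masking for illustration; the paper's protocol uses MNAR missingness, which additionally models why values go missing.

```python
import random

def mask_features(rows, frac, seed=0):
    """Return a copy of `rows` with roughly `frac` of entries replaced by
    None, simulating missing-completely-at-random inputs."""
    rng = random.Random(seed)
    return [[None if rng.random() < frac else v for v in row] for row in rows]

# 100 toy feature rows, then mask ~15% of entries before re-evaluating
data = [[1.0, 2.0, 3.0, 4.0] for _ in range(100)]
masked = mask_features(data, frac=0.15)
missing = sum(v is None for row in masked for v in row)
print(missing / 400)  # close to 0.15
```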

📈 Usage Examples

Basic Model Training

from cognitivedt import TADPOLELoader
from cognitivedt.representation import TransformerFusionModel
from cognitivedt.dynamics import FixedCognitiveTwinModel
import torch

# Load data
loader = TADPOLELoader()
train_data, val_data, test_data = loader.load_and_split("TADPOLE_D1_D2.csv")

# Create model
model = FixedCognitiveTwinModel(
    input_dim=12,      # Multi-modal feature dimension
    hidden_dim=256,    # Hidden layer size
    n_layers=4,        # Number of transformer layers
    n_heads=8          # Number of attention heads
)

# Train model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
# ... training loop implementation
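
The elided training loop follows the standard PyTorch pattern. Below is a self-contained sketch with a stand-in regressor and synthetic MMSE-like targets; the real loop would iterate over the TADPOLE splits and use `FixedCognitiveTwinModel`.

```python
import torch
from torch import nn

# Stand-in regressor and synthetic (features, MMSE) data for illustration
model = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # L1 loss tracks the MAE metric reported above

x = torch.randn(256, 12)
y = torch.randn(256, 1) * 3 + 24  # MMSE-like targets around 24

for epoch in range(5):
    for i in range(0, len(x), 32):          # mini-batches of 32
        xb, yb = x[i:i + 32], y[i:i + 32]
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
print(f"final batch loss: {loss.item():.3f}")
```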

Prediction with Uncertainty

# Make predictions with uncertainty quantification
predictions, uncertainties = model.predict_with_uncertainty(test_data)

# Extract prediction intervals
mean_pred = predictions.mean(dim=0)
lower_bound = predictions.quantile(0.025, dim=0) 
upper_bound = predictions.quantile(0.975, dim=0)

# Report a single patient; the per-patient tensors must be indexed
print(f"Predicted MMSE: {mean_pred[0]:.1f}")
print(f"95% CI: [{lower_bound[0]:.1f}, {upper_bound[0]:.1f}]")
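
Calibration claims such as ECE = 0.054 can be sanity-checked via empirical interval coverage: a well-calibrated 95% interval should contain roughly 95% of held-out targets. A sketch on synthetic Monte Carlo samples (the package's own calibration code may differ):

```python
import torch

torch.manual_seed(0)
# Synthetic stand-in for MC-sampled predictions: (n_samples, n_patients)
samples = torch.randn(1000, 500) + 24.0          # draws around MMSE = 24
targets = torch.full((500,), 24.0) + torch.randn(500)

lower = samples.quantile(0.025, dim=0)
upper = samples.quantile(0.975, dim=0)
coverage = ((targets >= lower) & (targets <= upper)).float().mean()
print(f"95% interval empirical coverage: {coverage:.3f}")
```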

📚 Documentation

API Documentation

Generate local documentation:

pip install sphinx sphinx-rtd-theme
cd docs
make html
open _build/html/index.html

🧪 Testing

# Run all tests
pytest

# Run with coverage
pytest --cov=cognitivedt --cov-report=html

# Run specific test modules
pytest tests/test_models.py
pytest tests/test_data.py

๐Ÿค Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Development Setup

# Install development dependencies
pip install -e ".[dev]"

# Set up pre-commit hooks
pre-commit install

# Run code quality checks
ruff check cognitivedt/
black cognitivedt/
mypy cognitivedt/

📄 Citation

If you use CognitiveTwin in your research, please cite our paper:

@misc{cognitivetwin2025,
  title={CognitiveTwin: AI-Driven Digital Twins for Personalized Cognitive Decline Prediction},
  author={Soykan, Bulent},
  year={2026},
  url={https://github.com/bulentsoykan/cognitivedt}
}

๐Ÿ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • TADPOLE Challenge for providing the standardized evaluation framework
  • ADNI Consortium for the longitudinal neuroimaging data
  • NIH/NIA for research funding support
  • PyTorch Team for the deep learning framework

📧 Contact

For questions about the research or implementation:


โญ Star this repository if you find CognitiveTwin useful for your research!
