
flexynesis

A deep-learning based multi-omics bulk sequencing data integration suite with a focus on (pre-)clinical endpoint prediction. The package includes multiple types of deep learning architectures, such as simple fully connected networks, supervised variational autoencoders, graph convolutional networks, and multi-triplet networks, along with different options for data-layer fusion, and it automates feature selection and hyperparameter optimisation. The tools are continuously benchmarked on publicly available datasets, mostly related to the study of cancer. Applications of the methods we develop include drug response modeling in cancer patients or preclinical models (such as cell lines and patient-derived xenografts), cancer subtype prediction, and any other clinically relevant outcome prediction that can be formulated as a regression, classification, survival, or cross-modality prediction problem.

(workflow diagram)

Citing our work

To refer to our work, please cite our manuscript, which is currently available on bioRxiv.

Getting started with Flexynesis

Command-line tutorial

Jupyter notebooks for interactive usage

Benchmarks

For the latest benchmark results see: https://bimsbstatic.mdc-berlin.de/akalin/buyar/flexynesis-benchmark-datasets/dashboard.html

The code for the benchmarking pipeline is at: https://github.com/BIMSBbioinfo/flexynesis-benchmarks

Defining a Kernel for the Jupyter Notebook

To use flexynesis interactively in Jupyter notebooks, you can define a kernel that makes flexynesis and its dependencies available in the Jupyter session.

Assuming you have already defined an environment and installed the package:

conda activate flexynesisenv 
python -m ipykernel install --user --name "flexynesisenv" --display-name "flexynesisenv"
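
If such an environment does not exist yet, a minimal sketch is shown below; the environment name matches the commands above, while the Python version and installing via pip are assumptions:

# Create and activate a dedicated conda environment (Python version is an assumption)
conda create -n flexynesisenv python=3.11
conda activate flexynesisenv
# Install flexynesis from PyPI together with ipykernel, which provides the kernel registration command
pip install flexynesis ipykernel
# After registering the kernel as shown above, confirm that it is listed
jupyter kernelspec list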

Compiling Notebooks

papermill can be used to compile the tutorials under examples/tutorials.

If the purpose is to quickly check whether the notebook can be run, set HPO_ITER to 1, which limits hyperparameter optimisation to a single step. For longer training runs that yield more meaningful results, increase this number, e.g. to 50.

Example:

papermill examples/tutorials/brca_subtypes.ipynb brca_subtypes.ipynb -p HPO_ITER 1 
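
For a longer run with more meaningful results (see above), the same command can be used with a higher value:

papermill examples/tutorials/brca_subtypes.ipynb brca_subtypes.ipynb -p HPO_ITER 50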

The output from papermill can be converted to an HTML file as follows:

jupyter nbconvert --to html brca_subtypes.ipynb 

Documentation

Documentation is generated using mkdocs:

pip install mkdocstrings[python]
mkdocs build --clean
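
To preview the built documentation locally, mkdocs' built-in development server can be used (assuming a standard mkdocs.yml in the repository root):

mkdocs serve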
