# lyscripts

Package containing scripts used in lynference pipelines.

## What are these lyscripts?
This package provides convenient scripts for performing inference and learning regarding the lymphatic spread of head & neck cancer. Essentially, it provides a command line interface (CLI) to the `lymph` library.

We are making these "convenience" scripts public because doing so is one necessary requirement for making our research easily and fully reproducible. There is another repository, lynference, where we store the pipelines that produce(d) our published results in a persistent way. Head over there to learn more about how to reproduce our work.
## Installation

These scripts can be installed via pip:

```shell
pip install lyscripts
```

or installed from source by cloning this repo:

```shell
git clone https://github.com/rmnldwg/lyscripts.git
cd lyscripts
pip install .
```
## Usage

After installing the package, run `lyscripts --help` to see the following output:
```
USAGE: lyscripts [-h] [-v]
                 {app,data,evaluate,plot,predict,sample,temp_schedule} ...

Utility for performing common tasks w.r.t. the inference and prediction tasks one
can use the `lymph` package for.

POSITIONAL ARGUMENTS:
  {app,data,evaluate,plot,predict,sample,temp_schedule}
    app                 Module containing scripts to run different `streamlit`
                        applications.
    data                Provide a range of commands related to datasets on
                        patterns of lymphatic progression. Currently, the
                        following modules provide additional commands: 1. The
                        `lyscripts.data.clean` module that converts a
                        LyProX-style table of patient information into a
                        simplified format that is used by the `lymph` model.
                        2. `lyscripts.data.enhance`, a module for computing
                        consensus diagnoses and ensuring that super- and
                        sublevels are consistently reported. 3. The module
                        `lyscripts.data.generate` for creating synthetic
                        datasets with certain characteristics. 4. Submodule
                        `lyscripts.data.join` to concatenate two datasets,
                        e.g. from different institutions. 5.
                        `lyscripts.data.split`, a module with which datasets
                        may be split into random sets of patient data. The
                        split data may then be used e.g. for cross-validation.
    evaluate            Evaluate the performance of the trained model by
                        computing quantities like the Bayesian information
                        criterion (BIC) or (if thermodynamic integration was
                        performed) the actual evidence (with error) of the
                        model.
    plot                Provide various plotting utilities for displaying
                        results of e.g. the inference or prediction process.
                        At the moment, three subcommands are grouped under
                        `lyscripts.plot`: 1. `lyscripts.plot.corner`, which
                        simply outputs a corner plot with nice labels for a
                        set of drawn samples. 2. The module
                        `lyscripts.plot.histograms` can be used to draw
                        histograms, e.g. the ones over risks and prevalences
                        as computed by the `lyscripts.predict` module. 3.
                        Module `lyscripts.plot.thermo_int` allows comparing
                        rounds of thermodynamic integration for different
                        models.
    predict             This module provides functions and scripts to predict
                        the risk of hidden involvement, given observed
                        diagnoses, and prevalences of patterns for diagnostic
                        modalities. The submodules for prediction are
                        currently: 1. The `lyscripts.predict.prevalences`
                        module for computing prevalences of certain
                        involvement patterns that may also be compared to
                        observed prevalences. 2. A module
                        `lyscripts.predict.risks` for predicting the risk of
                        any specific pattern of involvement given any
                        particular diagnosis.
    sample              Learn the spread probabilities of the HMM for
                        lymphatic tumor progression using the preprocessed
                        data as input and MCMC as sampling method. This is
                        the central script of our project on modelling
                        lymphatic spread in head & neck cancer. We use it for
                        model comparison via the thermodynamic integration
                        functionality and use the sampled parameter estimates
                        for risk predictions. This risk estimate may in turn
                        some day guide clinicians to make more objective
                        decisions with respect to defining the *elective
                        clinical target volume* (CTV-N) in radiotherapy.
    temp_schedule       Generate inverse temperature schedules for
                        thermodynamic integration using various different
                        methods. Thermodynamic integration is quite sensitive
                        to the specific schedule that is used. I noticed in
                        my models that, within the interval $[0, 0.1]$, the
                        increase in the expected log-likelihood is very
                        steep. Hence, the inverse temperature $\beta$ must be
                        more densely spaced in the beginning. This can be
                        achieved by using a power sequence: generate $n$
                        linearly spaced points in the interval $[0, 1]$ and
                        then transform each point by computing $\beta_i^k$,
                        where $k$ could e.g. be 5.

OPTIONAL ARGUMENTS:
  -h, --help            show this help message and exit
  -v, --version         Display the version of lyscripts (default: False)
```
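The power sequence described under `temp_schedule` can be sketched in a few lines. The function below is illustrative only and not part of the `lyscripts` API:

```python
import numpy as np

def power_schedule(n: int, k: float = 5.0) -> np.ndarray:
    """Sketch of an inverse temperature schedule for thermodynamic integration.

    Generates `n` linearly spaced points in [0, 1] and raises each to the
    power `k`, which clusters the resulting betas near 0, where the expected
    log-likelihood changes most steeply.
    """
    return np.linspace(0.0, 1.0, n) ** k

# 11 inverse temperatures: the endpoints stay at 0 and 1, while the
# interior points crowd toward 0
schedule = power_schedule(n=11, k=5.0)
```

With `k = 1` this reduces to an ordinary linear schedule; larger `k` shifts more of the schedule's resolution toward $\beta = 0$.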
Each of the individual subcommands provides a similar help page detailing its positional and optional arguments along with their functions.
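As a reminder of what the `evaluate` subcommand reports: the BIC is the standard quantity $\mathrm{BIC} = k \ln n - 2 \ln \hat{L}$, where $k$ is the number of model parameters, $n$ the number of data points, and $\hat{L}$ the maximized likelihood. A minimal sketch (function name and signature are illustrative, not the `lyscripts` API):

```python
import math

def bic(max_log_likelihood: float, num_params: int, num_data: int) -> float:
    """Bayesian information criterion: k * ln(n) - 2 * ln(L_hat)."""
    return num_params * math.log(num_data) - 2.0 * max_log_likelihood

# e.g., a model with 2 parameters, 100 patients, and a maximum
# log-likelihood of -120.0 (all numbers made up for illustration)
value = bic(-120.0, num_params=2, num_data=100)
```

Lower BIC is better; the $k \ln n$ term penalizes model complexity, so it is useful for comparing models with different numbers of spread parameters.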