
Suite of tools for analysing the independence between training and evaluation biosequence datasets and for generating new generalisation-evaluating hold-out partitions

Project description

Hestia

Computational tool for generating generalisation-evaluating evaluation sets.



Installation

Installing in a conda environment is recommended. To create the environment, run:

conda create -n hestia python
conda activate hestia

1. Python Package

1.1. From PyPI

pip install hestia-ood

1.2. Directly from source

pip install git+https://github.com/IBM/Hestia-OOD

2. Optional dependencies

2.1. Molecular similarity

RDKit is required for calculating molecular similarities:

pip install rdkit
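
As a quick check that the optional dependency works, the following standalone snippet computes a fingerprint-based Tanimoto similarity between two molecules directly with RDKit. It illustrates the kind of molecular similarity involved; it is plain RDKit usage, not the Hestia API:

from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs

# Two example molecules defined by their SMILES strings
aspirin = Chem.MolFromSmiles('CC(=O)OC1=CC=CC=C1C(=O)O')
caffeine = Chem.MolFromSmiles('CN1C=NC2=C1C(=O)N(C(=O)N2C)C')

# Morgan (ECFP-like) fingerprints with radius 2 and 2048 bits
fp_a = AllChem.GetMorganFingerprintAsBitVect(aspirin, 2, nBits=2048)
fp_b = AllChem.GetMorganFingerprintAsBitVect(caffeine, 2, nBits=2048)

# Tanimoto similarity in [0, 1]
print(DataStructs.TanimotoSimilarity(fp_a, fp_b))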

2.2. Sequence alignment

To use MMseqs2 for sequence alignment, download one of the static builds below (Linux) or install it with Homebrew (macOS):

# static build with AVX2 (fastest) (check using: cat /proc/cpuinfo | grep avx2)
wget https://mmseqs.com/latest/mmseqs-linux-avx2.tar.gz; tar xvfz mmseqs-linux-avx2.tar.gz; export PATH=$(pwd)/mmseqs/bin/:$PATH

# static build with SSE4.1  (check using: cat /proc/cpuinfo | grep sse4)
wget https://mmseqs.com/latest/mmseqs-linux-sse41.tar.gz; tar xvfz mmseqs-linux-sse41.tar.gz; export PATH=$(pwd)/mmseqs/bin/:$PATH

# static build with SSE2 (slowest, for very old systems)  (check using: cat /proc/cpuinfo | grep sse2)
wget https://mmseqs.com/latest/mmseqs-linux-sse2.tar.gz; tar xvfz mmseqs-linux-sse2.tar.gz; export PATH=$(pwd)/mmseqs/bin/:$PATH

# macOS
brew install mmseqs2  

To use Needleman-Wunsch alignment, install EMBOSS either with:

conda install -c bioconda emboss

or

sudo apt install emboss

2.3. Structure alignment

To use Foldseek for structure alignment, download the build matching your platform:

# Linux AVX2 build (check using: cat /proc/cpuinfo | grep avx2)
wget https://mmseqs.com/foldseek/foldseek-linux-avx2.tar.gz; tar xvzf foldseek-linux-avx2.tar.gz; export PATH=$(pwd)/foldseek/bin/:$PATH

# Linux SSE2 build (check using: cat /proc/cpuinfo | grep sse2)
wget https://mmseqs.com/foldseek/foldseek-linux-sse2.tar.gz; tar xvzf foldseek-linux-sse2.tar.gz; export PATH=$(pwd)/foldseek/bin/:$PATH

# Linux ARM64 build
wget https://mmseqs.com/foldseek/foldseek-linux-arm64.tar.gz; tar xvzf foldseek-linux-arm64.tar.gz; export PATH=$(pwd)/foldseek/bin/:$PATH

# macOS
wget https://mmseqs.com/foldseek/foldseek-osx-universal.tar.gz; tar xvzf foldseek-osx-universal.tar.gz; export PATH=$(pwd)/foldseek/bin/:$PATH

Documentation

1. DatasetGenerator

The HestiaDatasetGenerator allows for the easy generation of training/validation/evaluation partitions with different similarity thresholds, enabling the estimation of model generalisation capabilities. It also allows for the calculation of the ABOID, the area between the out-of-distribution similarity-performance curve and the in-distribution performance (computed via calculate_augood in the example below).

from hestia.dataset_generator import HestiaDatasetGenerator, SimilarityArguments

# Initialise the generator for a DataFrame
generator = HestiaDatasetGenerator(df)

# Define the similarity arguments (for more info see the documentation page https://ibm.github.io/Hestia-OOD/datasetgenerator)
args = SimilarityArguments(
    data_type='protein', field_name='sequence',
    similarity_metric='mmseqs2+prefilter', verbose=3
)

# Calculate the similarity
generator.calculate_similarity(args)

# Calculate partitions
generator.calculate_partitions(min_threshold=0.3,
                               threshold_step=0.05,
                               test_size=0.2, valid_size=0.1)

# Save partitions
generator.save_precalculated('precalculated_partitions.gz')

# Load pre-calculated partitions
generator.from_precalculated('precalculated_partitions.gz')

# Training code

for threshold, partition in generator.get_partitions():
    train = df.iloc[partition['train']]
    valid = df.iloc[partition['valid']]
    test = df.iloc[partition['test']]

# ...
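# Note (illustrative assumption, not part of the original example): `results`
# used below is expected to hold the performance values gathered per threshold
# in the loop above (e.g. one 'test_mcc' value per partition); the exact
# structure expected by calculate_augood is described on the documentation
# page linked above.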

# Calculate AU-GOOD
generator.calculate_augood(results, 'test_mcc')

# Plot GOOD
generator.plot_good(results, 'test_mcc')

# Compare two models
results = {'model A': [values_A], 'model B': [values_B]}
generator.compare_models(results, statistical_test='wilcoxon')

2. Similarity calculation

Calculating pairwise similarity between the entities within a DataFrame df_query, or between two DataFrames df_query and df_target, can be achieved through the similarity functions in hestia.similarity (for example, sequence_similarity_mmseqs):

from hestia.similarity import sequence_similarity_mmseqs
import pandas as pd

df_query = pd.read_csv('example.csv')

# The CSV file needs to have a column describing the entities, i.e., their sequence, their SMILES, or a path to their PDB structure.
# This column corresponds to `field_name` in the function.

sim_df = sequence_similarity_mmseqs(df_query, field_name='sequence', prefilter=True)
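
The same function can also score a query DataFrame against a separate target DataFrame, as mentioned above. A minimal sketch, assuming the target is passed as a df_target keyword argument (check the Similarity calculation documentation for the exact argument name):

df_target = pd.read_csv('reference.csv')  # hypothetical second dataset

# Similarities between every entity in df_query and every entity in df_target
sim_df = sequence_similarity_mmseqs(df_query, df_target=df_target,
                                    field_name='sequence', prefilter=True)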

More details about similarity calculation can be found in the Similarity calculation documentation.

3. Clustering

Clustering the entities within a DataFrame df can be achieved through the generate_clusters function:

from hestia.similarity import sequence_similarity_mmseqs
from hestia.clustering import generate_clusters
import pandas as pd

df = pd.read_csv('example.csv')
sim_df = sequence_similarity_mmseqs(df, field_name='sequence')
clusters_df = generate_clusters(df, field_name='sequence', sim_df=sim_df,
                                cluster_algorithm='CDHIT')

There are three clustering algorithms currently supported: CDHIT, greedy_cover_set, or connected_components. More details about clustering can be found in the Clustering documentation.
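
For example, switching from CDHIT to connected components only changes the cluster_algorithm argument in the call above:

clusters_df = generate_clusters(df, field_name='sequence', sim_df=sim_df,
                                cluster_algorithm='connected_components')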

4. Partitioning

Partitioning the entities within a DataFrame df into training and evaluation subsets can be achieved through four different functions: ccpart, graph_part, reduction_partition, and random_partition. An example of how ccpart would be used is:

from hestia.similarity import sequence_similarity_mmseqs
from hestia.partition import ccpart
import pandas as pd

df = pd.read_csv('example.csv')
sim_df = sequence_similarity_mmseqs(df, field_name='sequence')
train, test = ccpart(df, threshold=0.3, test_size=0.2, sim_df=sim_df)

train_df = df.iloc[train, :]
test_df = df.iloc[test, :]
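
The paragraph above also mentions random_partition as a similarity-agnostic baseline. A minimal sketch, assuming it mirrors ccpart's test_size argument and returns train/test indices without needing a similarity matrix (see the Partitioning documentation for the exact signature):

from hestia.partition import random_partition

# Hypothetical usage: split entities at random, ignoring sequence similarity
train, test = random_partition(df, test_size=0.2)

train_df = df.iloc[train, :]
test_df = df.iloc[test, :]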

License

Hestia is open-source software licensed under the MIT License. Check the details in the LICENSE file.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

hestia_ood-0.0.34.tar.gz (31.5 kB)

Uploaded Source

Built Distribution

hestia_ood-0.0.34-py3-none-any.whl (30.6 kB)

Uploaded Python 3

File details

Details for the file hestia_ood-0.0.34.tar.gz.

File metadata

  • Download URL: hestia_ood-0.0.34.tar.gz
  • Upload date:
  • Size: 31.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for hestia_ood-0.0.34.tar.gz:

  • SHA256: a58638521184d759688292a9d0aad19b4ab5bc009d1e2309d965e7e860b4e938
  • MD5: a012e1726da968efe9b3895a182586f7
  • BLAKE2b-256: 8a19c4cf36b9c463b70bcbf5306b485803ea1cc351acc7d95bbcf21054c1afed


File details

Details for the file hestia_ood-0.0.34-py3-none-any.whl.

File metadata

  • Download URL: hestia_ood-0.0.34-py3-none-any.whl
  • Upload date:
  • Size: 30.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for hestia_ood-0.0.34-py3-none-any.whl:

  • SHA256: 8dd2814426a169deec2fb2ea05cebb56592f7b8faa60698f2bcc79e233e1a524
  • MD5: 1a9767c0ff8eeeefbeff0e9e419e0d07
  • BLAKE2b-256: d4835d93f82aa52013f4206d1da00a357c4c02305d1c50d5b61452855acb1d74

