A tool for metagenomic taxonomic profiling and abundance matrix generation

The maintainers have archived this project; no new releases are expected.

Project description

toxolib (v0.1.12)

A Python package for metagenomic taxonomic profiling and abundance matrix generation.

Installation

Using pip

pip install toxolib

Install directly from GitHub

pip install git+https://github.com/dhruvac29/toxolib.git

Using conda

We recommend using conda to install all dependencies. An environment file is included in the package:

# Clone the repository
git clone https://github.com/dhruvac29/toxolib.git
cd toxolib

# Create and activate the conda environment
conda env create -f environment.yml
conda activate taxonomy_env

# Install the package
pip install -e .
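
A quick way to verify the installation is to call the CLI's built-in help (assuming the entry point is installed as toxolib, as the commands below suggest):

# Should print the available subcommands
toxolib --help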

Requirements

This package requires the following external tools to be installed and available in your PATH:

  • Kraken2
  • Bracken
  • Krona (for visualization)
  • fastp (for preprocessing)
  • bowtie2 (for host removal)
  • samtools

All these dependencies are included in the conda environment file.
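
If you install the external tools yourself instead of through conda, a small loop like this can confirm each one is on your PATH. This is only a sketch: the binary names are assumed to match the upstream defaults, and Krona's tools are prefixed kt (e.g. ktImportText).

# Report any required tool that is missing from PATH
for tool in kraken2 bracken ktImportText fastp bowtie2 samtools; do
    command -v "$tool" >/dev/null 2>&1 || echo "Missing: $tool"
done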

Database Setup

Automated Database Setup

Toxolib provides automated database setup for both local and HPC environments.

Local Database Setup

# Set up both Kraken2 and corn genome databases
toxolib db-setup -o /path/to/databases --kraken --corn

# Set up only Kraken2 database
toxolib db-setup -o /path/to/databases --kraken

# Set up only corn genome database
toxolib db-setup -o /path/to/databases --corn

# Force re-download of databases even if they exist
toxolib db-setup -o /path/to/databases --kraken --corn --force

After setup, you should set the environment variable for Kraken2:

export KRAKEN2_DB_DIR=/path/to/databases/Kraken2_DB
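
To make this setting persist across sessions, append the export to your shell profile (assuming bash):

# Persist the Kraken2 database location for future shells
echo 'export KRAKEN2_DB_DIR=/path/to/databases/Kraken2_DB' >> ~/.bashrc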

HPC Database Setup

When submitting jobs to an HPC cluster, toxolib can automatically download and set up the databases on your local machine and then upload them to the cluster:

# Automatically download locally and upload both databases to the HPC
toxolib hpc -r sample1_L001_R1.fastq.gz sample1_L001_R2.fastq.gz sample1_L002_R1.fastq.gz sample1_L002_R2.fastq.gz -o /path/on/hpc/output_dir \
    --setup-kraken-db --setup-corn-db

# Automatically download locally and upload only Kraken2 database
toxolib hpc -r sample1_L001_R1.fastq.gz sample1_L001_R2.fastq.gz sample1_L002_R1.fastq.gz sample1_L002_R2.fastq.gz -o /path/on/hpc/output_dir \
    --setup-kraken-db

# Automatically download locally and upload only corn genome database
toxolib hpc -r sample1_L001_R1.fastq.gz sample1_L001_R2.fastq.gz sample1_L002_R1.fastq.gz sample1_L002_R2.fastq.gz -o /path/on/hpc/output_dir \
    --setup-corn-db

When using these options, toxolib will:

  1. Download the databases to your local machine
  2. Extract the databases locally
  3. Upload the extracted databases to the HPC
  4. Configure the Snakefile to use the correct database paths

This approach works even if your HPC has restricted internet access or firewalls that prevent direct downloads.
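
Once the upload finishes, you can confirm the databases landed where expected using the HPC file-management commands described later (the path below is illustrative; adjust it to wherever toxolib placed the databases on your cluster):

# List the remote output directory to check the uploaded databases
toxolib hpc-ls --path /path/on/hpc/output_dir --long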

Manual Database Setup

If you prefer to set up the databases manually, you can follow these steps:

Kraken2 Database

You can download the standard Kraken2 database from: https://genome-idx.s3.amazonaws.com/kraken/k2_standard_20240112.tar.gz

wget https://genome-idx.s3.amazonaws.com/kraken/k2_standard_20240112.tar.gz
tar -xzf k2_standard_20240112.tar.gz -C /path/to/kraken2/database
export KRAKEN2_DB_DIR=/path/to/kraken2/database
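
A correctly extracted Kraken2 database directory contains the index files hash.k2d, opts.k2d, and taxo.k2d, so a quick listing is a useful sanity check:

# Expect hash.k2d, opts.k2d, and taxo.k2d among the files
ls "$KRAKEN2_DB_DIR"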

Corn Genome Database

For host removal, you can download the corn genome reference from: https://glwasoilmetagenome.s3.us-east-1.amazonaws.com/corn_db.zip

wget https://glwasoilmetagenome.s3.us-east-1.amazonaws.com/corn_db.zip
unzip corn_db.zip -d /path/to/corn_db
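
Since this database is used with bowtie2 for host removal, the archive presumably contains a bowtie2 index (files ending in .bt2 or .bt2l); a listing lets you confirm what was extracted:

# Inspect the extracted corn database
ls /path/to/corn_db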

Usage

Local Usage

Generate abundance matrix from raw data

toxolib abundance -r raw_data_1.fastq.gz raw_data_2.fastq.gz -o output_directory

This will:

  1. Run Kraken2 on the raw data
  2. Run Bracken on the Kraken2 results
  3. Generate an abundance matrix from the Bracken results
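
To process several samples in one run, you can loop over paired FASTQ files. This is a sketch that assumes your files follow a <sample>_R1.fastq.gz / <sample>_R2.fastq.gz naming pattern:

# Run the pipeline once per paired-end sample
for r1 in *_R1.fastq.gz; do
    r2=${r1/_R1/_R2}
    sample=${r1%_R1.fastq.gz}
    toxolib abundance -r "$r1" "$r2" -o "results_${sample}"
done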

Create abundance matrix from existing Bracken files

toxolib matrix -i sample1_species.bracken sample2_species.bracken -o abundance_matrix.csv
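
Since the shell expands globs before -i sees them, you can pass a whole directory of Bracken outputs at once (the results/ path is just an example):

# Combine every species-level Bracken file into one matrix
toxolib matrix -i results/*_species.bracken -o abundance_matrix.csv

# Peek at the first few rows
head -n 5 abundance_matrix.csv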

HPC Usage

Toxolib can run the analysis pipeline on an HPC cluster using SLURM for job scheduling.

1. Set up HPC connection

toxolib hpc-setup --hostname your-hpc-server.edu --username your-username --key-file ~/.ssh/id_rsa

This will save your HPC connection details to ~/.toxolib/hpc_config.yaml.
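
It is worth confirming that the same connection details work for a plain SSH login, since these are the credentials toxolib will use:

# If this fails, the toxolib HPC commands will not work either
ssh -i ~/.ssh/id_rsa your-username@your-hpc-server.edu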

2. Run the pipeline on HPC

toxolib hpc -r raw_data_1.fastq.gz raw_data_2.fastq.gz -o /path/on/hpc/output_dir \
    --kraken-db /path/on/hpc/kraken2_db \
    --corn-db /path/on/hpc/corn_db \
    --partition normal --threads 32 --memory 200 --time 144:00:00

This will:

  1. Upload your raw data files to the HPC
  2. Create a Snakemake workflow file
  3. Upload an environment.yml file to the HPC
  4. Submit a SLURM job to run the analysis
  5. Return a job ID for tracking

Automatic Conda Environment Creation

When submitting a job to the HPC, toxolib will automatically:

  1. Upload a conda environment.yml file to the HPC
  2. Create a conda environment in the output directory if it doesn't exist
  3. Activate the environment before running the analysis

This ensures all required dependencies are available on the HPC without requiring manual environment setup.

3. Check job status

toxolib hpc-status --job-id your_job_id
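
To poll until the job finishes, a plain shell loop around hpc-status works. This sketch does not parse toxolib's output, so interrupt it with Ctrl-C once the job reports completion:

# Re-check the job every 10 minutes
while true; do
    toxolib hpc-status --job-id your_job_id
    sleep 600
done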

4. Download results when complete

toxolib hpc-download --job-id your_job_id --output-dir ./local_results

5. HPC File Management

Toxolib provides several commands to manage files and directories on the HPC. All commands support the --keep-open flag to maintain a persistent connection.

Interactive HPC Shell

toxolib hpc-shell

This starts an interactive shell session with the HPC that keeps the connection open until you explicitly exit. Features include:

  • Persistent connection until you type exit or quit
  • Colored prompt showing username, hostname, and current directory
  • Built-in commands like help, cd, and pwd
  • Direct execution of any shell command

The --keep-open Flag: Persistent Connections

All HPC commands support the --keep-open flag, which:

  1. Executes the requested command
  2. Keeps the connection open
  3. Starts an interactive shell session
  4. Maintains the connection until you type exit or quit

This is especially useful for running multiple commands without reconnecting each time:

# Example workflow with persistent connection:
toxolib hpc-pwd --keep-open

# Once in the interactive shell:
$ ls -la
$ cd some_directory
$ mkdir new_folder
$ cd new_folder
$ pwd
$ exit  # Only now will the connection close

Get Current Working Directory

# Show current directory and close connection
toxolib hpc-pwd

# Show current directory and keep connection open
toxolib hpc-pwd --keep-open

Change Directory

# Change directory and close connection
toxolib hpc-cd --path /path/to/directory

# Change directory and keep connection open
toxolib hpc-cd --path /path/to/directory --keep-open

# Go up one level
toxolib hpc-cd --path ..

# Go up one level and keep connection open
toxolib hpc-cd --path .. --keep-open

Create Directory

# Create directory and close connection
toxolib hpc-mkdir --path /path/to/new/directory

# Create directory and keep connection open
toxolib hpc-mkdir --path /path/to/new/directory --keep-open

List Files

# List files in current directory and close connection
toxolib hpc-ls

# List files in current directory and keep connection open
toxolib hpc-ls --keep-open

# List files in specific directory
toxolib hpc-ls --path /path/to/directory

# List files in specific directory and keep connection open
toxolib hpc-ls --path /path/to/directory --keep-open

# Long format listing (like ls -l)
toxolib hpc-ls --long

# Show hidden files (like ls -a)
toxolib hpc-ls --all

# Combine options
toxolib hpc-ls --path /path/to/directory --long --all

# Combine options and keep connection open
toxolib hpc-ls --path /path/to/directory --long --all --keep-open

Manual Setup on HPC

If you prefer not to use the automated --setup-kraken-db and --setup-corn-db options, you can download the Kraken2 and corn genome databases locally, then upload and extract them on your HPC system yourself:

# On your local machine, download the databases
wget https://genome-idx.s3.amazonaws.com/kraken/k2_standard_20240112.tar.gz
wget https://glwasoilmetagenome.s3.us-east-1.amazonaws.com/corn_db.zip

# Upload to HPC (using scp)
scp k2_standard_20240112.tar.gz your-username@your-hpc-server.edu:/path/on/hpc/
scp corn_db.zip your-username@your-hpc-server.edu:/path/on/hpc/

# SSH into HPC and extract
ssh your-username@your-hpc-server.edu
mkdir -p /path/on/hpc/kraken2_db
tar -xzf /path/on/hpc/k2_standard_20240112.tar.gz -C /path/on/hpc/kraken2_db
mkdir -p /path/on/hpc/corn_db
unzip /path/on/hpc/corn_db.zip -d /path/on/hpc/corn_db

Then when running toxolib, specify these paths:

toxolib hpc -r raw_data_1.fastq.gz raw_data_2.fastq.gz -o /path/on/hpc/output_dir \
    --kraken-db /path/on/hpc/kraken2_db \
    --corn-db /path/on/hpc/corn_db

License

MIT

Download files

Download the file for your platform.

Source Distribution

toxolib-0.1.12.tar.gz (26.1 kB)

Built Distribution

toxolib-0.1.12-py3-none-any.whl (25.7 kB)

File details

Details for the file toxolib-0.1.12.tar.gz.

File metadata

  • Download URL: toxolib-0.1.12.tar.gz
  • Size: 26.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.2

File hashes

Hashes for toxolib-0.1.12.tar.gz

  • SHA256: 8811456673203358879666a3752dec77caf918e852b66f1188f8a72759233b4f
  • MD5: efb6ab02648a0139f10163cbeaaa5d6d
  • BLAKE2b-256: 40024409ded6ffb2e6da287d47025194b9e6b346e50647c4fd74d9466d1674b7

File details

Details for the file toxolib-0.1.12-py3-none-any.whl.

File metadata

  • Download URL: toxolib-0.1.12-py3-none-any.whl
  • Size: 25.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.2

File hashes

Hashes for toxolib-0.1.12-py3-none-any.whl

  • SHA256: dba5623138d90f86ad5f8a6dc076ed18094ff4874de8fafce966e4d29adbed45
  • MD5: e183e16a05734e632bb4d7eefe7152ae
  • BLAKE2b-256: 48b6b727dfd3d044eb297f9eda85ea740f27c2c6c26fcd01fbe5f129f0483de8
