High-accuracy SNV calling for bacterial isolates using AccuSNV
Version: V0.0.0.1 (Last update on 2025-04-26)
Note: This tool is powered by the Lieberman Lab SNV calling pipeline, WideVariant.
Install
Git clone:
git clone https://github.com/liaoherui/AccuSNV.git
Option-1 (via pre-built conda env - Linux only, recommended!)
cd AccuSNV/snake_pipeline
If you don't have gdown, please install it first:
pip install gdown
Download pre-built environments:
sh download_install_env.sh
Note: Please ignore the error message "tar: Exiting with failure status due to previous errors." You can still use the environment despite this message.
Activate the pre-built environment:
source accusnv_env/bin/activate
Change the permission of the file:
chmod 777 slurm_status_script.py
Option-2 (via .yaml file)
cd AccuSNV/snake_pipeline
Build the conda environment:
conda env create -n accusnv_env --file accusnv.yaml
or, using mamba:
mamba env create -n accusnv_env --file accusnv.yaml
Activate the conda environment:
conda activate accusnv_env
Copy conda-env-based Snakefile:
cp Snakefiles_diff_options/Snakefile_conda_env.txt ./Snakefile
Change the permission of the file:
chmod 777 slurm_status_script.py
Option-3 (via Bioconda, in progress)
Once installation is complete, you can test the tool with the commands below:
Test snakemake pipeline - Under snake_pipeline folder:
sh test_run.sh
sh scripts/dry-run.sh
sbatch scripts/run_snakemake.slurm
Test downstream analysis - Under local_analysis folder:
sh test_local.sh
Overview
This pipeline and toolkit detect and analyze single-nucleotide differences between closely related bacterial isolates.
Notable features
- Uses a deep learning method to avoid false negatives caused by low coverage and to reduce false positives, while also enabling visualization of the raw data.
- Enables easy evolutionary analysis, including phylogenetic tree construction, nonsynonymous vs. synonymous mutation counting, and detection of parallel evolution.
Inputs (to Snakemake cluster step):
- short-read sequencing data of closely related bacterial isolates
- an annotated reference genome
Outputs (of downstream analysis step):
- table of high-quality SNVs that differentiate isolates from each other
- parsimony tree of how the isolates are related to each other
- More details can be found here
The pipeline is split into two main components, as described below.
1. Snakemake pipeline
The first portion of AccuSNV aligns raw sequencing data from bacterial isolates to a reference genome, identifies candidate SNV positions, and creates useful data structures for supervised local data filtering. This step is implemented in the workflow management system Snakemake and is executed on a SLURM cluster. More information is available here.
Please ensure that the file slurm_status_script.py has the right permissions:
chmod 777 slurm_status_script.py
Step-1: run the Python script:
python accusnv_snakemake.py -i <input_sample_info_csv> -r <ref_dir> -o <output_dir>
One example with test data can be found in snake_pipeline/test_run.sh
If you have re-cloned the repository (e.g. a fresh download) but have already downloaded the pre-built conda environment (e.g. /path/snake_pipeline/accusnv_sub), there is no need to download it again. Just run:
python accusnv_snakemake.py -i <input_sample_info_csv> -c /path/snake_pipeline/accusnv_sub -r <ref_dir> -o <output_dir>
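For reference, the sample info CSV passed via `-i` generally lists one isolate per row. The sketch below is only an illustration of the general shape; the column names follow the WideVariant pipeline's conventions and are an assumption here, not AccuSNV's authoritative specification, so check the repository documentation (and snake_pipeline/test_run.sh) for the exact format:

```csv
Path,Sample,FileName,Reference,Group,Outgroup
/path/to/reads,S01,S01_R1.fastq.gz,Cae_ref,1,0
/path/to/reads,S02,S02_R1.fastq.gz,Cae_ref,1,0
```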
Step-2: check the pipeline using "dry-run"
sh scripts/dry-run.sh
Step-3: submit your SLURM job:
sbatch scripts/run_snakemake.slurm
Note: If you need to modify any slurm job configuration, you can edit the config.yaml file generated in your output folder: <output_dir>/conf/config.yaml
2.1. Local Python analysis
Note: This step has been incorporated into the Snakemake pipeline and will be executed automatically by default. However, you can still use this local Python script to rerun the analysis with different parameters if needed.
python new_snv_script.py -i <input_mutation_table> -c <input_raw_coverage_matrix> -r <ref_dir> -o <output_dir>
One example with test data can be found in local_analysis/test_local.sh
The second portion of AccuSNV filters candidate SNVs based on data arrays generated in the first portion and generates a high-quality SNV table and a parsimony tree. This step utilizes deep learning and is implemented with a custom Python script. More information can be found here.
2.2. Local downstream analysis
Based on the identified SNVs and the output final mutation table (in .npz format), AccuSNV offers a set of downstream analysis modules (e.g. dN/dS calculation). You can run these modules using the command below.
python accusnv_downstream.py -i test_data/candidate_mutation_table_final.npz -r ../snake_pipeline/reference_genomes/Cae_ref -o cae_accusnv_ds_pe
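As background, dN/dS compares the rate of nonsynonymous changes to the rate of synonymous changes, each normalized by the number of available sites of that class. A minimal sketch of the calculation (the function name and the counts are made up for illustration; this is not AccuSNV's code):

```python
def dnds(n_obs, s_obs, n_sites, s_sites):
    """Return the dN/dS ratio given observed nonsynonymous/synonymous
    mutation counts and the number of possible sites of each class."""
    dn = n_obs / n_sites   # rate of nonsynonymous changes per N-site
    ds = s_obs / s_sites   # rate of synonymous changes per S-site
    return dn / ds

# e.g. 12 nonsynonymous and 8 synonymous SNVs over 3000 N-sites and 1000 S-sites
print(dnds(12, 8, 3000.0, 1000.0))  # 0.5, i.e. fewer amino-acid changes than neutral
```

A ratio below 1 is conventionally read as purifying selection, above 1 as positive selection.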
Full command-line options
Snakemake pipeline - accusnv_snakemake.py
AccuSNV - SNV calling tool for bacterial isolates using deep learning.
options:
-h, --help show this help message and exit
-i INPUT_SP, --input_sample_info INPUT_SP
The path of your input sample info file --- Required
-t TF_SLURM, --turn_off_slurm TF_SLURM
If set to 1, the SLURM system will not be used for automatic job
submission. Instead, all jobs will run locally or on a single
node. (Default: 0)
-c CP_ENV, --conda_prebuilt_env CP_ENV
The absolute dir of your pre-built conda env. e.g.
/path/snake_pipeline/accusnv_sub
-r REF_DIR, --ref_dir REF_DIR
The dir of your reference genomes
-s MIN_COV_SAMP, --min_cov_for_filter_sample MIN_COV_SAMP
Before running the CNN model, low-quality samples with more than
45% of positions having zero aligned reads will be filtered out.
(default "-s 45") You can adjust this threshold with this
parameter; to include all samples, set "-s 100".
-v MIN_COV, --min_cov_for_filter_pos MIN_COV
For the filter module: on individual samples, calls must have at
least this many reads on the fwd/rev strands individually. If
many samples have low coverage (e.g. <5), you can set this
parameter to a smaller value (e.g. "-v 2"). Default is 5.
-e EXCLUDE_SAMP, --excluse_samples EXCLUDE_SAMP
The names of the samples you want to exclude (e.g. -e S1,S2,S3).
If you specify a number, such as "-e 1000", any sample with more
than 1,000 SNVs will be automatically excluded.
-g GENERATE_REP, --generate_report GENERATE_REP
Set to 0 to disable generation of the HTML report and other
related files. (default: 1)
-o OUT_DIR, --output_dir OUT_DIR
Output dir (default: current dir/wd_out_(uid), uid is generated
randomly)
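To make the -s and -v thresholds above concrete, here is a small sketch of the filtering logic in Python. The function names and toy coverage values are hypothetical; the real filters are implemented inside the pipeline:

```python
def fraction_zero_coverage(coverage):
    """Fraction of genome positions with zero aligned reads for one sample."""
    return sum(1 for c in coverage if c == 0) / len(coverage)

def keep_sample(coverage, max_zero_frac=0.45):
    """Mimic '-s 45': drop samples with more than 45% zero-coverage positions."""
    return fraction_zero_coverage(coverage) <= max_zero_frac

def call_passes_strand_cov(fwd_reads, rev_reads, min_cov=5):
    """Mimic '-v 5': require min_cov reads on each strand individually."""
    return fwd_reads >= min_cov and rev_reads >= min_cov

print(keep_sample([0, 0, 7, 9, 12]))    # True: 40% zero-coverage positions
print(call_passes_strand_cov(6, 3))     # False: reverse strand below 5 reads
```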
Local downstream analysis - accusnv_downstream.py
SNV calling tool for bacterial isolates using deep learning.
options:
-h, --help show this help message and exit
-i INPUT_MAT, --input_mat INPUT_MAT
The input mutation table in npz file
-r REF_DIR, --ref_dir REF_DIR
The dir of your reference genomes
-c MIN_COV, --min_cov_for_call MIN_COV
For the fill-N module: on individual samples, calls must have at
least this many fwd+rev reads. Default is 1.
-q MIN_QUAL, --min_qual_for_call MIN_QUAL
For the fill-N module: on individual samples, calls must have at
least this minimum quality score. Default is 30.
-b EXCLUDE_RECOMB, --exclude_recomb EXCLUDE_RECOMB
Whether to include SNVs from potential recombination events.
Included by default. Set "-b 1" to exclude these positions in
downstream analysis modules.
-f MIN_FREQ, --min_freq_for_call MIN_FREQ
For the fill-N module: on individual samples, a call's major
allele must have at least this freq. Default is 0.75.
-o OUTPUT_DIR, --output_dir OUTPUT_DIR
The output dir
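The -c, -q, and -f thresholds above can be read as a single per-sample decision: keep the basecall, or mask it as "N". The sketch below is an illustrative reading of the help text (the function name and structure are assumptions), not AccuSNV's internal implementation:

```python
def call_or_n(fwd, rev, qual, major_freq,
              min_cov=1, min_qual=30, min_freq=0.75):
    """Return True to keep the basecall, False to mask it as 'N'."""
    if fwd + rev < min_cov:     # -c: minimum total fwd+rev coverage
        return False
    if qual < min_qual:         # -q: minimum quality score
        return False
    if major_freq < min_freq:   # -f: minimum major-allele frequency
        return False
    return True

print(call_or_n(fwd=3, rev=2, qual=35, major_freq=0.9))  # True: all thresholds met
print(call_or_n(fwd=3, rev=2, qual=35, major_freq=0.6))  # False: major allele too rare
```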
Output
The main output folder structure of Snakemake pipeline is shown below:
1-Mapping - Alignment temporary files
2-Case - candidate mutation tables for 3-AccuSNV
3-AccuSNV - Main output of Snakemake pipeline
Important and major output files:
| Header | Description |
|---|---|
| candidate_mutation_table_final.npz | NPZ table used for downstream analysis modules. |
| snv_table_merge_all_mut_annotations_final.tsv | Text report containing the identified SNVs and related information. |
| snv_qc_heatmap_*.png | QC figures |
| snv_table_with_charts_final.html | HTML report displaying comprehensive information about the identified SNVs. Note: to see the bar charts in the HTML file, make sure the folder "bar_charts" is in the same directory as the HTML file. |
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file accusnv-1.0.0.5.tar.gz.
File metadata
- Download URL: accusnv-1.0.0.5.tar.gz
- Upload date:
- Size: 1.2 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.23
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 47181d53fac4e4baf992542813864bf7d162caa9ec9f2054f84ae37ab8384ac6 |
| MD5 | d8ba252be16a15ca25a6bc1d914a2921 |
| BLAKE2b-256 | aecf8867975ae367b45de83fab72414e6bf83cf6faf93b6172f3dee2808978ab |
File details
Details for the file accusnv-1.0.0.5-py3-none-any.whl.
File metadata
- Download URL: accusnv-1.0.0.5-py3-none-any.whl
- Upload date:
- Size: 1.2 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.23
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 17ebe5f30a312d65b3da62d458abb5b8f267917015a8af8a4e1e21bb251e9966 |
| MD5 | 308f8d13f9734438a72a777b4a82a8fd |
| BLAKE2b-256 | c2cf1875f50492fd577ec828faecdf9075d2e994b88cf7cec261bd31262734c1 |