
PriorCons

Tool for the integration of viral consensus sequences obtained by de novo and mapping strategies, supported by prior information.

This repository provides tools to:

  1. Generate Integrated Consensus (integrate_consensus.py)
    Produces a high-quality viral consensus by strategically using ABACAS sequences to fill missing regions in the mapping consensus. It employs a sliding-window approach that verifies the evolutionary plausibility of ABACAS content against empirical priors before incorporation.

  2. Build Evolutionary Priors (build_priors.py)
    Constructs empirical prior distributions from large multiple-sequence alignments. These priors model expected genetic variation across genomic windows and provide likelihood thresholds for quality control during consensus integration.

  3. Access Supporting Utilities (utils scripts)
    Provides modular helper functions for alignment processing, window scoring, and consensus construction used by both main workflows.


Installation

pip install priorcons

CLI usage

# Create priors
priorcons build-priors --input sequences.fasta --ref REF_ID --output priors.parquet

# Run consensus integration
priorcons integrate-consensus --input alignment.aln --ref REF_ID --prior priors.parquet --output_dir results

🚀 Main Script: integrate_consensus.py

This is the entry point of the tool. It creates an integrated consensus sequence by combining the mapping consensus and the ABACAS output, both aligned to a reference sequence, but only after performing quality control (QC) at the window level.

🔑 Inputs

  • --input → path to an alignment file (.aln) containing, in this order:
    • 1. Reference sequence
    • 2. Mapping consensus sequence
    • 3. ABACAS consensus sequence

The sequences in the alignment file must be provided in the specified order, as they will be identified by their position.

  • --ref → ID of the reference sequence in the alignment.

  • --prior → path to a priors table (.parquet) generated with build_priors.py.

  • --output_dir → directory to save the results.


🧪 Workflow

  1. Start with mapping consensus as the baseline
  2. Identify missing/unreliable regions in mapping consensus
  3. For each window:
    • If mapping has coverage → keep mapping sequence
    • If mapping has missing data → evaluate ABACAS for that window:
      • Check fragmentation and quality
      • Verify evolutionary plausibility using priors (nLL score)
      • If ABACAS passes QC → use ABACAS to fill missing regions
  4. Construct final consensus combining mapping baseline with validated ABACAS fills
  5. Restore mapping-specific insertions
  6. QC reporting: compute coverage, substitutions, and insertion metrics comparing the final integrated consensus to MAPPING.
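The per-window decision in step 3 can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the actual PriorCons implementation (the real tool also restores insertions and writes the QC trace):

```python
# Sketch of the per-window source decision described above.
# All names and thresholds are illustrative, not the PriorCons API.

def count_missing(seq: str) -> int:
    """Count missing bases (N or gap) in a window slice."""
    return sum(1 for b in seq.upper() if b in "N-")

def count_fragments(seq: str) -> int:
    """Number of contiguous non-missing runs (fragmentation level)."""
    fragments, in_run = 0, False
    for b in seq.upper():
        if b in "N-":
            in_run = False
        elif not in_run:
            fragments, in_run = fragments + 1, True
    return fragments

def choose_source(mapping_win: str, abacas_win: str,
                  nll_score: float, nll_p95: float) -> str:
    """Decide which consensus fills this window."""
    if count_missing(mapping_win) == 0:
        return "MAPPING"                      # mapping has full coverage
    more_info = count_missing(abacas_win) < count_missing(mapping_win)
    ok_fragments = 0 < count_fragments(abacas_win) < 3
    plausible = nll_score <= nll_p95          # passes the prior threshold
    if more_info and ok_fragments and plausible:
        return "ABACAS"                       # validated fill
    return "MAPPING"                          # keep baseline (may stay N)
```

A window where ABACAS is empty (0 fragments) or too fragmented (3 or more fragments) is rejected even if its nLL score passes.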

📦 Outputs

The script produces three files inside --output_dir:

  1. Integrated consensus FASTA

    • File: <basename>-INTEGRATED.fasta
    • Contains the final consensus sequence after merging and reinserting insertions.
  2. Window QC trace (CSV)

    • File: windows_trace.csv
    • One row per window, recording:
      • start, end → genomic coordinates.
      • MISSING_MAPPING, MISSING_ABACAS → counts of missing bases.
      • ABACAS_MORE_INFO → whether ABACAS has fewer missing bases than MAPPING.
      • ABACAS_FRAGMENTS → fragmentation level of ABACAS in this window (kept if 0 < n fragments < 3).
      • WINDOW_PRIOR_nLL_p95 → threshold from priors.
      • WINDOW_SCORE_nLL → score of ABACAS in this window.
      • WINDOW_QC_PASSED → True/False decision.
  3. Consensus QC summary (JSON)

    • File: qc.json
    • Provides overall metrics comparing the MAPPING consensus and the integrated consensus:
      • MAPPING_COVERAGE → % of genome covered in MAPPING.
      • FINAL_COVERAGE → % of genome covered in integrated consensus.
      • MAPPING_SUBSTITUTIONS → substitutions vs. reference in MAPPING.
      • FINAL_SUBSTITUTIONS → substitutions vs. reference in integrated consensus.
      • EXPECTED_SUBSTITUTIONS → expected number of substitutions, extrapolated from mapping.
      • OBS-EXP_SUBSTITUTIONS → difference between observed and expected substitutions.
      • N_INSERTIONS → number of insertions added back.
      • TOTAL_INSERTIONS_LENGTH → total inserted length.
      • INSERTIONS → list of insertions with their coordinates.
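As an illustration, the qc.json summary can be inspected like this. The key names follow the field list above; the derived helpers and the example values are hypothetical:

```python
import json

# Hypothetical helpers for inspecting the qc.json summary; the key
# names follow the field list above.

def coverage_gain(qc):
    """Percentage points of coverage recovered by the integration."""
    return qc["FINAL_COVERAGE"] - qc["MAPPING_COVERAGE"]

def substitution_excess(qc):
    """Observed minus expected substitutions (OBS-EXP_SUBSTITUTIONS)."""
    return qc["FINAL_SUBSTITUTIONS"] - qc["EXPECTED_SUBSTITUTIONS"]

# Example payload in the qc.json shape (values are made up).
payload = json.loads(
    '{"MAPPING_COVERAGE": 82.5, "FINAL_COVERAGE": 97.1,'
    ' "FINAL_SUBSTITUTIONS": 41, "EXPECTED_SUBSTITUTIONS": 38.0}'
)
print(round(coverage_gain(payload), 1))   # 14.6
```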

▶️ Example run

python integrate_consensus.py \
  --input /path/to/<sample_name>.aln \
  --ref RSV_BD \
  --prior /path/to/RSVBD_win100_ovlp50_priors.parquet \
  --output_dir results

This will generate:

  • results/<sample_name>-INTEGRATED.fasta
  • results/windows_trace.csv
  • results/qc.json

🛠 Script: build_priors.py

This script builds empirical priors over overlapping windows from a large multiple-sequence alignment.
These priors are later used by integrate_consensus.py to evaluate windows.

🔑 Inputs

  • -i / --input → aligned FASTA file with multiple sequences.
  • -r / --ref → ID of the reference sequence.
  • -o / --output → output file (.parquet).
  • --win → window size (default: 100).
  • --overlap → overlap size (default: 10).
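With the defaults above (--win 100, --overlap 10), consecutive windows start win − overlap = 90 bases apart. A minimal sketch of such a generator, assuming half-open coordinates (the packaged sliding_windows helper may differ):

```python
def sliding_windows(length, win=100, overlap=10):
    """Yield (start, end) half-open coordinates of overlapping windows.

    Consecutive windows share `overlap` bases, so the step is win - overlap.
    The final window is clipped to the sequence length.
    """
    step = win - overlap
    start = 0
    while start < length:
        yield (start, min(start + win, length))
        if start + win >= length:
            break
        start += step

print(list(sliding_windows(250, win=100, overlap=10)))
# [(0, 100), (90, 190), (180, 250)]
```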

▶️ Example run

python build_priors.py \
  -i alignment.fasta \
  -r ReferenceID \
  -o priors.parquet \
  --win 100 \
  --overlap 10

📦 Output

A .parquet file with one row per window, containing:

  • start, end → window coordinates.
  • nLL_p95, nLL_p99 → empirical thresholds.
  • profile → base probability distributions for each position in the window.

🧮 Methodology (build_priors.py)

1. Probability distributions per position

For each window of size W bases (e.g., W = 100), and for each position j within that window, we compute the probability of observing each nucleotide b:

P_j(b) = (c_j(b) + α) / Σ_{b'} (c_j(b') + α)

Where:

  • c_j(b) = number of sequences with base b at position j.
  • α = pseudocount (Laplace smoothing, default α = 1) to avoid zero probabilities.
  • Bases N are ignored in the counts.

This gives a per-position categorical distribution.
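A minimal sketch of this smoothed per-position distribution (illustrative, not the packaged implementation):

```python
from collections import Counter

# Per-position categorical distribution with Laplace smoothing, as
# described above. N and gap characters are excluded from the counts.

BASES = "ACGT"

def position_profile(column, alpha=1.0):
    """Return P_j(b) for one alignment column."""
    counts = Counter(b.upper() for b in column if b.upper() in BASES)
    total = sum(counts.values()) + alpha * len(BASES)
    return {b: (counts[b] + alpha) / total for b in BASES}
```

For a column with three As and one N (ignored), P(A) = (3 + 1) / (3 + 4) = 4/7 and each other base gets 1/7.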


2. Log-likelihood of a sequence in a window

Given a query sequence Q, we compute its probability under the window profile.
For each valid (non-N) position j with observed base q_j:

logL(Q) = Σ_j log P_j(q_j)

The normalized negative log-likelihood (nLL) is:

nLL(Q) = −logL(Q) / N_valid

Where:

  • N_valid = number of positions in the window where Q has a non-N base.

Smaller nLL values indicate sequences that are more likely under the empirical profile.
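The two formulas above combine into a single scoring function. This sketch assumes the window profile is a list of per-position dictionaries mapping base → P_j(b); names are illustrative:

```python
import math

# Normalized negative log-likelihood of a query under a window profile,
# as defined above. Positions where the query has N are skipped.

def window_nll(query, profile):
    """Return nLL(Q) = -logL(Q) / N_valid (inf if no valid positions)."""
    log_l, n_valid = 0.0, 0
    for j, q in enumerate(query.upper()):
        p = profile[j].get(q)
        if p is not None:                 # skip N / ambiguous bases
            log_l += math.log(p)
            n_valid += 1
    if n_valid == 0:
        return float("inf")               # no informative positions
    return -log_l / n_valid
```

Under a uniform profile every base contributes log 4 ≈ 1.386, so nLL is the same for any query; informative profiles push nLL down for typical sequences.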

3. Empirical priors

To characterize "normal variation" for each window:

  1. Score all sequences from the alignment against the window profile.
  2. Collect the distribution of nLL values.
  3. Extract percentiles (e.g., 95th and 99th) to serve as thresholds.

Thus, for each window we store:

  • The distribution (profile).
  • Empirical thresholds: nLL_p95 and nLL_p99.

A new sequence can later be compared:

  • If nLL < nLL_p95 → typical.
  • If nLL > nLL_p99 → unusually variable, possibly unreliable region.
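The thresholding step can be sketched with a simple nearest-rank percentile (the actual tool may use a different percentile definition, e.g. NumPy's interpolating one):

```python
import math

# Derive nLL_p95 / nLL_p99 from the nLL scores of all alignment
# sequences in one window (illustrative only).

def percentile(values, q):
    """Nearest-rank percentile of `values` (0 < q <= 100)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(q / 100 * len(ordered)))
    return ordered[rank - 1]

def window_thresholds(nll_scores):
    """Empirical thresholds stored per window."""
    return {"nLL_p95": percentile(nll_scores, 95),
            "nLL_p99": percentile(nll_scores, 99)}
```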


🧰 Supporting utils

Several utility scripts provide reusable functions for both processes:

  • utils.py → basic alignment and scoring functions:

    • load_alignment, extract_ref_positions, sliding_windows, score_window.
  • utils_integrate_consensus.py → additional helpers for consensus integration:

    • missingness and fragmentation counts,
    • insertion handling,
    • QC calculations,
    • consensus merging,
    • window evaluation wrapper.

These modular functions keep the pipeline clean and reusable.

QC analysis

The tool also provides a QC analysis step.

The input is a directory containing the per-sample results folders in which the qc files are stored:

  • input: priorcons_path / / qc_files

It also needs a GFF annotation file and an output directory.

priorcons qc --input_dir /path/to/results/PRIORCONS/ \
             --gff_file /path/to/rsv.gff \
             --output_dir /path/to/output_dir_plots

Test dataset for integration in nf-core/viralrecon pipeline

https://zenodo.org/records/17454552/files/PriorCons_Test_data.zip?download=1
