
A tool for submitting to NCBI (SRA, BioSample, & GenBank).


ncbi-submit

Submitting data to public databases is essential for publicly funded laboratories, but the process is not always quick or intuitive. ncbi-submit provides a simple, repeatable way to make programmatic submissions to NCBI's SRA and GenBank with shared or unique BioProjects and BioSamples. Data can be uploaded as XML or zip files to either the Test or Production environment, and once there, the reports produced by NCBI can be analyzed to check submission status and retrieve BioSample accessions.


Installation

To install from PyPI in a virtual environment .venv:

python3 -m venv .venv
. .venv/bin/activate
pip install ncbi-submit

To install from conda (not yet set up) in a new environment ncbi:

conda create -n ncbi ncbi-submit

Testing

Add NCBI credentials to file ./.login_credentials or edit them in either:

  • ./example/test.sh or
  • ./config/config.py

To test creating all example files, run:

./example/test.sh

This script could also be a good starting point for your own NCBI submission pipelines. Note: several blocks of code in it can be commented in or out as needed.


Usage

ncbi_submit is intended for use on the command line, but the class ncbi.NCBI can be imported and used within custom Python scripts.

There are three main actions the script can do:

  • file_prep:
    • Prepares .tsv & .xml files for SRA, BioSample, & BioProject submissions
    • Used to prepare all files for initial submission to NCBI
    • To add in BioSample accessions and prepare for GenBank submission, include the flag --prep_genbank:
      • Prepares .zip, .sbt, & .tsv files for GenBank Submission
      • Used to add BioSample accessions from a BioSample submission for a GenBank submission
  • ftp submission or checkup:
    • Interacts with NCBI's ftp host to do any of the following:
      • submit data to NCBI databases
      • check on previous ftp submissions
      • get-accessions from all previous ftp submissions
  • example:
    • Writes out example files for one or both of:
      • config.py file (tells ncbi_submit lots of important info)
      • template.sbt (used for genbank submission)

Setup

The required parameters vary by which of the above actions you're attempting, but at minimum a plate and outdir are required. To limit the number of parameters passed via the command line, a config file must be used. When running from the command line, one of the three actions (file_prep, ftp, or example) must be specified. With Python, these are associated methods you may use on a single NCBI object.

Run this command to get an example config.py file in a directory called './ncbi':

ncbi_submit example --config --outdir "./ncbi"
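For orientation, here is a minimal, hypothetical sketch of the bioproject section that later steps in this document reference. Generate the real template with the command above; the actual file contains many more settings, and the placeholder values below are assumptions, not defaults.

```python
# Hypothetical excerpt of config.py -- the real template comes from
# `ncbi_submit example --config`. Only the two bioproject keys below are
# referenced elsewhere in this document; the values are placeholders.
bioproject = {
    "create_new": False,                    # set True to request a new BioProject
    "bioproject_accession": "PRJNA000000",  # fill in once NCBI assigns one
}
```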

How to create a BioProject accession

A BioProject accession can be created in NCBI's submission portal, but it can also be created by ncbi-submit either as part of a BioSample/SRA submission or all by itself.

Steps for creating a new BioProject accession via ncbi-submit:

  1. In your config.py file, set bioproject['create_new'] = True
  2. Follow the below file preparation advice
  3. Follow the below file submission advice, but if you're only creating a BioProject and don't want to submit any other data, you can omit the --fastq_dir and --plate options and specify a --subdir instead (as the name of the directory to be used in NCBI's ftp site)
  4. Once you have results, add the new accession to your config.py file at bioproject['bioproject_accession'] and set bioproject['create_new'] = False

File Preparation

Python instantiation (not needed on command line):

Note: This is the minimum required info for preparing data. Other parameters may be necessary for more functionality or other tasks.

from ncbi_submit import ncbi_submit
ncbi = ncbi_submit.NCBI(
    fastq_dir = myFastqDir,
    seq_report = mySeqReport,
    plate = myPlate,
    outdir = myOutdir,
    config_file = myConfig,
    )
ncbi.write_presubmission_metadata()

Shell:

ncbi_submit file_prep \
    --test_mode --test_dir \
    --config "${NCBI_CONFIG}" \
    --seq_report "${SEQ_REPORT}" \
    --primer_map "${PRIMER_MAP}" \
    --primer_scheme "${SCHEME_VERSION}" \
    --outdir "${NCBI_DIR}" \
    --gisaid_log "${GENERIC_GISAID_LOG//PLATE/$PLATE}" \
    --fastq_dir ${FASTQS} \
    --plate "${PLATE}"

Python:

ncbi.write_presubmission_metadata()

File Submission

NOTE: Once you're ready, you can drop the --test_mode and --test_dir flags

Shell:

# if submitting to BioSample and SRA (and if creating a new BioProject):
ncbi_submit ftp submit \
    --db bs_sra \
    --test_mode --test_dir \
    --config "${NCBI_CONFIG}" \
    --outdir "${NCBI_DIR}" \
    --fastq_dir "${FASTQS}"

# if only creating a new BioProject:
ncbi_submit ftp submit \
    --db 'bp' \
    --test_mode --test_dir \
    --config "${NCBI_CONFIG}" \
    --subdir "${NCBI_SUBDIR}" \
    --outdir "${NCBI_DIR}"

# wait a while and try this to download reports and view submission status
ncbi_submit ftp check \
    --plate "${PLATE}" \
    --db bs_sra \
    --test_mode --test_dir \
    --config "${NCBI_CONFIG}" \
    --outdir "${NCBI_DIR}"

Python:

# if submitting to BioSample and SRA (and if creating a new BioProject):
ncbi.submit(db="bs_sra")
# if only creating a new BioProject:
ncbi.submit(db="bp")

# wait a while and try this to download reports and view submission status
ncbi.check(db="bs_sra")

GenBank submission

(NOTE: not fully tested) To link your fasta in GenBank to the associated reads, you'll want to add the BioSample accessions before submitting.

  • Acquire BioSample accessions via one of these methods:
    • download the accessions.tsv file from NCBI and then use ncbi_submit
      • (Do this if you submitted to BioSample via NCBI's Submission Portal)
    • use ncbi_submit for everything
      • (Do this to avoid manual uploads via NCBI's Submission Portal)

Shell:

# download report.xml files to get accessions from
ncbi_submit ftp check \
    --db ${DB} \
    --outdir "${NCBI_DIR}" \
    --config "${NCBI_CONFIG}" \
    -u "${ncbi_username}" \
    -p "${ncbi_password}" \
    --plate "${PLATE}" \
    --fastq_dir "${FASTQS}"

# add accessions to genbank.tsv
ncbi_submit file_prep --prep_genbank \
    --outdir "${NCBI_DIR}" \
    --config ${NCBI_CONFIG} \
    --fasta "${GENERIC_CONSENSUS//PLATE/$PLATE}" \
    --plate "${PLATE}"

# submit to GenBank (NOTE: db='gb')
ncbi_submit ftp submit \
    --db gb \
    --test_mode --test_dir \
    --config "${NCBI_CONFIG}" \
    --outdir "${NCBI_DIR}" \
    --fastq_dir "${FASTQS}"

Python:

# download report.xml files to get accessions
ncbi.check(db="bs_sra")
# prepare genbank submission files and submit
ncbi.submit(db="gb")

# or

# files can also be prepared without submitting via:
ncbi.write_genbank_submission_zip()

Check Submission Status

Wait a while (10+ minutes) for NCBI to start processing the submission. Then run this to download reports and view submission status. This works for whichever db you want to check on; if none is specified, you'll get results for all submitted dbs.

Shell:

# check GenBank submission status (NOTE: db='gb')
ncbi_submit ftp check \
    --db gb \
    --test_mode --test_dir \
    --config "${NCBI_CONFIG}" \
    --outdir "${NCBI_DIR}"

Python:

# check GenBank submission status (NOTE: db='gb')
ncbi.check(db='gb')

How to get accessions (BioSample, SRA)

To acquire the accessions for all samples submitted via ftp under your group's account, ncbi_submit can download all xml report files and parse out the accession details. A directory will be created in outdir containing all submission-specific directories, each containing its report files. The -f or --files flag allows a list of report files to be supplied; if provided, those files will be parsed for accession details rather than downloading the latest report files. NCBI only stores uploads for a limited time, so accessions found in newly downloaded reports are combined with those from previously downloaded report files to get the most complete picture. It is therefore important to run ncbi_submit ftp check after each submission has been processed to ensure accurate results. The database can be specified to indicate which accessions are desired, yielding a csv (for the BioProject associated with your current config file) at <outdir>/accessions_<bioproject>.csv with the following fields:

database   fields
'bs_sra'   sample_name, BioSample, SRA
'bs'       sample_name, BioSample
'sra'      sample_name, SRA
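Extracting accessions from a downloaded report.xml can be sketched as below. The element and attribute names used here (SubmissionStatus, Object, accession, spuid) are assumptions about the report shape, not a guaranteed schema, and this is not ncbi_submit's own parser.

```python
# Sketch: pull accessions out of a report.xml. The XML shape below is an
# assumption about NCBI's reports, not a documented schema.
import xml.etree.ElementTree as ET

def accessions_from_report(xml_text):
    """Map (target_db, spuid) pairs to assigned accessions."""
    root = ET.fromstring(xml_text)
    found = {}
    for obj in root.iter("Object"):
        spuid = obj.get("spuid")
        accession = obj.get("accession")
        if spuid and accession:
            found[(obj.get("target_db"), spuid)] = accession
    return found

example_report = """\
<SubmissionStatus status="processed-ok">
  <Action target_db="BioSample" status="processed-ok">
    <Response status="processed-ok">
      <Object target_db="BioSample" accession="SAMN00000001" spuid="samp1"/>
    </Response>
  </Action>
</SubmissionStatus>"""

print(accessions_from_report(example_report))
# {('BioSample', 'samp1'): 'SAMN00000001'}
```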

Get accessions by downloading report.xml files

Shell:

ncbi_submit ftp get-accessions \
    --db "bs_sra" \
    --config "${NCBI_CONFIG}" \
    --outdir "${REPORT_DIR}" \
    -u "${ncbi_username}" \
    -p "${ncbi_password}"

Python:

ncbi.get_all_accessions(db="bs_sra")

Get accessions from list of report.xml files

Shell:

ncbi_submit ftp get-accessions \
    --db "bs_sra" \
    --config "${NCBI_CONFIG}" \
    --outdir "${REPORT_DIR}" \
    -u "${ncbi_username}" \
    -p "${ncbi_password}" \
    -f s1/report.xml s2/report.xml

Python:

ncbi.get_all_accessions(db="bs_sra",report_files=["file1", "file2"])

Updating samples that have already been submitted

Fastq read updates

If you want to update the reads for a sample you've already submitted, you must do the following:

  1. Email nlm-support@nlm.nih.gov and supply them with a list of SRA runs to suppress.
  2. Once suppressed, you can upload a new version of the sample where the submission.xml:
    • references the BioSample (rather than submitting a new BioSample block) and
    • has a new, unique SPUID for the SRA action block.

The submission.xml can be prepared as shown below and then submitted as discussed previously in File Submission. Whereas normally an error would occur if a previously submitted sample appears in the seq_report file, the flag --update_reads tells ncbi_submit to look up the BioSample accessions of previously submitted samples and include them in the submission.xml. In most cases, updating reads for a sample requires a new SRA SPUID. The --spuid_endings flag takes a parameter mapping the samples being updated to a suffix; for any explicitly named samples, the suffix will be appended to the automatically generated SPUID. Usually '2' is a good suffix choice (unless another update has already been made using that same suffix for the sample of interest).
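For illustration, the correspondence between the shell flag's string form ('suffix1:samp1,samp2;suffix2:samp3') and the dict form the Python API takes can be sketched with a hypothetical helper; this is not ncbi_submit's own parser.

```python
# Hypothetical helper showing how a --spuid_endings string corresponds to
# the sample->suffix dict used by the Python API; ncbi_submit's real
# parsing may differ.
def parse_spuid_endings(spec):
    """'suffix1:samp1,samp2;suffix2:samp3' ->
       {'samp1': 'suffix1', 'samp2': 'suffix1', 'samp3': 'suffix2'}"""
    mapping = {}
    for group in spec.split(";"):
        suffix, _, samples = group.partition(":")
        for sample in samples.split(","):
            if sample:
                mapping[sample.strip()] = suffix.strip()
    return mapping

print(parse_spuid_endings("suffix1:samp1,samp2;suffix2:samp3"))
# {'samp1': 'suffix1', 'samp2': 'suffix1', 'samp3': 'suffix2'}
```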

Other metadata updates

These are not currently supported but could be added in the future if they seem important/useful.

Shell:

ncbi_submit file_prep \
    --config "${NCBI_CONFIG}" \
    --seq_report "${SEQ_REPORT}" \
    --outdir "${NCBI_DIR}" \
    --fastq_dir ${FASTQS} \
    --plate "${PLATE}" \
    --update_reads \
    --spuid_endings 'suffix1:samp1,samp2;suffix2:samp3'

Python:

ncbi.write_presubmission_metadata(update_reads=True,spuid_endings={"sample1":"suffix1", "sample2":"suffix1", "sample3":"suffix2"})

Input File Paths Explained

Required Files

  • config: Contains preset values and details about your lab, team, and submission plans that are necessary for submission.
  • seq_report: Main metadata file with sample details - can be equivalent to NCBI's BioSample TSV for use with the Submission Portal.

Optional Files

  • exclude_file: Contains a list of "sample_name"s to exclude from NCBI submission (each one on a new line).
  • barcode_map: Used as a cross-reference. If all samples from barcode_map appear in seq_report, that's great. Otherwise, you'll get a warning with directions for adding samples to the exclude_file if they shouldn't be submitted. File should have no headers. Lines must be: "{barcode}\t{sample_name}".
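The barcode_map cross-reference described above can be sketched as follows, assuming the stated "{barcode}\t{sample_name}" line format. This is a hypothetical check, not ncbi_submit's own validation logic.

```python
# Hypothetical cross-reference of barcode_map against seq_report sample
# names, per the file formats described above; not ncbi_submit's own code.
def unaccounted_samples(barcode_map_lines, seq_report_names, excluded):
    """Return barcode_map samples missing from seq_report and not excluded."""
    mapped = set()
    for line in barcode_map_lines:
        line = line.rstrip("\n")
        if not line:
            continue
        barcode, sample_name = line.split("\t")  # "{barcode}\t{sample_name}"
        mapped.add(sample_name)
    return sorted(mapped - set(seq_report_names) - set(excluded))

missing = unaccounted_samples(
    ["bc01\tsampA", "bc02\tsampB", "bc03\tsampC"],
    seq_report_names=["sampA"],
    excluded=["sampC"],
)
print(missing)
# ['sampB']  -> sampB would trigger the warning described above
```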

Sometimes Required Paths

  • fastq_dir: Required for file_prep and ftp if submitting reads to SRA. Indicates where the fastqs should be gathered from. Any fastqs with "sample_name" values that aren't supposed to be submitted will be ignored.
  • outdir: Highly recommended but will default to "./ncbi" or "./ncbi_test". A directory to house output (submission reports, exclude_file, output from file_prep). Will be created, if needed.
  • subdir: Only used for ftp tasks. A prefix to use for submissions for the given dataset. Defaults to plate, if plate is provided.

Links to xml template examples/schema:

File type  BioProject  BioSample  SRA     GenBank  Description/Link
Webpage    -           -          -       -        Protocols & TSVs for use at Submission Portal
XML        create      create     create  -        SRA submission w/ new BioSample & BioProject
XML        link        create     create  -        SRA submission w/ new BioSample & existing BioProject
XML        link        link       create  -        SRA submission w/ existing BioSample & BioProject
XML        -           -          -       create   GenBank XML
doc        -           -          -       example  Example GenBank submission zip
XSD        -           schema     -       -        BioSample XML Schema
XSD        schema      -          -       -        BioProject XML Schema
err        -           -          -       -        validate: Submission Error Explanations

