Multithreaded NCBI edirect and ftract
Project description
medirect - a multiprocessed utility for retrieving records and parsing feature tables from NCBI
utilities
mefetch
ftract
about
As a bioinformatician I build a lot of bacterial DNA reference databases. Part of my job is to gather sequence data where it is available from outside sources. One of those sources is the NCBI nucleotide database. I designed this package to help me gather data from NCBI quickly by utilizing multiple processors to make multiple data requests to the NCBI database servers. The utilities mefetch and ftract are designed to work like efetch and xtract, can be slotted in alongside other NCBI utilities, and follow the same edirect documentation, guidelines, requirements, and usage policies. The utilities have primarily been tested on the nucleotide database but should work on any type of data available through the NCBI servers.
The mefetch utility is designed to be fast and can easily overwhelm the NCBI servers. For this reason I highlight two points from the usage policy:
Run retrieval scripts on weekends or between 9 pm and 5 am Eastern Time weekdays for any series of more than 100 requests.
Make no more than 3 requests every 1 second.
The ftract utility pattern matches features based on the three-column table structure described here: feature tables. It is designed to parse data and coordinates from feature tables, which are orders of magnitude smaller and faster to parse than the XML tables handled by the xtract parser utility available as part of the standard edirect package. The entire edirect package is available here: ftp downloads
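For reference, a feature table lists each feature as tab-separated start, stop, and feature key columns, with qualifier key/value pairs on the following indented lines. The excerpt below is an illustrative sketch of that layout only (the coordinates are reused from the 16s example further down, not actual NCBI output):

>Feature gb|KN150849.1|
594136	595654	rRNA
			product	16S ribosomal RNA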
dependencies
installation
medirect can be installed in two ways:
For regular users:
% pip3 install medirect
For developers:
% pip3 install git://github.com/crosenth/medirect.git

# or

% git clone git://github.com/crosenth/medirect.git
% cd medirect
% python3 setup.py install
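To check that the executables landed on your PATH, printing the usage text is a quick sanity check (assuming the standard --help flag):

% mefetch --help
% ftract --help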
examples
The mefetch executable works exactly like edirect efetch, with an additional multiprocessing argument -proc and a few more features.
By allowing additional processes to download records, the -proc argument provides a roughly linear speed increase when downloading large datasets.
Here is an example downloading 255,303 Rhizobium sequence accessions using one processor:
% esearch -db nucleotide -query 'Rhizobium' | time mefetch -email user@ema.il -mode text -format acc -proc 1 > accessions.txt
0.53s user 0.11s system 0% cpu 12:43.11 total
Which is equivalent to ncbi efetch:
% esearch -db nucleotide -query 'Rhizobium' | time efetch -mode text -format acc > accessions.txt
0.53s user 0.11s system 0% cpu 12:47.54 total
Adding another processor -proc 2:
% esearch -db nucleotide -query 'Rhizobium' | time mefetch -email user@ema.il -proc 2 -mode text -format acc > accessions.txt
0.46s user 0.08s system 0% cpu 5:17.51 total
And another -proc 3 (default):
% esearch -db nucleotide -query 'Rhizobium' | time mefetch -email user@ema.il -proc 3 -mode text -format acc > accessions.txt
0.35s user 0.10s system 0% cpu 2:57.01 total
And -proc 4 (see usage policy):
% esearch -db nucleotide -query 'Rhizobium' | time mefetch -email user@ema.il -proc 4 -mode text -format acc > accessions.txt
0.35s user 0.08s system 0% cpu 1:40.54 total
Results can be returned in the exact order intended by the NCBI server using the -in-order argument. Otherwise, the order is determined by how fast NCBI returns results for each process.
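For example, the accession download above could preserve server ordering like this (a sketch assembled from the arguments described here, not a timed benchmark):

% esearch -db nucleotide -query 'Rhizobium' | mefetch -email user@ema.il -proc 3 -in-order -mode text -format acc > accessions.txt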
The -retmax argument (or chunk size) determines the number of records requested per -proc. By default it is set to 10,000, the maximum number of records per request allowed by the NCBI documentation. Setting -retmax higher than 10,000 will automatically be set back down to 10,000.
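Requesting smaller chunks per process might look like the following sketch (assuming any value up to the 10,000 cap is accepted as written):

% esearch -db nucleotide -query 'Rhizobium' | mefetch -email user@ema.il -proc 3 -retmax 5000 -mode text -format acc > accessions.txt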
By default, -id reads the stdin XML output from esearch. The -id argument can also take a comma-delimited list of ids or a text file of ids. When coupled with the -csv argument, the input can be a csv file with additional argument columns. This is useful for bulk downloads with different positional arguments.
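As an illustration, a hypothetical regions.csv whose columns mirror the ftract output shown below could be supplied directly (a sketch, assuming -id accepts a csv filename when -csv is set):

% cat regions.csv
id,seq_start,seq_stop,strand
KN150849.1,594136,595654,2
% mefetch -db nucleotide -email user@ema.il -csv -id regions.csv -format gb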
ftract allows csv output of different features from NCBI feature tables. The required -feature argument takes one or more comma-separated feature_key:qualifier_key:qualifier_value patterns.
% mefetch -id KN150849 -db nucleotide -email user@ema.il -format ft | ftract --feature rrna:product:16s
id,seq_start,seq_stop,strand
KN150849.1,594136,595654,2
KN150849.1,807985,809503,2
KN150849.1,2227751,2229271,1
And pipe this back into mefetch to download these three regions in genbank format:
% mefetch -id KN150849 -db nucleotide -email user@ema.il -format ft | ftract --feature rrna:product:16s | mefetch -db nucleotide -email crosenth@uw.edu -csv -format gb
And finally, combining all these concepts, return all the Burkholderia gladioli 16s rRNA products in fasta format using the default -proc 3 like this:
% esearch -query 'Burkholderia gladioli AND sequence_from_type[Filter]' -db 'nucleotide' | mefetch -email user@ema.il -format ft | ftract --feature rrna:product:16s | mefetch -db nucleotide -email user@ema.il -csv -format fasta
0.24s user 0.05s system 1% cpu 18.596 total
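Multiple patterns can presumably be combined in a single comma-separated -feature value; the invocation below is a sketch only, assuming 23s products are annotated under an rrna feature key in the same way:

% mefetch -id KN150849 -db nucleotide -email user@ema.il -format ft | ftract --feature rrna:product:16s,rrna:product:23s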
issues
Please use the Issue Tracker(s) available on Github or Bitbucket to report any bugs or feature requests. For all other inquiries email Chris Rosenthal.
license
Copyright (c) 2016 Chris Rosenthal
Released under the GPLv3 License