metadatamapping
A python library to fetch metadata from NCBI and MetaSRA for a list of NCBI accessions and data extraction from ARCHS4
Installation
Simply run the following
pip install metadatamapping
or clone the repository
git clone git@github.com:dmalzl/metadatamapping.git
and run
cd metadatamapping
pip install .
You should then be able to import the package as usual.
Example usage
NCBI sample, experiment, biosample or GEO accessions can be mapped to SRA UIDs using the map_accessions_to_srauids function from the metadata module of the package. The call shown below spawns two processes that concurrently fetch the SRA UIDs for the accessions in batches and write the results to the output file "/path/to/outputfile".
from metadatamapping import metadata
sra_uids = metadata.map_accessions_to_srauids(
    accessions,
    "/path/to/outputfile",
    n_processes = 2
)
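The concurrent batch fetching described above can be sketched roughly as follows. The chunking helper and the batch size are hypothetical and only illustrate the idea, not the library's actual internals.

```python
# Hypothetical sketch: split accessions into batches that worker
# processes would then resolve to SRA UIDs (not library code).
def chunk(items, batch_size):
    """Yield successive batches of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

accessions = ['SRX123456', 'SRX123457', 'SRX123458', 'SRX123459', 'SRX123460']
batches = list(chunk(accessions, 2))
# each batch is fetched as one request, keeping the number of
# round trips to the Entrez API small
```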
The resulting SRA UIDs can then either be used to retrieve all associated accessions from the SRA with the srauids_to_accessions function from the metadata module like so
from metadatamapping import metadata
ncbi_accessions = metadata.srauids_to_accessions(
    sra_uids
)
or link them to BioSample UIDs and then retrieve the associated metadata with the link_sra_to_biosample function from the link module and the biosample_uids_to_metadata function from the metadata module
from metadatamapping import metadata, link
srauids_to_biosampleuids = link.link_sra_to_biosample(
    sra_uids.uid
)
biosample_metadata = metadata.biosample_uids_to_metadata(
    srauids_to_biosampleuids.biosample
)
Finally, we can retrieve normalized metadata for the samples from MetaSRA using the metasra_from_study_id function of the metadata module (note that MetaSRA may not contain data for all of your samples, so the function may return normalized metadata for only a subset of them)
from metadatamapping import metadata
metasra_metadata = metadata.metasra_from_study_id(
    ncbi_accessions.study.unique()
)
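Because MetaSRA may cover only a subset of your studies, it can be useful to check which queried study IDs came back without normalized metadata. A minimal pure-Python sketch (the IDs and variable names here are hypothetical):

```python
# Hypothetical illustration: determine which queried studies
# received no normalized metadata from MetaSRA.
queried_studies = {'SRP100001', 'SRP100002', 'SRP100003'}
returned_studies = {'SRP100001', 'SRP100003'}  # e.g. unique study IDs in the result

missing = queried_studies - returned_studies
# samples from these studies keep only their raw NCBI/GEO metadata
```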
While most of the biosample metadata is also found in GEO entries, some of the metadata provided in GEO (e.g. the treatment protocol) is GEO-exclusive but may contain vital information. Because this data is not retrievable via the Entrez API, we adopted an approach similar to geofetch and download the data from the GEO FTP server. An example usage would be as follows:
from metadatamapping import metadata
import pandas as pd
geo_accessions = pd.DataFrame(
    [
        ('GSM2791352', 'GSE104174'),
        ('GSM2771062', 'GSE103424'),
        ('GSM6271252', 'GSE207049'),
        ('GSM4329764', 'GSE145668,GSE145669'),
        ('GSM5064568', 'GSE166148,GSE166150')
    ],
    columns = ['GSM', 'GSE']
)
geo_metadata = metadata.fetch_geo_metadata(
    geo_accessions,
    '/path/to/outputfile',
    n_processes = 24
)
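Note that a GSM can belong to multiple series, in which case the GSE column holds a comma-separated list (e.g. 'GSE145668,GSE145669'), as in the example above. If one row per series is needed downstream, the list can be expanded; a small sketch in plain Python (variable names are hypothetical):

```python
# Expand (GSM, 'GSE1,GSE2') pairs into one (GSM, GSE) pair per series.
rows = [
    ('GSM4329764', 'GSE145668,GSE145669'),
    ('GSM5064568', 'GSE166148,GSE166150'),
]
expanded = [
    (gsm, gse)
    for gsm, gse_list in rows
    for gse in gse_list.split(',')
]
```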
Additionally, the package provides an interface for parsing the ARCHS4 HDF5 format, located in the archs4 module. It handles parsing of the associated metadata with the get_filtered_sample_metadata function as well as extraction of expression data as AnnData objects with the samples function
from metadatamapping import archs4

archs4_file = "/path/to/archs4.h5"
retain_keys = [
    'geo_accession', 'characteristics_ch1', 'molecule_ch1', 'readsaligned', 'relation',
    'series_id', 'singlecellprobability', 'source_name_ch1', 'title'
]
archs4_metadata = archs4.get_filtered_sample_metadata(
    archs4_file,
    retain_keys
)
archs4_adata = archs4.samples(
    archs4_file,
    dataframe_indexed_by_geo_accessions,
    n_processes = 2
)
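Conceptually, retain_keys restricts the per-sample metadata to the listed fields. A simplified stand-in for what the filtering achieves (an illustration only, not the library's implementation, which reads from HDF5):

```python
# Keep only a whitelist of metadata fields per sample (illustration only;
# field names below are examples of keys found in ARCHS4 metadata).
sample_metadata = {
    'geo_accession': 'GSM2791352',
    'title': 'example sample',
    'extract_protocol_ch1': 'very long protocol text',
}
retain_keys = ['geo_accession', 'title']

filtered = {
    key: value
    for key, value in sample_metadata.items()
    if key in retain_keys
}
```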
For a full demonstration of usage, please refer to the Snakefile in the examples directory, which gives an overview of what the intended usage looks like.
Entrez credentials
metadatamapping retrieves data from the Entrez eUtilities using the biopython interface. By default, the Entrez API only allows 3 requests per second if Entrez.email and Entrez.api_key are not set. Setting these properties raises the limit, which also speeds up the most time-consuming part of the pipeline: the accession -> SRA UID mapping, which relies on eSearch and is invoked with one accession at a time. So please make sure to set the Entrez properties accordingly like so
from Bio import Entrez
Entrez.email = "<user>@<provider>.<domain>"
Entrez.api_key = "<NCBI API key>"
The email is typically the one associated with your NCBI account. The API key can be generated as described here
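For intuition on what the rate limit means in practice: biopython paces the requests internally, and NCBI documents a limit of 10 requests per second once an API key is set. The minimum spacing between consecutive requests under each limit is just the reciprocal of the rate (a toy calculation, not library code):

```python
# Minimum spacing between consecutive Entrez requests for a given rate limit.
def min_interval(requests_per_second):
    """Seconds that must elapse between consecutive requests."""
    return 1.0 / requests_per_second

without_key = min_interval(3)   # no api_key: 3 requests/second
with_key = min_interval(10)     # with email + api_key: 10 requests/second
```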