
xi-mzidentml-converter


xi-mzidentml-converter processes mzIdentML 1.2.0 and 1.3.0 files with the primary aim of extracting crosslink information. It has three use cases:

  1. to validate mzIdentML files against the criteria given here: https://www.ebi.ac.uk/pride/markdownpage/crosslinking
  2. to extract information on crosslinked residue pairs and output it in a form more easily used by modelling software
  3. to populate the database that is accessed by xiview-api

It uses the pyteomics library (https://pyteomics.readthedocs.io/en/latest/index.html) as the underlying parser for mzIdentML. Results are written into a relational database (PostgreSQL or SQLite) using sqlalchemy.
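For reference, this is roughly what reading an mzIdentML file with pyteomics looks like on its own (the converter adds its crosslink-specific logic on top; the file name is a placeholder):

from pyteomics import mzid

# Iterate over SpectrumIdentificationResult elements of a placeholder file.
with mzid.read("example.mzid") as reader:
    for result in reader:
        # Each item is a dict; spectrumID identifies the source spectrum.
        print(result["spectrumID"], len(result.get("SpectrumIdentificationItem", [])))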

Requirements:

python3.10

pipenv

sqlite3 for validation and residue pair extraction; postgresql or sqlite3 for creating the xiview-api database (the instructions below use postgresql)

Installation

Clone the git repository and set up the python environment, or install via PyPI:

git clone https://github.com/Rappsilber-Laboratory/xi-mzidentml-converter.git
cd xi-mzidentml-converter
pipenv install --python 3.10

PyPI project: https://pypi.org/project/xi-mzidentml-converter/

PyPI installation instructions: https://packaging.python.org/en/latest/tutorials/installing-packages/

Usage

process_dataset.py is the entry point; running it with the -h option lists the available options.

python process_dataset.py -h

1. Validate a dataset

Run process_dataset.py with the -v option to validate a dataset. The argument is the path to a specific mzIdentML file or to a directory containing multiple mzIdentML files, in which case all of them will be validated. To pass, all the peak list files referenced must be in the same directory as the mzIdentML file(s). The converter creates an sqlite database used in the validation process in a temporary folder, which can be specified with the -t option.

Examples:

python process_dataset.py -v ~/mydata
python process_dataset.py -v ~/mydata/mymzid.mzid -t ~/mytempdir

The result is written to the console. If the data fails validation but the error message is not informative, please open an issue on the GitHub repository: https://github.com/Rappsilber-Laboratory/xi-mzidentml-converter/issues

2. Extract summary of crosslinked residue pairs

Run process_dataset.py with the --seqsandresiduepairs option to extract a summary of search sequences and crosslinked residue pairs. The output is JSON written to the console. The argument is the path to an mzIdentML file or to a directory containing multiple mzIdentML files, in which case all of them will be processed.

Examples:

python process_dataset.py --seqsandresiduepairs ~/mydata -t ~/mytempdir
python process_dataset.py --seqsandresiduepairs ~/mydata/mymzid.mzid

It can also be accessed programmatically by using the json_sequences_and_residue_pairs(filepath, tmpdir) function in process_dataset.py, as in the sketch below.
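A minimal sketch of programmatic use (the paths are placeholders, and the assumption that the function returns a JSON string is ours):

import json
import os

from process_dataset import json_sequences_and_residue_pairs

# Placeholder paths; note that ~ must be expanded manually in Python.
filepath = os.path.expanduser("~/mydata/mymzid.mzid")
tmpdir = os.path.expanduser("~/mytempdir")

# Assumption: the function returns a JSON string summarising sequences
# and residue pairs; parse it to work with the data.
summary = json.loads(json_sequences_and_residue_pairs(filepath, tmpdir))
print(sorted(summary.keys()))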

3. Populate the xiview-api database

Create the database

sudo su postgres
psql
create database xiview;
create user xiadmin with login password 'your_password_here';
grant all privileges on database xiview to xiadmin;

Find the pg_hba.conf file in the postgresql installation directory and add a line to allow the xiadmin role to access the database, e.g.

sudo nano /etc/postgresql/13/main/pg_hba.conf

Then add the line: local xiview xiadmin md5

Then restart postgresql:

sudo service postgresql restart

Configure the python environment for the file parser

Edit the file xi-mzidentml-converter/config/database.ini to point to your postgresql database, e.g. so its content is:

[postgresql]
host=localhost
database=xiview
user=xiadmin
password=your_password_here
port=5432
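For illustration, a minimal sketch of reading this file with Python's standard configparser and building a connection URL from it (the converter's own config-loading code may differ):

from configparser import ConfigParser

parser = ConfigParser()
parser.read("config/database.ini")
db = parser["postgresql"]

# Assemble a sqlalchemy-style PostgreSQL URL from the [postgresql] section.
url = f"postgresql://{db['user']}:{db['password']}@{db['host']}:{db['port']}/{db['database']}"
print(url)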

Create the database schema

Run create_db_schema.py to create the database tables:

python database/create_db_schema.py
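For orientation, a hypothetical sketch of how sqlalchemy creates tables from declarative models; the Upload class below is invented for illustration and is not the converter's real schema:

from sqlalchemy import Column, Integer, Text, create_engine
from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    pass

# Hypothetical model for illustration only.
class Upload(Base):
    __tablename__ = "upload"
    id = Column(Integer, primary_key=True)
    project_id = Column(Text)

# Connection details match the database set up above.
engine = create_engine("postgresql://xiadmin:your_password_here@localhost:5432/xiview")
Base.metadata.create_all(engine)  # issues CREATE TABLE for all models known to Base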

Populate the database

To parse a test dataset:

python process_dataset.py -d ~/PXD038060

The command line options that populate the database are -d, -f and -p; only one of these can be used. The -d option takes a directory to process files from, the -f option takes the path to an FTP directory containing mzIdentML files, and the -p option takes a ProteomeXchange identifier or a list of ProteomeXchange identifiers separated by spaces.

The -i option sets the project identifier to use in the database. It defaults to the PXD accession or the name of the directory containing the mzIdentML file.
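Examples (the directory path and project identifier are placeholders):

python process_dataset.py -p PXD038060
python process_dataset.py -d ~/mydata -i myproject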

To run tests

Make sure the required database user is available:

psql -p 5432 -c "create role ximzid_unittests with password 'ximzid_unittests';"
psql -p 5432 -c 'alter role ximzid_unittests with login;'
psql -p 5432 -c 'alter role ximzid_unittests with createdb;'
psql -p 5432 -c 'GRANT pg_signal_backend TO ximzid_unittests;'

Run the tests:

pipenv run pytest

