

xi-mzidentml-converter


xi-mzidentml-converter processes mzIdentML 1.2.0 and 1.3.0 files with the primary aim of extracting crosslink information. It has three use cases:

  1. to validate mzIdentML files against the criteria given here: https://www.ebi.ac.uk/pride/markdownpage/crosslinking
  2. to extract information on crosslinked residue pairs and output it in a form more easily used by modelling software
  3. to populate the database that is accessed by xiview-api

It uses the pyteomics library (https://pyteomics.readthedocs.io/en/latest/index.html) as the underlying parser for mzIdentML. Results are written to a relational database (PostgreSQL or SQLite) using SQLAlchemy.
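
For orientation, reading identifications with pyteomics directly looks roughly like this; a minimal sketch independent of the converter, with the file name as a placeholder:

from pyteomics import mzid

# 'example.mzid' is a placeholder path to an mzIdentML file.
# mzid.read yields one dict per SpectrumIdentificationResult.
with mzid.read('example.mzid') as reader:
    for result in reader:
        print(result.get('spectrumID'))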

Requirements:

Python 3.10

pipenv

SQLite3 for validation and residue pair extraction; PostgreSQL or SQLite3 for creating the xiview-api database (the instructions below use PostgreSQL).

Installation

Clone the git repository and set up a Python environment, or install via PyPI:

git clone https://github.com/Rappsilber-Laboratory/xi-mzidentml-converter.git
cd xi-mzidentml-converter
pipenv install --python 3.10

PyPI project: https://pypi.org/project/xi-mzidentml-converter/

PyPI installation instructions: https://packaging.python.org/en/latest/tutorials/installing-packages/

Usage

process_dataset.py is the entry point; running it with the -h option lists the available options.

python process_dataset.py -h

1. Validate a dataset

Run process_dataset.py with the -v option to validate a dataset. The argument is the path to a specific mzIdentML file or to a directory containing multiple mzIdentML files, in which case all of them will be validated. To pass, all the peak list files referenced must be in the same directory as the mzIdentML file(s). The converter creates an SQLite database in a temporary folder, which is used in the validation process; the temporary folder can be specified with the -t option.

Examples:

python process_dataset.py -v ~/mydata
python process_dataset.py -v ~/mydata/mymzid.mzid -t ~/mytempdir

The result is written to the console. If the data fails validation but the error message is not informative, please open an issue on the GitHub repository: https://github.com/Rappsilber-Laboratory/xi-mzidentml-converter/issues
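
If you need to validate many datasets in a script, one option is to wrap the CLI with Python's subprocess module; a minimal sketch assuming the commands above (the dataset paths are placeholders):

import subprocess
import sys
from pathlib import Path

# Placeholder list of datasets to validate; -t points at a temp directory.
datasets = ["~/mydata", "~/otherdata/mymzid.mzid"]
for path in datasets:
    proc = subprocess.run(
        [sys.executable, "process_dataset.py", "-v",
         str(Path(path).expanduser()), "-t", "/tmp"],
        capture_output=True, text=True,
    )
    # The validation result is whatever the converter prints to the console.
    print(f"{path}: exit code {proc.returncode}")
    print(proc.stdout)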

2. Extract summary of crosslinked residue pairs

Run process_dataset.py with the --seqsandresiduepairs option to extract a summary of search sequences and crosslinked residue pairs. The output is JSON, written to the console. The argument is the path to an mzIdentML file or a directory containing multiple mzIdentML files, in which case all of them will be processed.

Examples:

python process_dataset.py --seqsandresiduepairs ~/mydata -t ~/mytempdir
python process_dataset.py --seqsandresiduepairs ~/mydata/mymzid.mzid

It can also be accessed programmatically by using the json_sequences_and_residue_pairs(filepath, tmpdir) function in process_dataset.py.
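
A minimal sketch of the programmatic route; the function name and parameters are from process_dataset.py, but the assumption that it returns the JSON the CLI would print is ours, and the paths are placeholders:

from pathlib import Path

from process_dataset import json_sequences_and_residue_pairs

# Placeholder input file and temp directory.
result = json_sequences_and_residue_pairs(
    str(Path("~/mydata/mymzid.mzid").expanduser()), "/tmp"
)
print(result)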

3. Populate the xiview-api database

Create the database

sudo su postgres
psql
create database xiview;
create user xiadmin with login password 'your_password_here';
grant all privileges on database xiview to xiadmin;

Find the pg_hba.conf file in the PostgreSQL installation directory and add a line to allow the xiadmin role to access the database, e.g.

sudo nano /etc/postgresql/13/main/pg_hba.conf

Then add the line: local xiview xiadmin md5

Then restart PostgreSQL:

sudo service postgresql restart

Configure the Python environment for the file parser

Edit the file xi-mzidentml-converter/config/database.ini to point to your PostgreSQL database, e.g. so its content is:

[postgresql]
host=localhost
database=xiview
user=xiadmin
password=your_password_here
port=5432
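
database.ini uses standard INI syntax, so for reference its values can be read with Python's configparser; a minimal sketch (not necessarily how the converter itself loads the file):

import configparser

# Read the [postgresql] section of config/database.ini.
config = configparser.ConfigParser()
config.read("config/database.ini")
db = config["postgresql"]
print(db["host"], db["port"], db["database"], db["user"])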

Create the database schema

Run create_db_schema.py to create the database tables:

python database/create_db_schema.py
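
To check that the schema was created, one option is SQLAlchemy's inspector; a sketch assuming the connection details shown in database.ini above (the password is a placeholder):

from sqlalchemy import create_engine, inspect

# Connection URL assembled from the database.ini values above.
engine = create_engine("postgresql://xiadmin:your_password_here@localhost:5432/xiview")
print(inspect(engine).get_table_names())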

Populate the database

To parse a test dataset:

python process_dataset.py -d ~/PXD038060

The command line options that populate the database are -d, -f and -p; only one of these can be used at a time. The -d option takes a directory to process files from, the -f option takes the path to an FTP directory containing mzIdentML files, and the -p option takes a ProteomeXchange identifier or a list of ProteomeXchange identifiers separated by spaces.

The -i option is the project identifier to use in the database. It will default to the PXD accession or the name of the directory containing the mzIdentML file.
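
For example (PXD038060 is the test accession used above; the second accession and the project identifier are placeholders):

python process_dataset.py -p PXD038060
python process_dataset.py -p PXD038060 PXD000001 -i my_project_id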

To run tests

Make sure the right database user is available:

psql -p 5432 -c "create role ximzid_unittests with password 'ximzid_unittests';"
psql -p 5432 -c 'alter role ximzid_unittests with login;'
psql -p 5432 -c 'alter role ximzid_unittests with createdb;'
psql -p 5432 -c 'GRANT pg_signal_backend TO ximzid_unittests;'

Run the tests:

pipenv run pytest
