A comprehensive Python module for protein data management, designed for streamlined integration and processing of protein information from UniProt and PDB, with support for concurrent data fetching, robust error handling, and database synchronization.

Project description

Protein Information System (PIS)

Protein Information System (PIS) is an integrated biological information system focused on extracting, processing, and managing protein-related data. PIS consolidates data from UniProt, PDB, and GOA, enabling the efficient retrieval and organization of protein sequences, structures, and functional annotations.

The primary goal of PIS is to provide a robust framework for large-scale protein data extraction, facilitating downstream functional analysis and annotation transfer. The system is designed for high-performance computing (HPC) environments, ensuring scalability and efficiency.

📈 Current State of the Project

FANTASIA Redesign

🔄 FANTASIA has been completely redesigned and is now available at:
FANTASIA Repository
This new version is a pipeline for annotating GO (Gene Ontology) terms in protein sequence files (FASTAs). The redesign focuses on long-term support, updated dependencies, and improved integration with High-Performance Computing (HPC) environments.

Stable Version of the Information System

🛠️ A stable version of the information system for working with UniProt and annotation transfer is available at:
Zenodo Stable Release
This version serves as a reference implementation and provides a consistent environment for annotation transfer tasks.

Prerequisites

  • Python 3.11.6
  • RabbitMQ
  • PostgreSQL with the pgVector extension
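
To install the package itself, a standard pip install should work; the distribution name below is taken from the Download files section further down (PyPI treats underscores and hyphens in package names interchangeably):

pip install protein-metamorphisms-is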

Setup Instructions

1. Install Docker

Ensure Docker is installed on your system. If it’s not, you can download it from the official Docker website.

2. Start Required Services

Both PostgreSQL and RabbitMQ must be running. Start the PostgreSQL container first with the command below (RabbitMQ is covered in step 5). The POSTGRES_USER and POSTGRES_PASSWORD values are example credentials; replace them with your own.

docker run -d --name pgvectorsql \
    --shm-size=64g \
    -e POSTGRES_USER=usuario \
    -e POSTGRES_PASSWORD=clave \
    -e POSTGRES_DB=BioData \
    -p 5432:5432 \
    pgvector/pgvector:pg16 \
    -c shared_buffers=16GB \
    -c effective_cache_size=32GB \
    -c work_mem=64MB
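
Depending on the image version, the pgVector extension may still need to be enabled once per database. A quick way to do this, using the container name, database, and credentials from the command above:

docker exec -it pgvectorsql \
    psql -U usuario -d BioData -c "CREATE EXTENSION IF NOT EXISTS vector;"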

3. PostgreSQL Configuration

The configuration parameters provided above have been optimized for a machine with 128GB of RAM and 32 CPU cores, allowing up to 20 concurrent workers. These settings enhance PostgreSQL’s performance when handling large datasets and computationally intensive queries.

  • --shm-size=64g: Allocates 64GB of shared memory to the container, preventing PostgreSQL from running out of memory in high-performance environments.
  • -c shared_buffers=16GB: Allocates 16GB of RAM for PostgreSQL’s shared memory buffers. This should typically be 25-40% of total system memory.
  • -c effective_cache_size=32GB: Sets PostgreSQL’s estimated available memory for disk caching to 32GB. This helps the query planner make better decisions.
  • -c work_mem=64MB: Allows up to 64MB of memory per sort or hash operation. Since each of the concurrent workers can run such operations at the same time, total usage can be a multiple of this value, which makes the setting crucial under parallel query execution.

4. (Optional) Connect to the Database

You can use pgAdmin 4, a graphical interface for managing and interacting with PostgreSQL databases, or any other SQL client.
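
For example, a quick connection check with the psql command-line client, using the port and credentials from the container above:

psql -h localhost -p 5432 -U usuario -d BioData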

5. Set Up RabbitMQ

Start a RabbitMQ container using the command below:

docker run -d --name rabbitmq \
    -p 15672:15672 \
    -p 5672:5672 \
    rabbitmq:management

6. (Optional) Manage RabbitMQ

Once RabbitMQ is running, you can access its management interface at http://localhost:15672 (the default login is guest / guest).
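
You can also verify the broker from the command line:

docker exec rabbitmq rabbitmqctl status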


Get started:

To execute the full extraction process, simply run:

python main.py

This command will trigger the complete workflow, starting from the initial data preprocessing stages and continuing through to the final data organization and storage.

Customizing the Workflow:

You can customize the sequence of tasks executed by modifying main.py or adjusting the relevant parameters in the config.yaml file. This allows you to tailor the extraction process to meet specific research needs or to experiment with different data processing configurations.
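
As a rough illustration only, a config.yaml for this kind of setup might group the service endpoints and worker settings along these lines. The key names here are hypothetical, not the actual schema; consult the config.yaml shipped with the project:

# Hypothetical sketch only – key names are illustrative, not the real schema.
database:
  host: localhost
  port: 5432
  user: usuario
  password: clave
  name: BioData
rabbitmq:
  host: localhost
  port: 5672
max_workers: 20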

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

protein_metamorphisms_is-3.1.1.tar.gz (52.0 kB, details below)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

protein_metamorphisms_is-3.1.1-py3-none-any.whl (82.0 kB, details below)

Uploaded Python 3

File details

Details for the file protein_metamorphisms_is-3.1.1.tar.gz.

File metadata

  • Download URL: protein_metamorphisms_is-3.1.1.tar.gz
  • Upload date:
  • Size: 52.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.4.0 CPython/3.10.16 Linux/6.8.0-1021-azure

File hashes

Hashes for protein_metamorphisms_is-3.1.1.tar.gz

  • SHA256: f5e9aba55d3de057f7d08c0c0f03f9550e73950106bdeb21819b9c8ebf99d82d
  • MD5: 71fff46cabf4c43046a83591fb265a32
  • BLAKE2b-256: 4975c4a34a104e0fb1578d6ace7fab257b6a893233619ceb73e0ce3920d1431c

See more details on using hashes here.

File details

Details for the file protein_metamorphisms_is-3.1.1-py3-none-any.whl.

File metadata

  • Download URL: protein_metamorphisms_is-3.1.1-py3-none-any.whl
  • Size: 82.0 kB
  • Tags: Python 3

File hashes

Hashes for protein_metamorphisms_is-3.1.1-py3-none-any.whl

  • SHA256: d9fc0c9414228e8cbb95c6e212b3a6375e2e9480bcd674aa2c3518955f07ef81
  • MD5: c083e8d71d93ee5217b09fcbd282c388
  • BLAKE2b-256: 8151951e001447deddbaff76acf4fdd1ae6439e2172d7448321f42c4407522cd

See more details on using hashes here.
