
Processing and integrating data with genome-scale metabolic models (GEMs)


PipeGEM v0.1.0



PipeGEM is a package for visualizing and analyzing multiple metabolic models. It also allows users to integrate omics data, metabolic tasks, and medium data with GEMs.

The flux analysis functions in the package are based on cobrapy: https://cobrapy.readthedocs.io/en/latest/


How to get PipeGEM

To install directly from PyPI:
pip install pipegem


How to use this package (Python API)

Single model

import pipeGEM as pg
from pipeGEM.utils import load_model

model = load_model("your_model_path")  # cobra.Model
pmodel = pg.Model(name_tag="model_name", 
                  model=model)

# Print out model information
print(pmodel)

# Do and plot pFBA result
flux_analysis = pmodel.do_flux_analysis("pFBA")
flux_analysis.plot(
    rxn_ids=['rxn_a', 'rxn_b'],
    file_name='pfba_flux.png'  # can be None if you don't want to save the figure
    )
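pFBA (parsimonious FBA) runs in two steps: first maximize the objective as in plain FBA, then minimize the total flux while holding the objective at its optimum, which prunes redundant pathways. As a rough illustration of the idea only (this sketch uses scipy directly, not pipeGEM's or cobrapy's API, and the toy network is made up):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 imports A; v2 converts A -> B directly;
# v3/v4 take the longer route A -> C -> B; v5 exports B.
#              v1  v2  v3  v4  v5
S = np.array([[ 1, -1, -1,  0,  0],   # metabolite A
              [ 0,  0,  1, -1,  0],   # metabolite C
              [ 0,  1,  0,  1, -1]])  # metabolite B
bounds = [(0, 10), (0, None), (0, None), (0, None), (0, None)]

# Step 1 (FBA): maximize the export flux v5 (linprog minimizes, so negate).
fba = linprog(c=[0, 0, 0, 0, -1], A_eq=S, b_eq=np.zeros(3),
              bounds=bounds, method="highs")
v5_opt = fba.x[4]

# Step 2 (pFBA): fix v5 at its optimum and minimize the total flux,
# which collapses the redundant two-reaction route (v3, v4).
S2 = np.vstack([S, [0, 0, 0, 0, 1]])
pfba = linprog(c=np.ones(5), A_eq=S2, b_eq=np.append(np.zeros(3), v5_opt),
               bounds=bounds, method="highs")
print(np.round(pfba.x, 3))  # the direct route v1 -> v2 -> v5 carries all flux
```

In the pFBA solution the longer route (v3, v4) drops to zero because it costs one extra unit of flux per unit of export.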

Multiple models

import pipeGEM as pg
from pipeGEM.utils import load_model

model_a1 = load_model("your_model_path_1")
model_a2 = load_model("your_model_path_2")

model_b1 = load_model("your_model_path_3")
model_b2 = load_model("your_model_path_4")

group = pg.Group({
        "group_a": {
            "model_a_dmso": model_a1, 
            "model_a_metformin": model_a2
        },
        "group_b": {
            "model_b_dmso": model_b1, 
            "model_b_metformin": model_b2
        }
    }, 
    name_tag="my_group", 
    treatments={"model_a_dmso": "DMSO", 
                "model_b_dmso": "DMSO",
                "model_a_metformin": "metformin", 
                "model_b_metformin": "metformin"}
)

# Do and plot pFBA result
flux_analysis = group.do_flux_analysis("pFBA")
flux_analysis.plot(rxn_ids=['rxn_a', 'rxn_b'])
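A grouped analysis like the one above lends itself to downstream comparisons in long-format flux tables. Purely as an illustration (the DataFrame below is hand-made, not pipeGEM output, and the flux values are invented), comparing mean flux per treatment with pandas could look like:

```python
import pandas as pd

# Hypothetical long-format flux table; pipeGEM's actual result objects may differ.
fluxes = pd.DataFrame({
    "model":     ["model_a_dmso", "model_a_metformin",
                  "model_b_dmso", "model_b_metformin"],
    "treatment": ["DMSO", "metformin", "DMSO", "metformin"],
    "rxn_a":     [1.2, 0.4, 1.0, 0.5],
    "rxn_b":     [0.8, 0.9, 0.7, 1.1],
})

# Average each reaction's flux within a treatment group.
mean_by_treatment = fluxes.groupby("treatment")[["rxn_a", "rxn_b"]].mean()
print(mean_by_treatment)
```

This is where the treatments mapping passed to pg.Group pays off: it labels each model so that conditions can be contrasted across groups.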

Generate context-specific models

import numpy as np
import pipeGEM as pg
from pipeGEM.utils import load_model
from pipeGEM.data import GeneData, synthesis

# initialize model
mod = pg.Model(name_tag="model_name", 
               model=load_model("your_model_path_1"))

# create dummy transcriptomic data
dummy_data = synthesis.get_syn_gene_data(mod, n_sample=3)

# calculate reaction activity score
gene_data = GeneData(data=dummy_data["sample_0"], # pd.Series or a dict
                     data_transform=lambda x: np.log2(x), # callable
                     absent_expression=-np.inf) # value
mod.add_gene_data(name_or_prefix="sample_0",  # name of the data
                  data=gene_data, 
                  or_operation="nanmax",  # alternative: nansum
                  threshold=-np.inf, 
                  absent_value=-np.inf)

# apply GIMME algorithm on the model
gimme_result = mod.integrate_gene_data(data_name="sample_0", integrator="GIMME", high_exp=5*np.log10(2))
context_specific_gem = gimme_result.result_model
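The or_operation="nanmax" argument above controls how gene-level expression is combined into a reaction activity score when a reaction's gene rule contains OR relations (AND relations are conventionally reduced with a minimum). A minimal, pipeGEM-independent sketch of that convention, with made-up gene names and expression values:

```python
import numpy as np

# log2-transformed expression; NaN marks a gene absent from the data.
expr = {"g1": 3.2, "g2": np.nan, "g3": 1.5, "g4": 5.0}

def or_score(genes, expr):
    # OR: isoenzymes are interchangeable, so take the best-expressed one.
    return np.nanmax([expr.get(g, np.nan) for g in genes])

def and_score(genes, expr):
    # AND: all subunits of a complex are required, so take the weakest.
    return np.nanmin([expr.get(g, np.nan) for g in genes])

# Score for a hypothetical rule "(g1 and g3) or g2": min inside, max outside.
score = np.nanmax([and_score(["g1", "g3"], expr), expr["g2"]])
print(score)  # 1.5
```

With or_operation="nansum" the isoenzyme branches would be added instead, treating parallel enzymes as contributing cumulative capacity.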

Command-Line Interface (CLI) Quick Start

PipeGEM also provides a command-line interface for running predefined pipelines using configuration files.

  1. Generate Template Configurations: Start by generating template TOML configuration files for a specific pipeline (e.g., integration). Replace integration with the desired pipeline name if needed.

    python -m pipeGEM -n template -p integration -o ./configs
    

    This will create a configs directory (if it doesn't exist) containing template .toml files like gene_data_conf.toml, model_conf.toml, etc.

  2. Modify Configurations (Optional): Edit the generated .toml files in the configs directory to specify your input file paths, parameters, and desired settings. For example, in model_conf.toml, you might specify the path to your metabolic model file.

  3. Run a Pipeline: Execute a pipeline using the configuration files. For example, to run the model processing pipeline using the configuration in configs/model_conf.toml:

    python -m pipeGEM -n model_processing -t configs/model_conf.toml
    

    Or, to run the full integration pipeline:

    python -m pipeGEM -n integration \
        -g configs/gene_data_conf.toml \
        -t configs/model_conf.toml \
        -r configs/threshold_conf.toml \
        -m configs/mapping_conf.toml \
        -i configs/integration_conf.toml
    

    Refer to the generated template files and the specific pipeline documentation for details on required configurations.
