
🔮🐍 Translator OpenPredict

OpenPredict is a Python package to help serve predictions of biomedical associations as a Translator Reasoner API (TRAPI).

The Translator Reasoner API (TRAPI) defines a standard HTTP API for communicating biomedical questions and answers leveraging the Biolink model.

The package provides:

  • a decorator @trapi_predict to which the developer can pass all the information required to integrate a prediction function into a Translator Reasoner API
  • a TRAPI class to deploy a Translator Reasoner API serving a list of prediction functions decorated with @trapi_predict (a deployment sketch follows the example below)
  • helpers to store your models in a FAIR manner, using tools such as dvc and mlem

Predictions are usually generated from machine learning models (e.g. predicting the diseases treated by a drug), but the decorator can be applied to any generic Python function, as long as the input parameters and the return object follow the expected structure.

In addition to the library, this repository contains the code for the OpenPredict Translator API available at openpredict.semanticscience.org, which serves a few prediction models developed at the Institute of Data Science.

📦️ Use the package

Install

pip install openpredict

Use

The openpredict package provides a decorator @trapi_predict to annotate your functions that generate predictions. The code for this package is in src/openpredict/.

Predictions generated from functions decorated with @trapi_predict can easily be imported in the Translator OpenPredict API, exposed as API endpoints to get predictions from the web, and queried through the Translator Reasoner API (TRAPI).

from openpredict import trapi_predict, PredictOptions, PredictOutput

@trapi_predict(path='/predict',
    name="Get predicted targets for a given entity",
    description="Return the predicted targets for a given entity: drug (DrugBank ID) or disease (OMIM ID), with confidence scores.",
    edges=[
        {
            'subject': 'biolink:Drug',
            'predicate': 'biolink:treats',
            'object': 'biolink:Disease',
        },
        {
            'subject': 'biolink:Disease',
            'predicate': 'biolink:treated_by',
            'object': 'biolink:Drug',
        },
    ],
    nodes={
        "biolink:Disease": {
            "id_prefixes": [
                "OMIM"
            ]
        },
        "biolink:Drug": {
            "id_prefixes": [
                "DRUGBANK"
            ]
        }
    }
)
def get_predictions(input_id: str, options: PredictOptions) -> PredictOutput:
    # Add the code to load the model and compute predictions here
    predictions = {
        "hits": [
            {
                "id": "DB00001",
                "type": "biolink:Drug",
                "score": 0.12345,
                "label": "Lepirudin",
            }
        ],
        "count": 1,
    }
    return predictions
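
Once your prediction function is decorated, it can be deployed through the TRAPI class mentioned above. Below is a minimal deployment sketch: the predict_endpoints argument name, and the assumption that TRAPI behaves as a FastAPI-style ASGI application, are illustrative and not confirmed by this documentation, so check the package documentation for the exact signature:

# main.py: hypothetical deployment sketch (predict_endpoints is an
# assumed argument name, adjust to the actual TRAPI constructor)
from openpredict import TRAPI

app = TRAPI(predict_endpoints=[get_predictions])

You could then serve it with an ASGI server, e.g. uvicorn main:app --port 8808 (assuming the file is named main.py).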

🍪 You can use our cookiecutter template to quickly bootstrap a repository with everything ready to start developing your prediction models, so you can then easily publish and integrate them in the Translator ecosystem.

🌐 The OpenPredict Translator API

In addition to the library, this repository contains the code for the OpenPredict Translator API available at openpredict.semanticscience.org and the prediction models it exposes:

  • the code for the OpenPredict API endpoints in src/trapi/ defines:
    • a TRAPI endpoint returning predictions for the loaded models
    • individual endpoints for each loaded model, taking an input id and returning predicted hits
    • endpoints serving metadata about runs, model evaluations, and features of the OpenPredict model, stored as RDF using the ML Schema ontology
  • various folders for the different prediction models served by the OpenPredict API, available under src/:
    • the OpenPredict drug-disease prediction model in src/openpredict_model/
    • a model to compile the evidence path between a drug and a disease, explaining the predictions of the OpenPredict model, in src/openpredict_evidence_path/
    • a prediction model trained from the Drug Repurposing Knowledge Graph (aka DRKG) in src/drkg_model/

The data used by the models in this repository is versioned using dvc in the data/ folder, and stored on DagsHub at https://dagshub.com/vemonet/translator-openpredict.

Deploy the OpenPredict API locally

Requirements: Python 3.8+ and pip installed

  1. Clone the repository:

    git clone https://github.com/MaastrichtU-IDS/translator-openpredict.git
    cd translator-openpredict
    
  2. Pull the data required to run the models in the data folder with dvc:

    pip install dvc
    dvc pull
    

Start the API in development mode with Docker on http://localhost:8808; the API will automatically reload when you make changes to the code:

docker-compose up api
# Or with the helper script:
./scripts/api.sh

Contributions are welcome! If you wish to help improve OpenPredict, see the instructions to contribute :woman_technologist: for more details on the development workflow.

Test the OpenPredict API

Run the tests locally with Docker:

docker-compose run tests
# Or with the helper script:
./scripts/test.sh

See the TESTING.md file for more details on testing the API.

You can change the entrypoint of the test container to run other commands, such as training a model:

docker-compose run --entrypoint "python src/openpredict_model/train.py train-model" tests
# Or with the helper script:
./scripts/run.sh python src/openpredict_model/train.py train-model

Use the OpenPredict API

The user provides a drug or a disease identifier as a CURIE (e.g. DRUGBANK:DB00394 or OMIM:246300), and chooses a prediction model (only the Predict OMIM-DrugBank classifier is currently implemented).

The API will return predicted targets for the given drug or disease:

  • The potential drugs treating a given disease :pill:
  • The potential diseases a given drug could treat :microbe:

Feel free to try the API at openpredict.semanticscience.org

TRAPI operations

Operations to query OpenPredict using the Translator Reasoner API standards.

Query operation

The /query operation returns the same predictions as the /predict operation, using the ReasonerAPI format used within the Translator project.

The user sends a ReasonerAPI query asking for the predicted targets given a source and the relation to predict. The query is a graph with nodes and edges defined in JSON, and uses classes from the Biolink model.

You can use the default TRAPI query of the OpenPredict /query operation to try a working example.

Example of TRAPI query to retrieve drugs similar to a specific drug:

{
    "message": {
        "query_graph": {
            "edges": {
                "e01": {
                    "object": "n1",
                    "predicates": [
                        "biolink:similar_to"
                    ],
                    "subject": "n0"
                }
            },
            "nodes": {
                "n0": {
                    "categories": [
                        "biolink:Drug"
                    ],
                    "ids": [
                        "DRUGBANK:DB00394"
                    ]
                },
                "n1": {
                    "categories": [
                        "biolink:Drug"
                    ]
                }
            }
        }
    },
    "query_options": {
        "n_results": 3
    }
}
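
You can send this query to the public endpoint with Python and the requests library. A minimal sketch, assuming only the standard TRAPI response structure where answers are listed under message.results:

import requests

# The TRAPI query from the example above
query = {
    "message": {
        "query_graph": {
            "edges": {"e01": {"subject": "n0", "predicates": ["biolink:similar_to"], "object": "n1"}},
            "nodes": {
                "n0": {"categories": ["biolink:Drug"], "ids": ["DRUGBANK:DB00394"]},
                "n1": {"categories": ["biolink:Drug"]},
            },
        }
    },
    "query_options": {"n_results": 3},
}

# POST the query to the OpenPredict /query endpoint
response = requests.post("https://openpredict.semanticscience.org/query", json=query, timeout=300)
response.raise_for_status()
results = response.json()["message"]["results"]
print(f"Retrieved {len(results)} results")
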
Predicates operation

The /predicates operation will return the entities and relations provided by this API in a JSON object (following the ReasonerAPI specifications).

Try it at https://openpredict.semanticscience.org/predicates
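
For example, with Python and the requests library (a minimal sketch against the public endpoint above):

import requests

# Fetch the entities and relations supported by the OpenPredict API
response = requests.get("https://openpredict.semanticscience.org/predicates", timeout=60)
response.raise_for_status()
print(response.json())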

Notebook examples :notebook_with_decorative_cover:

We provide Jupyter Notebooks with examples to use the OpenPredict API:

  1. Query the OpenPredict API
  2. Generate embeddings with pyRDF2Vec, and import them in the OpenPredict API

Add embedding :station:

The default baseline model is openpredict_baseline. You can choose the base model when you post new embeddings using the /embeddings call. Then the OpenPredict API will:

  1. add embeddings to the provided model
  2. train the model with the new embeddings
  3. store the features and model using a unique ID for the run (e.g. 7621843c-1f5f-11eb-85ae-48a472db7414)

Once the embeddings have been added, you can find the models previously generated (including openpredict_baseline), and use them as the base model when you ask for predictions or add new embeddings.
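
A hypothetical sketch of posting embeddings with Python is shown below; the query parameters (emb_name, model_id) and the multipart field name are illustrative assumptions, not confirmed by this documentation, so check the OpenAPI specification of the /embeddings call for the actual schema:

import requests

# Hypothetical call: parameter and field names are assumptions,
# check the API's OpenAPI documentation for the actual schema
with open("my_embeddings.json", "rb") as f:
    response = requests.post(
        "https://openpredict.semanticscience.org/embeddings",
        params={"emb_name": "my_embeddings", "model_id": "openpredict_baseline"},
        files={"embedding_file": f},
        timeout=600,
    )
response.raise_for_status()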

Predict operation :crystal_ball:

Use this operation if you just want to easily retrieve predictions for a given entity. The /predict operation takes four parameters (one required):

  • A drug_id to get predicted diseases it could treat (e.g. DRUGBANK:DB00394)
    • OR a disease_id to get predicted drugs it could be treated with (e.g. OMIM:246300)
  • The prediction model to use (defaults to Predict OMIM-DrugBank)
  • The minimum score of the returned predictions, from 0 to 1 (optional)
  • The limit of results to return, starting from the highest score, e.g. 42 (optional)

The API will return the list of predicted targets for the given entity; the labels are resolved using the Translator Name Resolver API.

Try it at https://openpredict.semanticscience.org/predict?drug_id=DRUGBANK:DB00394
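
The same call with Python and the requests library, a minimal sketch assuming the response follows the hits/count structure shown in the decorator example above:

import requests

# Retrieve predicted diseases that DRUGBANK:DB00394 could treat
response = requests.get(
    "https://openpredict.semanticscience.org/predict",
    params={"drug_id": "DRUGBANK:DB00394"},
    timeout=300,
)
response.raise_for_status()
predictions = response.json()
for hit in predictions["hits"]:
    print(hit["id"], hit.get("label"), hit["score"])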


More about the data model :minidisc:

Diagram of the data model used for OpenPredict, based on the ML Schema ontology (mls):

OpenPredict datamodel


Translator application

Service Summary

Query for drug-disease pairs predicted from pre-computed sets of graph embeddings.

Add new embeddings to improve the predictive models, with versioning and scoring of the models.

Component List

API component

  1. Component Name: OpenPredict API

  2. Component Description: Python API to serve pre-computed sets of drug-disease pair predictions from graph embeddings

  3. GitHub Repository URL: https://github.com/MaastrichtU-IDS/translator-openpredict

  4. Component Framework: Knowledge Provider

  5. System requirements

    5.1. Specific OS and version if required: Python 3.8

    5.2. CPU/Memory (for CI, TEST and PROD): 32 CPUs and 32 GB of memory (to be confirmed)

    5.3. Disk size/IO throughput (for CI, TEST and PROD): 20 GB (to be confirmed)

    5.4. Firewall policies (does the team need access to infrastructure components?): the NodeNormalization API at https://nodenormalization-sri.renci.org

  6. External Dependencies (any components other than the current one)

    6.1. External storage solution: Models and database are stored in /data/openpredict in the Docker container

  7. Docker application:

    7.1. Path to the Dockerfile: Dockerfile

    7.2. Docker build command:

    docker build -t ghcr.io/maastrichtu-ids/openpredict-api .
    

    7.3. Docker run command:

    Replace ${PERSISTENT_STORAGE} with the path to persistent storage on host:

    docker run -d -v ${PERSISTENT_STORAGE}:/data/openpredict -p 8808:8808 ghcr.io/maastrichtu-ids/openpredict-api
    
  8. Logs of the application

    8.1. Format of the logs: TODO

Acknowledgments

Funded by the NIH NCATS Translator project.
