
An API to compute and serve predictions of biomedical concepts associations via OpenAPI for the NCATS Translator project



OpenPredict is a Python library and API to train and serve predicted associations between biomedical entities (e.g. a disease treated by a drug).

Metadata about runs, model evaluations, and features is stored using the ML Schema ontology in an RDF triplestore (such as Ontotext GraphDB or Virtuoso).

Access the Translator OpenPredict API at https://openpredict.semanticscience.org 🔮🐍

Deploy the OpenPredict API locally :woman_technologist:

Requirements:

  • Python 3.6+ and pip installed

Install from the source code :inbox_tray:

Clone the repository:

git clone https://github.com/MaastrichtU-IDS/translator-openpredict.git
cd translator-openpredict

Install openpredict from the source code; the package will be automatically updated when the files change locally :arrows_counterclockwise:

pip3 install -e .

Optional: isolate with a Virtual Environment

If you face conflicts with already installed packages, you might want to use a virtual environment to isolate the installation in the current folder before installing OpenPredict:

# Create the virtual environment folder in your workspace
python3 -m venv .venv
# Activate it using a script in the created folder
source .venv/bin/activate

On Windows you might also need to install Visual Studio C++ 14 Build Tools (required for numpy)

Start the OpenPredict API :rocket:

Start the OpenPredict API locally on http://localhost:8808:

openpredict start-api

By default, all data (RDF metadata, and the features and models of each run) are stored in the data/ folder of the directory where you ran the openpredict command.
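To check that the local API is up and answering, you can query one of its endpoints from Python. A minimal sketch using only the standard library, against the /predicates endpoint described below:

```python
from urllib.error import URLError
from urllib.request import urlopen

def api_is_up(base_url: str, timeout: int = 5) -> bool:
    """Return True if the OpenPredict API answers on its /predicates endpoint."""
    try:
        with urlopen(f"{base_url}/predicates", timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        return False

# Once the API is started, api_is_up("http://localhost:8808") should return True
```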

Contributions are welcome! If you wish to help improve OpenPredict, see the instructions to contribute :woman_technologist:

Reset your local OpenPredict data :x:

You can easily reset the data of your local OpenPredict deployment by deleting the data/ folder and restarting the OpenPredict API:

rm -rf data/

If you are working on improving OpenPredict, you can explore additional documentation to deploy the OpenPredict API locally or with Docker.


Use the API :mailbox_with_mail:

The user provides a drug or a disease identifier as a CURIE (e.g. DRUGBANK:DB00394 or OMIM:246300), and chooses a prediction model (only the Predict OMIM-DrugBank classifier is currently implemented).

The API will return predicted targets for the given entity, such as:

  • The potential drugs treating a given disease
  • The potential diseases a given drug could treat

Feel free to try the API at openpredict.semanticscience.org

Notebooks examples

We provide Jupyter Notebooks with examples to use the OpenPredict API:

  1. Query the OpenPredict API
  2. Generate embeddings with pyRDF2Vec, and import them in the OpenPredict API

Add embedding

The default baseline model is openpredict-baseline-omim-drugbank. You can choose the base model when you post new embeddings using the /embeddings call. Then the OpenPredict API will:

  1. add embeddings to the provided model
  2. train the model with the new embeddings
  3. store the features and model using a unique ID for the run (e.g. 7621843c-1f5f-11eb-85ae-48a472db7414)

Once the embeddings have been added, you can list the previously generated models (including openpredict-baseline-omim-drugbank) and use them as the base model when requesting predictions or adding new embeddings.
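As a rough sketch, a new embeddings run could be posted like this. Note that the field names below (model_id, emb_name, and the upload field) are hypothetical assumptions; the authoritative signature of the /embeddings operation is in the API's OpenAPI documentation:

```python
from urllib.parse import urlencode

BASE_URL = "https://openpredict.semanticscience.org"

# Hypothetical query parameters -- names are assumptions, check the OpenAPI docs
params = {
    "model_id": "openpredict-baseline-omim-drugbank",  # base model to retrain
    "emb_name": "my_embeddings",                       # label for the new run
}
url = f"{BASE_URL}/embeddings?{urlencode(params)}"

# The embeddings file itself would then be sent as a multipart upload,
# e.g. with the `requests` library (field name is also an assumption):
#   requests.post(url, files={"embedding_file": open("embeddings.json", "rb")})
```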

Predict operation

Use this operation if you just want to easily retrieve predictions for a given entity. The /predict operation takes 4 parameters (1 required):

  • A drug_id to get predicted diseases it could treat (e.g. DRUGBANK:DB00394)
    • OR a disease_id to get predicted drugs it could be treated with (e.g. OMIM:246300)
  • The prediction model to use (defaults to Predict OMIM-DrugBank)
  • The minimum score of the returned predictions, from 0 to 1 (optional)
  • The limit of results to return, starting from the highest score, e.g. 42 (optional)

The API will return the list of predicted targets for the given entity; the labels are resolved using the Translator Name Resolver API:

{
  "count": 300,
  "hits": [
    {
      "score": 0.8361061489249737,
      "id": "OMIM:246300",
      "label": "leprosy, susceptibility to, 3",
      "type": "disease"
    }
  ]
}

Try it at https://openpredict.semanticscience.org/predict?drug_id=DRUGBANK:DB00394
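The same call can be made from Python with the standard library; a minimal sketch, showing only the documented drug_id parameter (the score and limit parameters can be added to the URL the same way):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE_URL = "https://openpredict.semanticscience.org"

def build_predict_url(base_url: str, **params: str) -> str:
    """Build a /predict URL with the given query parameters."""
    return f"{base_url}/predict?{urlencode(params)}"

url = build_predict_url(BASE_URL, drug_id="DRUGBANK:DB00394")
# Uncomment to call the live API and print the top hit:
# print(json.load(urlopen(url))["hits"][0]["label"])
```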

Query operation

The /query operation will return the same predictions as the /predict operation, in the ReasonerAPI format used within the Translator project.

The user sends a ReasonerAPI query asking for the predicted targets given a source entity and the relation to predict. The query is a graph with nodes and edges defined in JSON, using classes from the BioLink model.

See this ReasonerAPI query example:

{
  "message": {
    "query_graph": {
      "edges": [
        {
          "id": "e00",
          "source_id": "n00",
          "target_id": "n01",
          "type": "treated_by"
        }
      ],
      "nodes": [
        {
          "curie": "DRUGBANK:DB00394",
          "id": "n00",
          "type": "drug"
        },
        {
          "id": "n01",
          "type": "disease"
        }
      ]
    }
  }
}
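The query above can be posted to the /query operation with Python's standard library; a minimal sketch:

```python
import json
from urllib.request import Request, urlopen

BASE_URL = "https://openpredict.semanticscience.org"

# The ReasonerAPI query from the example above
query = {
    "message": {
        "query_graph": {
            "edges": [
                {"id": "e00", "source_id": "n00", "target_id": "n01",
                 "type": "treated_by"}
            ],
            "nodes": [
                {"curie": "DRUGBANK:DB00394", "id": "n00", "type": "drug"},
                {"id": "n01", "type": "disease"},
            ],
        }
    }
}

req = Request(
    f"{BASE_URL}/query",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment to send the query to the live API:
# results = json.load(urlopen(req))
```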

Predicates operation

The /predicates operation will return the entities and relations provided by this API in a JSON object (following the ReasonerAPI specifications).

Try it at https://openpredict.semanticscience.org/predicates


More about the data model

See the ML Schema documentation for more details on the data model.

OpenPredict datamodel


Acknowledgments

Funded by the NIH NCATS Translator project.
