An API to compute and serve predictions of biomedical concepts associations via OpenAPI for the NCATS Translator project
OpenPredict is a Python library and API to train and serve predictions of biomedical entity associations (e.g. diseases treated by a drug).
Metadata about runs, model evaluations, and features is stored as RDF using the ML Schema ontology.
Access the Translator OpenPredict API at https://openpredict.semanticscience.org 🔮🐍
You can use this API to retrieve predictions for drug/disease, or add new embeddings to improve the model.
Deploy the OpenPredict API locally :woman_technologist:
Requirements: Python 3.6+ and `pip` installed.

You can install the `openpredict` Python package with `pip` to run the OpenPredict API on your machine, to test new embeddings or improve the library.
We currently recommend installing from the `master` branch of the source code to get the latest version of OpenPredict, but we also regularly publish the `openpredict` package to PyPI: https://pypi.org/project/openpredict
With Docker from the source code :whale:
Clone the repository:
git clone https://github.com/MaastrichtU-IDS/translator-openpredict.git
cd translator-openpredict
Start the API in development mode on http://localhost:8808:
docker-compose up
By default all data is stored in the `data/` folder of the directory where you ran the `openpredict` command (RDF metadata, features, and models of each run).
Contributions are welcome! If you wish to help improve OpenPredict, see the instructions to contribute :woman_technologist:
You can use the `openpredict` command in the Docker container, for example to re-train the baseline model:
docker-compose exec api openpredict train-model --model openpredict-baseline-omim-drugbank
Reset your local OpenPredict data :wastebasket:
You can easily reset the data of your local OpenPredict deployment by deleting the `data/` folder and restarting the OpenPredict API:
rm -rf data/
If you are working on improving OpenPredict, you can explore additional documentation to deploy the OpenPredict API locally or with Docker.
Deploy in production
docker-compose -f docker-compose.prod.yml up --build -d
Test the OpenPredict API
See the `TESTING.md` file for more details on testing the API.
Use the API :mailbox_with_mail:
The user provides a drug or disease identifier as a CURIE (e.g. DRUGBANK:DB00394, or OMIM:246300), and chooses a prediction model (only the `Predict OMIM-DrugBank` classifier is currently implemented).
The API will return predicted targets for the given drug or disease:
- The potential drugs treating a given disease :pill:
- The potential diseases a given drug could treat :microbe:
Feel free to try the API at openpredict.semanticscience.org
TRAPI operations
Operations to query OpenPredict using the Translator Reasoner API standards.
Query operation
The `/query` operation will return the same predictions as the `/predict` operation, using the ReasonerAPI format used within the Translator project.
The user sends a ReasonerAPI query asking for the predicted targets given a source and the relation to predict. The query is a graph with nodes and edges defined in JSON, using classes from the Biolink Model.
You can use the default TRAPI query of the OpenPredict `/query` operation to try a working example.
Example of TRAPI query to retrieve drugs similar to a specific drug:
```json
{
  "message": {
    "query_graph": {
      "edges": {
        "e01": {
          "object": "n1",
          "predicates": [
            "biolink:similar_to"
          ],
          "subject": "n0"
        }
      },
      "nodes": {
        "n0": {
          "categories": [
            "biolink:Drug"
          ],
          "ids": [
            "DRUGBANK:DB00394"
          ]
        },
        "n1": {
          "categories": [
            "biolink:Drug"
          ]
        }
      }
    }
  },
  "query_options": {
    "n_results": 3
  }
}
```
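The query above can also be assembled programmatically. Here is a minimal Python sketch (the helper name `build_similarity_query` is ours, not part of the API; sending the request requires the third-party `requests` package and network access, so that part is shown commented out):

```python
def build_similarity_query(drug_id: str, n_results: int = 3) -> dict:
    """Build a TRAPI query asking for drugs similar to the given drug CURIE."""
    return {
        "message": {
            "query_graph": {
                "edges": {
                    "e01": {
                        "subject": "n0",
                        "object": "n1",
                        "predicates": ["biolink:similar_to"],
                    }
                },
                "nodes": {
                    # n0 is the input drug, n1 the drugs to predict
                    "n0": {"categories": ["biolink:Drug"], "ids": [drug_id]},
                    "n1": {"categories": ["biolink:Drug"]},
                },
            }
        },
        "query_options": {"n_results": n_results},
    }

# To send it to the OpenPredict /query operation:
# import requests
# response = requests.post(
#     "https://openpredict.semanticscience.org/query",
#     json=build_similarity_query("DRUGBANK:DB00394"),
# )
# print(response.json())
```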
Predicates operation
The `/predicates` operation will return the entities and relations provided by this API as a JSON object (following the ReasonerAPI specifications).
Try it at https://openpredict.semanticscience.org/predicates
Notebooks examples :notebook_with_decorative_cover:
We provide Jupyter Notebooks with examples to use the OpenPredict API:
- Query the OpenPredict API
- Generate embeddings with pyRDF2Vec, and import them in the OpenPredict API
Add embeddings :station:
The default baseline model is `openpredict-baseline-omim-drugbank`. You can choose the base model when you post new embeddings using the `/embeddings` call. Then the OpenPredict API will:
- add embeddings to the provided model
- train the model with the new embeddings
- store the features and model using a unique ID for the run (e.g. `7621843c-1f5f-11eb-85ae-48a472db7414`)
Once the embeddings have been added, you can find the previously generated models (including `openpredict-baseline-omim-drugbank`) and use them as base models when you request predictions or add new embeddings.
Predict operation :crystal_ball:
Use this operation if you just want to easily retrieve predictions for a given entity. The `/predict` operation takes 4 parameters (1 required):
- A `drug_id` to get predicted diseases it could treat (e.g. `DRUGBANK:DB00394`)
- OR a `disease_id` to get predicted drugs it could be treated with (e.g. `OMIM:246300`)
- The prediction model to use (defaults to `Predict OMIM-DrugBank`)
- The minimum score of the returned predictions, from 0 to 1 (optional)
- The limit of results to return, starting from the highest score, e.g. 42 (optional)
The API will return the list of predicted targets for the given entity; the labels are resolved using the Translator Name Resolver API.
Try it at https://openpredict.semanticscience.org/predict?drug_id=DRUGBANK:DB00394
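A URL like the one above can be built with only the Python standard library. The helper name is ours, and apart from `drug_id` and `disease_id` the query-parameter names below are assumptions for illustration; check the OpenAPI documentation for the exact names:

```python
from urllib.parse import urlencode

API_URL = "https://openpredict.semanticscience.org"

def predict_url(drug_id=None, disease_id=None, model=None,
                min_score=None, n_results=None):
    """Build a /predict request URL from the parameters described above.

    Exactly one of drug_id or disease_id must be given.
    """
    if (drug_id is None) == (disease_id is None):
        raise ValueError("Provide either a drug_id or a disease_id")
    params = {
        "drug_id": drug_id,
        "disease_id": disease_id,
        "model_id": model,        # assumed parameter name
        "min_score": min_score,   # assumed parameter name
        "n_results": n_results,   # assumed parameter name
    }
    # Drop the optional parameters that were not provided
    params = {k: v for k, v in params.items() if v is not None}
    return f"{API_URL}/predict?{urlencode(params)}"
```

For example, `predict_url(drug_id="DRUGBANK:DB00394")` reproduces the URL shown above, with the CURIE colon percent-encoded.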
More about the data model :minidisc:
- The gold standard for drug-disease indications has been retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3159979
- Metadata about runs, model evaluations, and features is stored as RDF using the ML Schema ontology.
- See the ML Schema documentation for more details on the data model.
Diagram of the data model used for OpenPredict, based on the ML Schema ontology (`mls`):
Translator application
Service Summary
Query for drug-disease pairs predicted from pre-computed sets of graph embeddings.
Add new embeddings to improve the predictive models, with versioning and scoring of the models.
Component List
API component
- Component Name: OpenPredict API
- Component Description: Python API to serve pre-computed sets of drug-disease pair predictions from graph embeddings
- GitHub Repository URL: https://github.com/MaastrichtU-IDS/translator-openpredict
- Component Framework: Knowledge Provider
- System requirements
  - 5.1. Specific OS and version if required: Python 3.8
  - 5.2. CPU/Memory (for CI, TEST and PROD): 32 CPUs and 32 GB memory?
  - 5.3. Disk size/IO throughput (for CI, TEST and PROD): 20 GB?
  - 5.4. Firewall policies: does the team need access to infrastructure components? The NodeNormalization API https://nodenormalization-sri.renci.org
- External Dependencies (any components other than current one)
  - 6.1. External storage solution: models and database are stored in `/data/openpredict` in the Docker container
- Docker application
  - 7.1. Path to the Dockerfile: `Dockerfile`
  - 7.2. Docker build command: `docker build -t ghcr.io/maastrichtu-ids/openpredict-api .`
  - 7.3. Docker run command (replace `${PERSISTENT_STORAGE}` with the path to persistent storage on the host): `docker run -d -v ${PERSISTENT_STORAGE}:/data/openpredict -p 8808:8808 ghcr.io/maastrichtu-ids/openpredict-api`
- Logs of the application
  - 9.2. Format of the logs: TODO
Acknowledgments
- This service has been built from the fair-workflows/openpredict project.
- Predictions made using the PREDICT method.
- Service funded by the NIH NCATS Translator project.