A library to build and deploy FAIR metrics tests in Python, using RDFLib and FastAPI.
☑️ FAIR testing API
fair-test is a library to build and deploy FAIR metrics tests APIs in Python, supporting the specifications used by the FAIRMetrics working group. It aims to enable Python developers to easily write and deploy FAIR metric test functions that can be queried by multiple FAIR evaluation services, such as FAIR enough and the FAIRsharing FAIR Evaluator.
Feel free to create an issue, or send a pull request, if you are facing issues or would like to see a feature implemented.
🧑🏫 How it works
The user defines and registers custom FAIR metrics tests in separate files in a specific folder (the `metrics` folder by default), and starts the API.
The endpoint is CORS enabled by default.
Built with RDFLib and FastAPI. Tested for Python 3.7, 3.8 and 3.9.
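With the default settings, a project might be laid out as follows (the file names under `metrics/` are illustrative, not required):

```
my-fair-test-api/
├── main.py            # declares the FairTestAPI app
└── metrics/           # one file per FAIR metric test
    ├── a1_my_test.py
    └── f1_other_test.py
```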
📥 Install the package
Install the package from PyPI:

```bash
pip install fair-test
```
🐍 Build a FAIR metrics test API
Check out the example folder for a complete working app example to get started, including a docker deployment. A good way to create a new FAIR testing API is to copy this example folder and start from it.
📝 Define the API
Create a `main.py` file to declare the API:

```python
from fair_test import FairTestAPI, settings

app = FairTestAPI(
    title='FAIR Metrics tests API',
    metrics_folder_path='metrics',
    description="""FAIR Metrics tests API""",
    license_info={
        "name": "MIT license",
        "url": "https://opensource.org/licenses/MIT",
    },
    contact={
        "name": settings.CONTACT_NAME,
        "email": settings.CONTACT_EMAIL,
        "url": settings.CONTACT_URL,
        "x-id": settings.CONTACT_ORCID,
    },
)
```
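The `settings` object reads its values from the environment. Assuming the variable names mirror the attributes used above (an assumption, check the library's settings module for the exact names), you could provide them in a `.env` file or your deployment environment:

```bash
CONTACT_NAME="Firstname Lastname"
CONTACT_EMAIL="contact@example.org"
CONTACT_URL="https://github.com/your-org/your-fair-test-api"
CONTACT_ORCID="0000-0000-0000-0000"
```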
📝 Define a FAIR metrics test
Create a `metrics/a1_my_test.py` file in your project folder with your custom metrics test:

```python
from fair_test import FairTest


class MetricTest(FairTest):
    metric_path = 'a1-check-something'
    applies_to_principle = 'A1'
    title = 'Check something'
    description = """Test something"""
    author = 'https://orcid.org/0000-0000-0000-0000'
    metric_version = '0.1.0'

    def evaluate(self):
        self.info(f'Checking something for {self.subject}')
        g = self.getRDF(self.subject)
        if len(g) > 0:
            self.success(f'{len(g)} triples found, test successful')
        else:
            self.failure('No triples found, test failed')
        return self.response()
```
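The `evaluate` flow above boils down to: log progress, record a success or failure (which determines the score), and return the assembled response. Here is a self-contained toy sketch of that bookkeeping, purely to illustrate the pattern (this is NOT the real `fair_test.FairTest` class, whose API is richer):

```python
# Toy sketch of the evaluate() bookkeeping pattern.
# NOT the real fair_test.FairTest API: class and field names here are illustrative.
class ToyTest:
    def __init__(self, subject):
        self.subject = subject  # the resource URL being evaluated
        self.score = 0
        self.logs = []

    def info(self, msg):
        self.logs.append(f"INFO: {msg}")

    def success(self, msg):
        # Recording a success sets the score for this metric test
        self.logs.append(f"SUCCESS: {msg}")
        self.score = 1

    def failure(self, msg):
        self.logs.append(f"FAILURE: {msg}")
        self.score = 0

    def response(self):
        # The response bundles the subject, score, and accumulated logs
        return {"subject": self.subject, "score": self.score, "comment": self.logs}

    def evaluate(self, triples_found):
        self.info(f"Checking something for {self.subject}")
        if triples_found > 0:
            self.success(f"{triples_found} triples found, test successful")
        else:
            self.failure("No triples found, test failed")
        return self.response()


result = ToyTest("https://example.org/dataset").evaluate(triples_found=12)
print(result["score"])  # 1
```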
🦄 Deploy the API
You can then run the metrics tests API on http://localhost:8000 with uvicorn:

```bash
cd example
uvicorn main:app --reload
```

Check out the example/README.md for more details, such as deploying it with docker.
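For reference, a minimal Dockerfile for a uvicorn-served app of this shape could look like the sketch below. This is an assumption-laden sketch, not the file shipped in the example folder; check example/README.md for the actual docker setup:

```dockerfile
# Illustrative sketch only; see the example folder for the real docker setup
FROM python:3.9-slim
WORKDIR /app
RUN pip install fair-test uvicorn
COPY . /app
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```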
🧑💻 Development
📥 Install for development
Clone and install locally for development:

```bash
git clone https://github.com/MaastrichtU-IDS/fair-test
cd fair-test
pip install -e .
```

You can use a virtual environment to avoid conflicts:

```bash
# Create the virtual environment folder in your workspace
python3 -m venv .venv
# Activate it using a script in the created folder
source .venv/bin/activate
```
✅️ Run the tests
Install additional dependencies for testing:

```bash
pip install pytest
```

Run the tests locally (from the root folder) and display prints:

```bash
pytest -s
```
📂 Projects using fair-test
Here are some projects using fair-test to deploy FAIR testing services:

- https://github.com/MaastrichtU-IDS/fair-enough-metrics - A generic FAIR metrics tests service developed at the Institute of Data Science at Maastricht University.
- https://github.com/LUMC-BioSemantics/RD-FAIRmetric-F4 - A FAIR metrics tests service for Rare Disease research.