A package to deploy a SPARQL endpoint serving local RDF files, machine learning models, or any other logic implemented in Python, using RDFLib and FastAPI.
rdflib-endpoint is a SPARQL endpoint based on RDFLib to easily serve RDF files locally, machine learning models, or any other logic implemented in Python via custom SPARQL functions.
It aims to enable Python developers to easily deploy functions that can be queried in a federated fashion using SPARQL. For example: using a Python function to resolve labels for specific identifiers, or running a classifier on entities retrieved using a SERVICE query to another SPARQL endpoint.
Feel free to create an issue, or send a pull request if you are facing issues or would like to see a feature implemented.
ℹ️ How it works
rdflib-endpoint can be used directly from the terminal to quickly serve RDF files through a SPARQL endpoint automatically deployed locally.
It can also be used to define custom SPARQL functions: the user defines and registers custom SPARQL functions and/or populates the RDFLib Graph using Python, then starts the endpoint using uvicorn/gunicorn.
The deployed SPARQL endpoint can be used as a SERVICE in a federated SPARQL query from regular triplestore SPARQL endpoints. Tested with OpenLink Virtuoso and Ontotext GraphDB (RDF4J based). The endpoint is CORS enabled by default.
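For example, a federated query run on another triplestore could pull data from your endpoint via SERVICE. A minimal sketch, held in a Python string as this README does for its example queries; https://your-endpoint-url/ stands in for wherever your endpoint is deployed:

# A sketch of a federated query: run it on any SPARQL endpoint that
# supports SERVICE (e.g. Virtuoso or GraphDB). "https://your-endpoint-url/"
# stands in for wherever this endpoint is deployed.
federated_query = """SELECT * WHERE {
    SERVICE <https://your-endpoint-url/> {
        ?s ?p ?o .
    }
} LIMIT 10"""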
Built with RDFLib and FastAPI.
📦️ Installation
This package requires Python >=3.7. Simply install it from PyPI with:
pip install rdflib-endpoint
If you want to use Oxigraph as the backend triplestore, you can install it with the optional dependency:
pip install "rdflib-endpoint[oxigraph]"
⚠️ Oxigraph and oxrdflib do not support custom functions, so they can only be used to deploy graphs without custom functions.
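For Python deployments, the same backend can be enabled by instantiating the graph with the Oxigraph store. A minimal sketch, assuming oxrdflib registers its RDFLib store plugin under the name "Oxigraph" (the same name the CLI option below uses):

from rdflib import ConjunctiveGraph
from rdflib_endpoint import SparqlEndpoint

# Back the graph with Oxigraph instead of RDFLib's default in-memory
# store; note that custom functions cannot be used with this backend.
g = ConjunctiveGraph(store="Oxigraph")
app = SparqlEndpoint(graph=g)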
⌨️ Use the CLI
rdflib-endpoint can be used from the command line interface to perform basic utility tasks, such as serving or converting RDF files locally.
⚡️ Quickly serve RDF files through a SPARQL endpoint
Use rdflib-endpoint as a command line interface (CLI) in your terminal to quickly serve one or multiple RDF files as a SPARQL endpoint.
You can use wildcards and provide multiple files; for example, to serve all Turtle, JSON-LD, and N-Quads files in the current folder you could run:
rdflib-endpoint serve *.ttl *.jsonld *.nq
Then access the YASGUI SPARQL editor at http://localhost:8000
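You can also query the served endpoint programmatically. A minimal sketch using requests, assuming the endpoint follows the standard SPARQL protocol and returns JSON results for an Accept: application/json header:

import requests

response = requests.get(
    "http://localhost:8000/",
    params={"query": "SELECT * WHERE { ?s ?p ?o . } LIMIT 10"},
    headers={"Accept": "application/json"},
)
# SPARQL JSON results: one binding per result row
for binding in response.json()["results"]["bindings"]:
    print(binding)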
If you installed the Oxigraph optional dependency, you can use it as the backend triplestore. It is faster and supports some SPARQL functions that are not supported by the RDFLib query engine (such as COALESCE()):
rdflib-endpoint serve --store Oxigraph "*.ttl" "*.jsonld" "*.nq"
🔄 Convert RDF files to another format
rdflib-endpoint can also be used to quickly merge and convert files from multiple formats to a specific format:
rdflib-endpoint convert "*.ttl" "*.jsonld" "*.nq" --output "merged.trig"
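If you prefer to script the conversion, the equivalent with plain RDFLib is only a few lines (a sketch; rdflib guesses the input format from each file extension):

from glob import glob
from rdflib import ConjunctiveGraph

g = ConjunctiveGraph()
# Merge all matching files into a single graph
for rdf_file in glob("*.ttl") + glob("*.jsonld") + glob("*.nq"):
    g.parse(rdf_file)
g.serialize("merged.trig", format="trig")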
✨ Deploy your SPARQL endpoint
rdflib-endpoint enables you to easily define and deploy SPARQL endpoints based on an RDFLib Graph, ConjunctiveGraph, or Dataset. Additionally, it provides helpers to define custom functions in the endpoint.
Check out the example folder for a complete working app example to get started, including a Docker deployment. A good way to create a new SPARQL endpoint is to copy this example folder and start from it.
🚨 Deploy as a standalone API
Deploy your SPARQL endpoint as a standalone API:
from rdflib import ConjunctiveGraph
from rdflib_endpoint import SparqlEndpoint

# Start the SPARQL endpoint based on an RDFLib Graph and register your custom functions
g = ConjunctiveGraph()
# TODO: Add triples in your graph

# Then use either SparqlEndpoint or SparqlRouter, they take the same arguments
app = SparqlEndpoint(
    graph=g,
    path="/",
    cors_enabled=True,
    # Metadata used for the SPARQL service description and Swagger UI:
    title="SPARQL endpoint for RDFLib graph",
    description="A SPARQL endpoint to serve machine learning models, or any other logic implemented in Python. \n[Source code](https://github.com/vemonet/rdflib-endpoint)",
    version="0.1.0",
    public_url='https://your-endpoint-url/',
    # Example queries displayed in the Swagger UI to help users try your function
    example_query="""PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat ?concatLength WHERE {
    BIND("First" AS ?first)
    BIND(myfunctions:custom_concat(?first, "last") AS ?concat)
}"""
)
Finally, deploy this app using uvicorn (see below).
🛣️ Deploy as a router to include in an existing API
Deploy your SPARQL endpoint as an APIRouter to include in an existing FastAPI application. The SparqlRouter constructor takes the same arguments as SparqlEndpoint.
from fastapi import FastAPI
from rdflib import ConjunctiveGraph
from rdflib_endpoint import SparqlRouter

g = ConjunctiveGraph()
sparql_router = SparqlRouter(
    graph=g,
    path="/",
    # Metadata used for the SPARQL service description and Swagger UI:
    title="SPARQL endpoint for RDFLib graph",
    description="A SPARQL endpoint to serve machine learning models, or any other logic implemented in Python. \n[Source code](https://github.com/vemonet/rdflib-endpoint)",
    version="0.1.0",
    public_url='https://your-endpoint-url/',
)

app = FastAPI()
app.include_router(sparql_router)
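Since SparqlRouter is a regular FastAPI APIRouter, you can also mount it under a subpath with FastAPI's standard prefix argument (a sketch; the /sparql prefix is just an example):

app = FastAPI()
# Serve the SPARQL endpoint at /sparql instead of the root path
app.include_router(sparql_router, prefix="/sparql")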
📝 Define custom SPARQL functions
This option makes it easier to define functions in your SPARQL endpoint, e.g. BIND(myfunctions:custom_concat("start", "end") AS ?concat). It can be used with both the SparqlEndpoint and SparqlRouter classes.
Create an app/main.py file in your project folder with your custom SPARQL functions and endpoint parameters:
import rdflib
from rdflib import ConjunctiveGraph
from rdflib.plugins.sparql.evalutils import _eval
from rdflib_endpoint import SparqlEndpoint

def custom_concat(query_results, ctx, part, eval_part):
    """Concatenate 2 strings in both orders, and return each length as an additional Length variable"""
    # Retrieve the 2 input arguments
    argument1 = str(_eval(part.expr.expr[0], eval_part.forget(ctx, _except=part.expr._vars)))
    argument2 = str(_eval(part.expr.expr[1], eval_part.forget(ctx, _except=part.expr._vars)))
    evaluation = []
    scores = []
    # Prepare the 2 result strings (one per concatenation order) and their lengths
    evaluation.append(argument1 + argument2)
    evaluation.append(argument2 + argument1)
    scores.append(len(argument1 + argument2))
    scores.append(len(argument2 + argument1))
    # Append the results for our custom function
    for i, result in enumerate(evaluation):
        query_results.append(eval_part.merge({
            part.var: rdflib.Literal(result),
            # With an additional custom var for the length
            rdflib.term.Variable(part.var + 'Length'): rdflib.Literal(scores[i]),
        }))
    return query_results, ctx, part, eval_part

# Start the SPARQL endpoint based on an RDFLib Graph and register your custom functions
g = ConjunctiveGraph()

# Use either SparqlEndpoint or SparqlRouter, they take the same arguments
app = SparqlEndpoint(
    graph=g,
    path="/",
    # Register the functions:
    functions={
        'https://w3id.org/um/sparql-functions/custom_concat': custom_concat
    },
    cors_enabled=True,
    # Metadata used for the SPARQL service description and Swagger UI:
    title="SPARQL endpoint for RDFLib graph",
    description="A SPARQL endpoint to serve machine learning models, or any other logic implemented in Python. \n[Source code](https://github.com/vemonet/rdflib-endpoint)",
    version="0.1.0",
    public_url='https://your-endpoint-url/',
    # Example queries displayed in the Swagger UI to help users try your function
    example_query="""PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat ?concatLength WHERE {
    BIND("First" AS ?first)
    BIND(myfunctions:custom_concat(?first, "last") AS ?concat)
}"""
)
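To try the custom function without deploying the server, you could send the example query through FastAPI's TestClient (a sketch; TestClient requires the httpx package to be installed):

from fastapi.testclient import TestClient

client = TestClient(app)
response = client.get(
    "/",
    params={"query": """PREFIX myfunctions: <https://w3id.org/um/sparql-functions/>
SELECT ?concat ?concatLength WHERE {
    BIND("First" AS ?first)
    BIND(myfunctions:custom_concat(?first, "last") AS ?concat)
}"""},
    headers={"Accept": "application/json"},
)
# Each binding should contain a concatenated string and its length
for binding in response.json()["results"]["bindings"]:
    print(binding["concat"]["value"], binding["concatLength"]["value"])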
✒️ Or directly define the custom evaluation
You can also directly provide the custom evaluation function; this will override the functions parameter.
Refer to the RDFLib documentation to define the custom evaluation function, then provide it when instantiating the SPARQL endpoint:
import rdflib
from rdflib import ConjunctiveGraph
from rdflib.namespace import FOAF, RDF, RDFS
from rdflib.plugins.sparql.evaluate import evalBGP
from rdflib_endpoint import SparqlEndpoint

def custom_eval(ctx, part):
    """Rewrite triple patterns to get super-classes"""
    if part.name == "BGP":
        # Rewrite triples
        triples = []
        for t in part.triples:
            if t[1] == RDF.type:
                bnode = rdflib.BNode()
                triples.append((t[0], t[1], bnode))
                triples.append((bnode, RDFS.subClassOf, t[2]))
            else:
                triples.append(t)
        # Delegate to the normal evalBGP
        return evalBGP(ctx, triples)
    raise NotImplementedError()

g = ConjunctiveGraph()
app = SparqlEndpoint(
    graph=g,
    custom_eval=custom_eval,
)
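To test the rewriting logic with plain RDFLib before wiring it into the endpoint, you can register it in RDFLib's CUSTOM_EVALS registry, as described in the RDFLib documentation on custom evaluation functions. A sketch, using a hypothetical Student subclass of foaf:Person:

import rdflib.plugins.sparql
from rdflib import Graph, URIRef
from rdflib.namespace import FOAF, RDF, RDFS

rdflib.plugins.sparql.CUSTOM_EVALS["exampleEval"] = custom_eval

g = Graph()
student = URIRef("http://example.com/Student")
alice = URIRef("http://example.com/alice")
g.add((student, RDFS.subClassOf, FOAF.Person))
g.add((alice, RDF.type, student))

# Thanks to the BGP rewrite, asking for foaf:Person instances also
# returns instances of its subclasses:
q = "PREFIX foaf: <http://xmlns.com/foaf/0.1/> SELECT ?s WHERE { ?s a foaf:Person }"
for row in g.query(q):
    print(row.s)  # http://example.com/alice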
🦄 Run the SPARQL endpoint
You can then run the SPARQL endpoint server from the folder where your script is defined with uvicorn
on http://localhost:8000 (it is installed automatically when you install the rdflib-endpoint
package)
uvicorn main:app --app-dir app --reload
Check out the example/README.md for more details, such as deploying it with Docker.
🧑‍💻 Development
This section is for developers who want to run the package locally and get involved by making a code contribution.
📥️ Clone
Clone the repository:
git clone https://github.com/vemonet/rdflib-endpoint
cd rdflib-endpoint
🐣 Install dependencies
Install Hatch; it will automatically handle virtual environments and make sure all dependencies are installed when you run a script in the project:
pip install --upgrade hatch
Install the dependencies in a local virtual environment (running this command is optional, as hatch will automatically install and synchronize dependencies each time you run a script with hatch run):
hatch -v env create
🚀 Run example API
The API will be automatically reloaded when the code is changed:
hatch run dev
Access the YASGUI interface at http://localhost:8000
☑️ Run tests
Make sure the existing tests still pass by running the test suite. Note that any pull request to the rdflib-endpoint repository on GitHub will automatically trigger the test suite:
hatch run test
To display all print() outputs:
hatch run test -s
🧹 Code formatting
The code will be automatically formatted when you commit your changes, using pre-commit. But you can also run the script to format the code yourself:
hatch run fmt
Check the code for errors and compliance with the PEP8 style guide by running ruff and mypy:
hatch run check
✅ Run all checks
Run all checks (fmt, linting, tests) with:
hatch run all
♻️ Reset the environment
In case you are facing issues with dependencies not updating properly, you can easily reset the virtual environment with:
hatch env prune
🏷️ New release process
The deployment of new releases is done automatically by a GitHub Actions workflow when a new release is created on GitHub. To release a new version:
- Make sure the PYPI_TOKEN secret has been defined in the GitHub repository (in Settings > Secrets > Actions). You can get an API token from PyPI at pypi.org/manage/account.
- Increment the version number following semantic versioning, selecting between fix, minor, or major:
hatch version fix
- Create a new release on GitHub, which will automatically trigger the publish workflow and publish the new release to PyPI.
You can also manually trigger the workflow from the Actions tab in your GitHub repository webpage if needed.
📂 Projects using rdflib-endpoint
Here are some projects using rdflib-endpoint to deploy custom SPARQL endpoints with Python:
- proycon/codemeta-server: server for codemeta, with an in-memory triple store, a SPARQL endpoint, and a simple web-based visualisation for end users