A tool for uploading RDF data to SPARQL endpoints

RDF Uploader

When working with RDF data, it is common to need to upload knowledge graphs to several different triple stores. Most stores claim to be standards-based, but there are two main standards to begin with: the SPARQL 1.1 Graph Store Protocol and SPARQL Update. On top of that, stores differ in their exact endpoint URLs, named-graph handling, and authentication, so juggling a separate proprietary tool for each store quickly becomes a pain.

Introducing rdf_uploader, a single tool that can upload RDF data to a variety of data sources. It is easy to use and has no dependencies on RDFLib or any datastore-specific libraries, relying solely on pure HTTP. With rdf_uploader, you can seamlessly upload your RDF data to different triple stores without the hassle of dealing with multiple tools and their quirks.

Features

  • Ingest RDF data into SPARQL endpoints using asynchronous operations
  • Support for multiple RDF stores (MarkLogic, Blazegraph, Neptune, RDFox, and Stardog)
  • Authentication support for secure endpoints
  • Content type detection and customization
  • Clear status outputs after each upload operation
  • Concurrent uploads with configurable limits

Installation

From PyPI

pip install rdf-uploader

Usage

Basic Usage

Upload a single RDF file to a SPARQL endpoint:

rdf-uploader upload path/to/file.ttl --endpoint http://localhost:3030/dataset/sparql

You can also omit the endpoint URL and use environment variables:

# Set the endpoint URL in an environment variable
export RDF_ENDPOINT=http://localhost:3030/dataset/sparql

# Then run without the --endpoint parameter
rdf-uploader upload path/to/file.ttl

Or specify the endpoint type to use a type-specific environment variable:

# Set endpoint-specific URL
export MARKLOGIC_ENDPOINT=http://marklogic-server:8000/v1/graphs

# Use the endpoint type to determine which environment variable to use
rdf-uploader upload path/to/file.ttl --type marklogic

Programmatic Usage

You can also use the library programmatically in your Python code:

import asyncio
from pathlib import Path

from rdf_uploader.endpoints import EndpointType
from rdf_uploader.uploader import upload_rdf_file

async def main() -> None:
    # The endpoint URL, username, and password can be provided directly
    # or read from environment variables if not specified
    await upload_rdf_file(
        file_path=Path("path/to/file.ttl"),
        endpoint="http://localhost:3030/dataset/sparql",
        endpoint_type=EndpointType.GENERIC,
        username="myuser",
        password="mypass",
    )

    # Using environment variables instead:
    # export RDF_ENDPOINT=http://localhost:3030/dataset/sparql
    # export RDF_USERNAME=myuser
    # export RDF_PASSWORD=mypass
    await upload_rdf_file(
        file_path=Path("path/to/file.ttl"),
        endpoint_type=EndpointType.GENERIC,
    )

asyncio.run(main())

Multiple Files

Upload multiple RDF files:

rdf-uploader upload path/to/file1.ttl path/to/file2.n3 --endpoint http://localhost:3030/dataset/sparql

Specify Endpoint Type

rdf-uploader upload path/to/file.ttl --endpoint http://localhost:3030/dataset/sparql --type fuseki

Available endpoint types:

  • marklogic
  • neptune
  • blazegraph
  • rdfox
  • stardog
  • fuseki

Specify Named Graph

rdf-uploader upload path/to/file.ttl --endpoint http://localhost:3030/dataset/sparql --graph http://example.org/graph

Authentication

For endpoints that require authentication:

rdf-uploader upload path/to/file.ttl --endpoint http://localhost:3030/dataset/sparql --username myuser --password mypass

You can also set authentication credentials using environment variables:

export RDF_USERNAME=myuser
export RDF_PASSWORD=mypass
rdf-uploader upload path/to/file.ttl --endpoint http://localhost:3030/dataset/sparql

For endpoint-specific credentials, use the endpoint type as a prefix:

export MARKLOGIC_USERNAME=mluser
export MARKLOGIC_PASSWORD=mlpass
rdf-uploader upload path/to/file.ttl --endpoint http://marklogic-server:8000/v1/graphs --type marklogic

Content Type

Specify the content type for the RDF data:

rdf-uploader upload path/to/file.ttl --endpoint http://localhost:3030/dataset/sparql --content-type "text/turtle"

If not specified, the content type is automatically detected based on the file extension:

  • .ttl, .turtle: text/turtle
  • .nt: application/n-triples
  • .n3: text/n3
  • .nq, .nquads: application/n-quads
  • .rdf, .xml: application/rdf+xml
  • .jsonld: application/ld+json
  • .json: application/rdf+json
  • .trig: application/trig

Control Concurrency

Limit the number of concurrent uploads:

rdf-uploader upload path/to/*.ttl --endpoint http://localhost:3030/dataset/sparql --concurrent 10
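A limit like this is typically enforced with a semaphore around the upload tasks. The sketch below illustrates the idea only, with a placeholder coroutine standing in for the actual HTTP upload; it is not the tool's implementation:

```python
import asyncio

async def upload_with_limit(files: list[str], limit: int) -> None:
    """Run at most `limit` uploads concurrently."""
    semaphore = asyncio.Semaphore(limit)

    async def upload_one(path: str) -> None:
        async with semaphore:
            # In the real tool this would be the HTTP upload of `path`
            await asyncio.sleep(0.01)
            print(f"uploaded {path}")

    await asyncio.gather(*(upload_one(f) for f in files))

asyncio.run(upload_with_limit([f"file{i}.ttl" for i in range(5)], limit=2))
```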

Verbose Mode

Enable verbose output to see detailed information about each batch upload, including the number of triples per batch and server response codes:

rdf-uploader upload path/to/file.ttl --endpoint http://localhost:3030/dataset/sparql --verbose

Help

Get help on available commands and options:

rdf-uploader --help
rdf-uploader upload --help

Environment Variables

You can configure the RDF Uploader using environment variables, which is especially useful for CI/CD pipelines or when working with multiple endpoints. The library also supports reading values from a .envrc file in the current working directory if environment variables are not set:

Endpoint URLs

# Generic endpoint URL
export RDF_ENDPOINT=http://localhost:3030/dataset/sparql

# Endpoint-specific URLs
export MARKLOGIC_ENDPOINT=http://marklogic-server:8000/v1/graphs
export NEPTUNE_ENDPOINT=https://your-neptune-instance.amazonaws.com:8182/sparql
export BLAZEGRAPH_ENDPOINT=http://blazegraph-server:9999/blazegraph/sparql
export RDFOX_ENDPOINT=http://rdfox-server:12110/datastores/default/content
export STARDOG_ENDPOINT=https://your-stardog-instance:5820/database

Authentication

# Generic credentials
export RDF_USERNAME=myuser
export RDF_PASSWORD=mypass

# Endpoint-specific credentials
export MARKLOGIC_USERNAME=mluser
export MARKLOGIC_PASSWORD=mlpass
export NEPTUNE_USERNAME=neptuneuser
export NEPTUNE_PASSWORD=neptunepass
export BLAZEGRAPH_USERNAME=bguser
export BLAZEGRAPH_PASSWORD=bgpass
export RDFOX_USERNAME=rdfoxuser
export RDFOX_PASSWORD=rdfoxpass
export STARDOG_USERNAME=sduser
export STARDOG_PASSWORD=sdpass

RDFox Store Name

export RDFOX_STORE_NAME=mystore

Test Configuration

Tests use a local SPARQL endpoint by default. You can configure the test endpoint by setting environment variables:

export TEST_ENDPOINT_URL=http://localhost:3030/test
export TEST_ENDPOINT_TYPE=fuseki

License

This project is licensed under the MIT License - see the LICENSE file for details.
