
pythinfer - Python Logical Inference


Pronounced 'python fur'.

CLI to easily merge multiple RDF files, perform inference (OWL or SPARQL), and query the result.

Point this at a selection of RDF files and it will merge them, run inference over them, export the results, and execute a query on them. The results are the original statements together with the useful set of inferences (see below under Inference for what 'useful' means here).

A distinction is made between 'reference' and 'focus' files. See below.

Quick Start

Using uv

(in the below, replace ~/git and ~/git/pythinfer/example_projects/eg0-basic with folder paths on your system, of course)

  1. Install pythinfer as a tool:

    uv tool install pythinfer
    
  2. Clone the repository [OPTIONAL - this is just to get the example]:

    cd ~/git
    git clone https://github.com/robertmuil/pythinfer.git
    
  3. Execute it as a tool in your project (or the example project):

    cd ~/git/pythinfer/example_projects/eg0-basic # or your own project folder
    uvx pythinfer query "SELECT * WHERE { ?s ?p ?o } LIMIT 10"
    uvx pythinfer query select_who_knows_whom.rq
    

    This will create a pythinfer.yaml project file in the project folder, merge all RDF files it finds, perform inference, and then execute the SPARQL query against the inferred graph.

  4. To use a specific project file, use the --project option before the command:

    uvx pythinfer --project pythinfer_celebrity.yaml query select_who_knows_whom.rq
    
  5. Edit the pythinfer.yaml file to specify which files to include, try again. Have fun.

[Demo of executing eg0 in the CLI]

Command Line Interface

Global Options

  • --project / -p: Specify the path to a project configuration file. If not provided, pythinfer will search for pythinfer.yaml in the current directory and parent directories, or create a new project if none is found.
  • --verbose / -v: Enable verbose (DEBUG) logging output.
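For example, running inference against an explicit project file with verbose logging:

    pythinfer --project path/to/pythinfer.yaml --verbose infer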

Common Options

  • --extra-export: allows specifying extra export formats beyond the default trig. Can be used to 'strip' quads of their named graphs down to triples when exporting (by exporting to ttl or nt); see the example after this list.
    • NB: trig is always included as an export because it is used for caching
  • ...
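For example, to additionally export plain triples as Turtle alongside the default trig (placing the option after the subcommand is an assumption here):

    pythinfer infer --extra-export ttl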

pythinfer create

Create a new project specification file in the current folder by scanning for RDF files.

Invoked automatically if another command is used and no project file exists already.

pythinfer merge

Largely a helper command, not likely to need direct invocation.

pythinfer infer

Perform merging and inference as per the project specification, and export the resulting graphs to the output folder.

pythinfer query

A simple helper command that should allow a query, or queries, to be specified easily and executed against the latest full inferred graph.

In principle, the tool could also take care of dependency management so that any change in an input file is automatically re-merged and inferred before a query...

Python API

In addition to the CLI, the library can be used directly from Python code.

The primary entry-point is an instance of Project. Once initialised, the project can be used to perform inference and access the full inferred graph, as well as the source data.

No state is stored in the Project instance; it is just a convenient interface. The data is loaded and created as needed, either from source files or from the exports of inference, exactly as the CLI operates. In all cases, the data is loaded from disk.

This means that a client should keep the resultant dataset or graph itself in memory, rather than making multiple calls to the merge or infer methods of the Project instance, to avoid repeated loading from disk.
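For example, hold on to the returned Dataset and reuse it for multiple queries, rather than calling infer() repeatedly:

from pythinfer import Project

project = Project.discover()
ds = project.infer()  # one load/inference pass; keep the Dataset around
res_a = ds.query("ASK { ?s ?p ?o }")
res_b = ds.query("SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }")
# avoid: calling project.infer() once per query, which reloads from disk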

Quick-start: querying full inferred data

from pythinfer import Project

# Load and infer in one step from the first project discovered in current folder
ds = Project.discover().infer()

# Then you can do what you want with the Dataset
results = ds.query("SELECT ?g ?s ?p ?o WHERE { GRAPH ?g { ?s ?p ?o } } LIMIT 10")
for row in results:
    print(row)

# Strip to a single Graph if named graphs not needed
from pythinfer.utils import strip
g = strip(ds)
results = g.query("SELECT * WHERE { ?s a ?type }")

Initialising a Project

A project can be initialised from a project specification file, or specified directly in code.

from pythinfer import Project

# Load from a specific file
project = Project.from_yaml('path/to/pythinfer.yaml')

# Load from a discovered file (searches current and parent folders)
project = Project.discover()

# Specify directly in code
project = Project(
    name='Project From Python',
    focus=['data/file1.ttl'],
    reference=['vocabs/ref_vocab1.ttl'],
)

All of these return a Project instance. The from_yaml() and discover() methods will raise a FileNotFoundError if no project file is found.

Merging and Inference

Access to the data is through the merge or infer methods, which return the merged and inferred datasets respectively. The inferred data will be loaded directly from disk if the exports are up-to-date, otherwise inference will be performed.

# Load the source files, returning the merged dataset.
ds_combined = project.merge()

# Load the source files and perform inference, returning the full resultant dataset.
ds_full = project.infer()

merge() and infer() return an rdflib.Dataset containing the merged and inferred data, including named graphs for provenance.

A helper function, strip(), is also provided; it returns an rdflib.Graph by stripping quads down to triples (i.e. merging all named graphs), which is commonly done to simplify downstream processing.

from pythinfer.utils import strip
# Strip named graphs to triples
g_full = strip(ds_full)

Project Specification

A 'Project' is the specification of which RDF files to process and configuration of how to process them, along with some metadata like a name.

Because we will likely have several files and they will be of different types, it is easiest to specify these in a configuration file (YAML or similar) instead of requiring everything on the command line.

The main function or CLI can then be pointed at the project file to easily switch between projects. This also allows the same sets and subsets of inputs to be combined in different ways with configuration.

Project Specification Components

name: (optional)
focus:
    - <pattern>: <a pattern specifying a specific or set of files>
    - ...
reference:
    - <pattern>: <a pattern specifying a specific or set of 'reference' files>
    - ...
output:
    folder: <a path to the folder in which to put the output> (defaults to `<base_folder>/derived`)

Reference vs. Focus Data (was External vs. Internal)

Reference data is treated as ephemeral information used for inference and then discarded. Most commonly it is vocabulary and data that is not maintained by the user, but whose axioms are assumed to hold true for the application. It is used to augment inference, but is not part of the data being analysed, and so it is not generally needed in the output.

Examples are OWL, RDFS, SKOS, and other standard vocabularies.

Synonyms for 'reference' here could be 'transient' or 'catalyst' or (as was the case) 'external'.

Path Resolution

Paths in the project configuration file can be either relative or absolute.

Relative paths are resolved relative to the directory containing the project configuration file (pythinfer.yaml). This allows project configurations to remain portable - you can move the project folder around or share it with others, and relative paths will continue to work.

This means that the current working directory from which you execute pythinfer is irrelevant - as long as you point to the right project file, the paths will be resolved correctly.

Absolute paths are used as-is without modification.
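A minimal sketch of this resolution rule (the function name is illustrative, not part of the library's API):

from pathlib import Path

def resolve_config_path(path_str: str, project_file: Path) -> Path:
    """Resolve a configured path against the project file's directory."""
    p = Path(path_str)
    # absolute paths are used as-is; relative ones anchor at the yaml's folder
    return p if p.is_absolute() else (project_file.parent / p).resolve()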

Examples

If your project structure is:

my_project/
├── pythinfer.yaml
├── data/
│   ├── file1.ttl
│   └── file2.ttl
└── vocabs/
    └── schema.ttl

Your pythinfer.yaml can use relative paths:

name: My Project
focus:
  - data/file1.ttl
  - data/file2.ttl
reference:
  - vocabs/schema.ttl

These paths will be resolved relative to the directory containing pythinfer.yaml, so the configuration is portable.

You can also use absolute paths if needed:

focus:
  - /home/user/my_project/data/file1.ttl

Project Selection

The project selection process is as follows (a code sketch follows the list):

  1. User provided: path to project file provided directly by user on command line, and if this file is not found, exit
    1. if no user-provided file, proceed to next step
  2. Discovery: search in current folder and parent folders for project file, returning first found
    1. if no project file discovered, proceed to next step
  3. Creation: generate a new project specification by searching in current folder for RDF files
    1. if no RDF files found, fail
    2. otherwise, create new project file and use immediately
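A sketch of this selection logic, using hypothetical helper names (discover_project is sketched in the next section; create_project stands in for the scanning step):

from pathlib import Path

def select_project(user_path: str | None) -> Path:
    # 1. User provided: fail hard if the explicit path is missing
    if user_path is not None:
        path = Path(user_path)
        if not path.exists():
            raise FileNotFoundError(path)
        return path
    # 2. Discovery: first pythinfer.yaml found walking up from the cwd
    found = discover_project(Path.cwd())
    if found is not None:
        return found
    # 3. Creation: scan the current folder for RDF files and write a new spec
    return create_project(Path.cwd())  # fails if no RDF files are found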

Project Discovery

If a project file is not explicitly specified, pythinfer should operate like git or uv - it should search for a pythinfer.yaml file in the current directory, and then in parent directories up to a limit.

The limits on ancestor traversal should be (see the sketch after this list):

  1. don't traverse beyond $HOME (towards the root) if it is in the ancestral line
  2. don't go beyond 10 folders
  3. don't traverse across file systems
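A hand-rolled sketch of the discovery walk honouring those three limits (an illustration, not the package's implementation):

from pathlib import Path

def discover_project(start: Path, max_depth: int = 10) -> Path | None:
    home = Path.home()
    device = start.stat().st_dev  # remember the starting file system
    folder = start.resolve()
    for _ in range(max_depth):  # limit 2: at most 10 folders
        candidate = folder / "pythinfer.yaml"
        if candidate.exists():
            return candidate
        if folder in (home, folder.parent):
            break  # limit 1: stop at $HOME; also stop at the filesystem root
        if folder.parent.stat().st_dev != device:
            break  # limit 3: don't traverse across file systems
        folder = folder.parent
    return None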

Project Creation

If a project is not provided by the user or discovered from the folder structure, a new project specification will be created automatically by scanning the current folder for RDF files. If RDF files are found, subsidiary files such as SPARQL queries for inference are also sought, and a new project specification is created. This new spec will be saved to the current folder.

The user can also specifically request the creation of a new project file with the create command.

Merging

Merging of multiple graphs should preserve the source of each statement, ideally using the named graph of a quad.

Merging should distinguish 2 different types of input:

  1. Reference data: things like OWL, SKOS, RDFS, which are introduced for inference purposes, but are not maintained by the person using the library, and the axioms of which can generally be assumed to exist for any application.
    • the term reference is meant from the perspective of the user / application, not to invoke the notion of 'master' vs. 'reference' data.
  2. Focus data: ontologies being developed, vocabularies that are part of the current focus, and the data itself - all of this should always be preserved in the output, and is the 'focus' of the analysis.

Inference

By default an efficient OWL rule subset should be used, like OWL-RL.

Invalid inferences

Some inferences, at least in owlrl, may be invalid in RDF - for instance, a triple with a literal as subject. These should be removed during the inference process.

Unwanted inferences

In addition to the actually invalid inferences, many inferences are banal. For instance, every resource could be considered owl:sameAs itself. This is semantically valid but useless to express as an explicit triple.

Several classes of these unwanted inferences can be removed by this package. Some can be removed per-triple during inference, others need to be removed by considering the whole graph.

Per-triple unwanted inferences

These are unwanted inferences that can be identified by looking at each triple in isolation (a filter sketch follows the list). Examples:

  1. triples with an empty string as object
  2. redundant reflexives, such as ex:thing owl:sameAs ex:thing
  3. many declarations relating to owl:Thing, e.g. ex:thing rdf:type owl:Thing
  4. declarations that owl:Nothing is a subclass of another class (NB: the inverse is not unwanted as it indicates a contradiction)
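A sketch of such a per-triple filter using rdflib (the exact rule set in the package may differ):

from rdflib import Literal
from rdflib.namespace import OWL, RDF, RDFS

def is_unwanted(triple) -> bool:
    s, p, o = triple
    if isinstance(o, Literal) and str(o) == "":
        return True  # 1. empty string as object
    if s == o and p in (OWL.sameAs, OWL.equivalentClass, RDFS.subClassOf):
        return True  # 2. redundant reflexives
    if p == RDF.type and o == OWL.Thing:
        return True  # 3. declarations relating to owl:Thing
    if p == RDFS.subClassOf and s == OWL.Nothing:
        return True  # 4. owl:Nothing as a subclass of another class
    return False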

Whole-graph unwanted inferences

These are unwanted inferences that can only be identified by considering the whole graph (a removal sketch follows the list). Examples:

  1. Undeclared blank nodes
    • blank nodes are often used for complex subClass or range or domain expressions
    • where this occurs but the declaration of the blank node is not included in the final output, the blank node is useless and we are better off removing any triples that refer to it
    • a good example of this is skos:member which uses blank nodes to express that the domain and range are the union of skos:Concept and skos:Collection
    • for now, blank node 'declaration' is defined as any triple where the blank node is the subject
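A single-pass removal sketch using rdflib (a blank node as subject counts as 'declared' per the definition above; a fixpoint loop would be needed for cascading cases):

from rdflib import BNode, Graph

def remove_undeclared_bnode_triples(g: Graph) -> None:
    # 'declared' = the blank node appears as the subject of some triple
    declared = {s for s in g.subjects() if isinstance(s, BNode)}
    # any triple referring to an undeclared blank node (as object) is dropped
    for s, p, o in list(g):
        if isinstance(o, BNode) and o not in declared:
            g.remove((s, p, o))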

Inference Process

Steps (the core loop is sketched in code after the footnotes):

  1. Load and merge all input data into a triplestore
    • Maintain provenance of data by named graph
    • Maintain list of which named graphs are 'reference'
    • output: merged
    • consequence: current = merged
  2. Generate reference inferences by running RDFS/OWL-RL engine over 'reference' input data[^1]
    • output: inferences_reference_owl
  3. Generate full inferences by running RDFS/OWL-RL inference over all data so far[^1]
    • output: inferences_full_owl
    • consequence: current += inferences_full_owl
  4. Run heuristics[^2] over all data
    • output: inferences_sparql + inferences_python
    • consequence: current += inferences_sparql + inferences_python
  5. Repeat steps 3 through 4 until no new triples are generated, or limit reached
    • consequence: combined_full = current
  6. Subtract reference data and inferences from the current graph[^4]
    • consequence: current -= (reference_data + inferences_reference_owl)
    • consequence: combined_focus = current
  7. Subtract all 'unwanted' inferences from result[^3]
    • consequence: combined_wanted = current - inferences_unwanted

[^1]: Inference is backend dependent, and will include the removal of invalid triples that may result, e.g. from owlrl.
[^2]: See below for heuristics.
[^3]: Unwanted inferences are those that are semantically valid but not useful; see below.
[^4]: This step logically applies, but in the owlrl implementation we can simply avoid including the inferences_reference_owl graph in the output, since owlrl will not generate inferences that already exist.
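Treating graphs as plain sets of triples (and eliding the named-graph bookkeeping), the process might be sketched like this; the file paths are placeholders and run_heuristics is a stub:

from rdflib import Graph
from owlrl import DeductiveClosure, OWLRL_Semantics

def owl_closure(triples: set) -> set:
    g = Graph()
    for t in triples:
        g.add(t)
    DeductiveClosure(OWLRL_Semantics).expand(g)  # OWL-RL inference
    return set(g)

def run_heuristics(triples: set) -> set:
    return set(triples)  # step 4 placeholder: SPARQL/Python rules go here

reference = owl_closure(set(Graph().parse("vocabs/ref_vocab1.ttl")))  # step 2
current = reference | set(Graph().parse("data/file1.ttl"))  # step 1: merge

for _ in range(10):  # step 5: bounded fixpoint
    new = owl_closure(current) | run_heuristics(current)  # steps 3-4
    if new <= current:
        break  # no new triples generated
    current |= new

combined_focus = current - reference  # step 6: drop reference data + inferences
# step 7: subtract 'unwanted' inferences (see the filters above)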

Backends

rdflib and owlrl

With the rdflib backend, the owlrl package should be used.

This package has some foibles. For instance, it generates a slew of unnecessary triples. The easiest way to remove these is to first run inference over all reference vocabularies, then combine with the user-provided vocabularies and data, run inference, and then remove all the original inferences from the reference vocabularies from the final result. The reference vocabularies themselves can also be removed, depending on application.

Unwanted inferences are generated even when executed over an empty graph.
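The empty-graph behaviour is easy to observe with owlrl's entry points, DeductiveClosure and OWLRL_Semantics:

from rdflib import Graph
from owlrl import DeductiveClosure, OWLRL_Semantics

g = Graph()  # completely empty graph
DeductiveClosure(OWLRL_Semantics).expand(g)
print(len(g))  # per the note above, this is typically non-zero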

pyoxigraph

No experience with this yet.

Jena (riot etc.)

Because Jena provides a reference implementation, it might be useful to be able to call out to the Jena suite of command line utilities (like riot) for manipulation of the graphs (including inference).

Heuristics (SPARQL, Python, etc.)

Some inferences are difficult or impossible to express in OWL-RL. This will especially be the case for very project-specific inferences which are trivial to express procedurally but complicated in a logical declaration.

Therefore we want to support specification of 'heuristics' in other formalisms, like SPARQL CONSTRUCT queries and Python functions.

The order of application of these heuristics may matter - for instance, a SPARQL CONSTRUCT may create triples that are then used by a Python heuristic, or the former may require the full type hierarchy to be explicit from OWL-RL inference.

Thus, we apply heuristics and OWL-RL inference in alternating steps until no new triples are generated.
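For example, a project-specific heuristic expressed as a SPARQL CONSTRUCT and applied with rdflib (the ex: vocabulary is invented for illustration):

COLLABORATES = """
PREFIX ex: <http://example.org/>
CONSTRUCT { ?a ex:collaboratesWith ?b }
WHERE { ?a ex:worksOn ?project . ?b ex:worksOn ?project . FILTER(?a != ?b) }
"""

def apply_heuristic(g, construct_query: str) -> int:
    before = len(g)
    for triple in g.query(construct_query):  # CONSTRUCT results iterate as triples
        g.add(triple)
    return len(g) - before  # number of new triples generated this round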

Data Structures

DatasetView

Intended to give a restricted (filtered) view on a Dataset by only providing access to explicitly selected graphs, enabling easy handling of a subset of graphs without copying data to new graphs.

Specifications (a hypothetical usage sketch follows the list):

  1. A DatasetView may be read/write or readonly.
  2. Graphs MUST be explicitly included to be visible, otherwise they are excluded (and invisible).
  3. Attempted access to excluded graphs MUST raise a PermissionError.
  4. Any mechanism to retrieve triples (e.g.: iterating the view itself, or using triples() or using quads()) that does not explicitly specify a named graph (e.g. triples() called without a context argument) MUST return triples from all included graphs, not just the default graph.
  5. Default graph MUST therefore be excluded if the underlying Dataset has default_union set (because otherwise this would counterintuitively render triples from excluded graphs visible to the view).
  6. A DatasetView SHOULD otherwise operate in exactly the same way as the underlying Dataset.
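A hypothetical usage sketch of these rules; the import path and constructor signature are assumptions, not the actual API:

from rdflib import Dataset, URIRef
from pythinfer import DatasetView  # import path is a guess

ds = Dataset()
g1, g2 = URIRef("urn:example:g1"), URIRef("urn:example:g2")
ds.graph(g1).parse(data="<urn:a> <urn:b> <urn:c> .", format="turtle")
ds.graph(g2).parse(data="<urn:d> <urn:e> <urn:f> .", format="turtle")

view = DatasetView(ds, include=[g1])  # hypothetical constructor (spec 2)
print(list(view.triples((None, None, None))))  # only g1's triple visible (spec 4)
# view.graph(g2) would raise PermissionError: g2 is excluded (spec 3)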

Inclusion and Exclusion of Graphs

rdflib's handling of access, addition, and deletion of named graphs has some unintuitive nuance. See this issue for the most relevant example.

For the View, we want to adopt as little difference to APIs and expectations as possible, which unfortunately means taking on the unintuitive behaviours.

So, there are no methods for including or excluding a graph once a view is created, because the behaviour of such methods would be very difficult to define. If the set of included graphs needs to be changed, a new DatasetView should simply be created, which is lightweight because no copying is involved.

Adding and removing content

Adding a new graph is not possible through the View unless its identifier was in the list of included graphs at construction, because the View only allows access to included graphs. An identifier may be in the inclusion list yet have no corresponding triples in the underlying triplestore; this is allowed, and subsequently adding a triple against that graph identifier would, de facto, be the 'addition' of a graph to the store.

Removing a graph likewise behaves exactly as it would on the underlying Dataset, unless the graph's identifier is not in the inclusion list, in which case a PermissionError is raised. In either case, the graph remains in the inclusion list.

Adding and removing triples is possible (unless the View is set to read-only, which may not be implemented) as long as the triples are added to a graph in the inclusion list.

Adding or removing a triple without specifying the graph would go to the default graph and the same check applies: if the default graph is in the inclusion list, this is allowed, otherwise it will raise a PermissionError.

This is all following the principle of altering the API of Dataset as little as possible.

Real-World Usage

The example_projects folder contains contrived examples, but this has also been run over real data:

  1. foafPub
    1. takes a while, but successfully completes
    2. only infers 7 new useful triples, all deriving from an owl:sameAs link to an otherwise completely unconnected local id (treated as a blank node)
  2. starwars
    1. successfully completes, reasonable time
    2. infers 175 new triples from the basic starwars.ttl file, mainly that characters are of type voc:Mammal and voc:Sentient or voc:Artificial, etc.
      1. also funnily generates xsd:decimal owl:disjointWith xsd:string
    3. including summary.ttl doesn't change the inferences, which I think is correct.

Next Steps

  1. implement pattern support for input files
  2. check this handles non-turtle input files ok
  3. allow Python-coded inference rules (e.g. for path-traversal or network analytics)
    • also use of text / linguistic analysis would be a good motivation (e.g. infer that two projects are related if they share similar topics based on text analysis of abstracts)
  4. implement base_folder support - perhaps more generally support for specification of any folder variables...
  5. consider using a proper config language like Dhall(?) instead of yaml
  6. check and raise error or at least warning if default_union is set in underlying Dataset of DatasetView
  7. document and/or fix serialisation: canon longTurtle is not great with the way it orders things, so we might need to call out to riot unfortunately.
  8. add better output support for ASK query
  9. add option to remove project name from named graphs, for easier specification:
    1. e.g. <urn:pythinfer:inferences:owl> which is easy to remember and specify on command-line.
