A library to prototype inference engines with logical reasoning capabilities

Project description

Pie: Prototyping Inference Engine

Pie is a Python library for building inference engines. It allows rapid prototyping of software that requires logical reasoning capabilities.

The library supports:

  • Existential disjunctive rules (Disjunctive Datalog with existentially quantified variables)
  • First-order queries with conjunction, disjunction, negation, and quantifiers
  • Backward chaining (query rewriting)
  • DLGP parser (DLGPE version) with disjunction, negation, equality, sections, and IRI resolution for @base/@prefix (default for examples)
  • Computed predicates with Integraal standard functions via @computed
  • Knowledge bases and rule bases for grouping facts and rules
  • Prepared query interfaces and FOQuery factory helpers
  • IRI utilities for parsing, normalization, and base/prefix management
  • IO helpers with parsers and writers (DLGP export)

Installation

# editable install from a clone of the repository
pip install -e .

Requires Python 3.10+ (uses match/case syntax).

Progression

Module               Status  Description
API                  90%     Core classes: terms, atoms, formulas, queries, fact bases, ontologies
Data Abstraction     80%     ReadableData interface for heterogeneous data sources
Query Evaluation     85%     Evaluating first-order queries against data sources
DLGP Parser (DLGPE)  75%     Extended Datalog+- with negation, sections, and IRI resolution
Homomorphism         70%     Pattern matching with backtracking and indexing
Backward Chaining    90%     UCQ rewriting with disjunctive existential rules
Forward Chaining     0%      Not yet implemented

Quick Start

Parsing and Querying

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.api.fact_base.mutable_in_memory_fact_base import MutableInMemoryFactBase
from prototyping_inference_engine.query_evaluation.evaluator.fo_query_evaluators import GenericFOQueryEvaluator

# Parse facts and query (DLGP)
parser = DlgpeParser.instance()
result = parser.parse("""
    @facts
    p(a,b).
    p(b,c).
    p(c,d).

    @queries
    ?(X,Z) :- p(X,Y), p(Y,Z).
""")
facts = result["facts"]
query = result["queries"][0]

# Create fact base and evaluate
fact_base = MutableInMemoryFactBase(facts)
evaluator = GenericFOQueryEvaluator()

# Get results as substitutions
for sub in evaluator.evaluate(query, fact_base):
    print(sub)  # {X -> a, Y -> b, Z -> c}, etc.

# Or get projected tuples
for answer in evaluator.evaluate_and_project(query, fact_base):
    print(answer)  # (a, c), (b, d)

Using the Session API

from prototyping_inference_engine.session.reasoning_session import ReasoningSession
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

with ReasoningSession() as session:
    # Parse DLGP content
    parser = DlgpeParser.instance()
    result = parser.parse("""
        @facts
        p(a,b).
        p(b,c).

        @queries
        ?(X) :- p(a,X).
    """)

    # Create fact base and evaluate
    fb = session.create_fact_base(result["facts"])
    for answer in session.evaluate_query(result["queries"][0], fb):
        print(answer)  # (b,)

IRI Utilities

from prototyping_inference_engine.api.iri import (
    IRIManager,
    StandardComposableNormalizer,
    RFCNormalizationScheme,
)

manager = IRIManager(
    normalizer=StandardComposableNormalizer(RFCNormalizationScheme.STRING),
    iri_base="http://example.org/base/",
)
manager.set_prefix("ex", "http://example.org/ns/")

iri = manager.create_iri("ex:resource")
print(iri.recompose())  # http://example.org/ns/resource

Exporting DLGP

from prototyping_inference_engine.io.writers.dlgpe_writer import DlgpeWriter
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()
result = parser.parse("""
    @base <http://example.org/base/>.
    @prefix ex: <http://example.org/ns/>.
    <rel>(ex:obj).
""")

writer = DlgpeWriter()
print(writer.write(result))

Computed Predicates (@computed)

To load the standard functions, use the directive @computed <prefix>: <stdfct>. To load Python computed functions, use @computed <prefix>: <path/to/config.json>. The configuration format is documented in docs/usage.md.

% Arithmetic with the standard functions
@computed ig: <stdfct>.

@queries
?(X) :- ig:sum(1, X, 3).

% Collections (tuples, sets, dictionaries) with the standard functions
@computed ig: <stdfct>.

@queries
?(X) :- ig:get(ig:tuple(a, b, c), 1, X).
?(U) :- ig:union(ig:set(a, b), ig:set(b, c), U).
?(D) :- ig:dict(ig:tuple(a, b), ig:tuple(b, c), D).
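
The directive can appear in content passed to the DLGPE parser like any other section. Below is a minimal sketch that only parses such a document and retrieves the query object; whether it is then evaluated exactly as in the Quick Start is an assumption not shown here (see docs/usage.md for the full setup).

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()
result = parser.parse("""
    @computed ig: <stdfct>.

    @queries
    ?(X) :- ig:sum(1, X, 3).
""")

query = result["queries"][0]
print(query)  # the computed predicate ig:sum appears in the query body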

Architecture

Core API (api/)

  • Terms: Variable, Constant with flyweight caching
  • Atoms: Predicate + terms, implements Substitutable
  • Formulas: Atom, ConjunctionFormula, DisjunctionFormula, NegationFormula, ExistentialFormula, UniversalFormula
  • Queries: FOQuery wrapping formulas with answer variables
  • Fact Bases: MutableInMemoryFactBase, FrozenInMemoryFactBase
  • Rules & Ontology: Generic rules with disjunctive head support
  • Rule Bases & Knowledge Bases: Containers for rules, facts, and ontologies
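
Rather than instantiating these classes by hand, the quickest way to obtain terms, atoms, formulas, and rules is through the parser. A minimal sketch reusing only the parser calls documented in the Parser section below:

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()

# Atoms: a predicate applied to constant terms
atoms = list(parser.parse_atoms("p(a). q(a, b)."))

# A rule with a disjunctive head exercises the formula classes listed above
rules = list(parser.parse_rules("p(X) | q(X) :- r(X)."))

print(atoms)
print(rules)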

Data Abstraction (api/data/)

Abstraction layer for data sources (fact bases, SQL databases, REST APIs, etc.):

  • ReadableData: Abstract interface for queryable data sources
  • MaterializedData: Extension for fully iterable data sources
  • BasicQuery: Simple query with predicate, bound positions, and answer variables
  • AtomicPattern: Describes constraints for querying predicates (mandatory positions, type constraints)
  • PositionConstraint: Validators for term types at positions (GROUND, CONSTANT, VARIABLE, etc.)

Data sources declare their capabilities via AtomicPattern and implement evaluate(BasicQuery) returning tuples of terms. Evaluators handle variable mapping and post-processing.
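
As an illustration only, here is a hypothetical custom source. The evaluate(BasicQuery)-returns-tuples contract comes from the description above, but the class name, constructor, and the fact that the sketch ignores the query's bound positions are assumptions; a real implementation would subclass ReadableData and return Term objects.

from typing import Iterable, Iterator, Tuple


class InMemoryEdgeSource:  # hypothetical; would implement ReadableData in the real API
    """Serves a binary edge/2 predicate from a plain Python list."""

    def __init__(self, edges: Iterable[Tuple[str, str]]):
        self._edges = list(edges)

    def evaluate(self, query) -> Iterator[Tuple[str, str]]:
        # A real BasicQuery also carries bound positions and answer variables;
        # this sketch ignores them and streams every stored row.
        for source, target in self._edges:
            yield (source, target)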

Query Evaluation (query_evaluation/)

Hierarchical evaluator architecture:

QueryEvaluator[Q]
└── FOQueryEvaluator
    ├── AtomicFOQueryEvaluator
    ├── ConjunctiveFOQueryEvaluator
    ├── DisjunctiveFOQueryEvaluator
    ├── NegationFOQueryEvaluator
    ├── UniversalFOQueryEvaluator
    ├── ExistentialFOQueryEvaluator
    └── GenericFOQueryEvaluator (dispatches by formula type)

Each evaluator provides:

  • evaluate(query, data, substitution) → Iterator[Substitution]
  • evaluate_and_project(query, data, substitution) → Iterator[Tuple[Term, ...]]

Evaluators work with any ReadableData source, not just in-memory fact bases.
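
For example, because GenericFOQueryEvaluator dispatches on the query's formula, the same code path used in the Quick Start also evaluates a query with negation. A short sketch (the expected answer assumes the usual negation-as-failure reading):

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.api.fact_base.mutable_in_memory_fact_base import MutableInMemoryFactBase
from prototyping_inference_engine.query_evaluation.evaluator.fo_query_evaluators import GenericFOQueryEvaluator

parser = DlgpeParser.instance()
result = parser.parse("""
    @facts
    p(a). p(b). q(b).

    @queries
    ?(X) :- p(X), not q(X).
""")

fact_base = MutableInMemoryFactBase(result["facts"])
evaluator = GenericFOQueryEvaluator()

for answer in evaluator.evaluate_and_project(result["queries"][0], fact_base):
    print(answer)  # expected: (a,)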

Backward Chaining (backward_chaining/)

  • BreadthFirstRewriting - UCQ rewriting algorithm
  • PieceUnifierAlgorithm - computes most general piece unifiers
  • RewritingOperator - applies rules to queries
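
The disjunctive-rewriter CLI documented below is the entry point for rewriting. For completeness, here is a sketch of how the input would be prepared from Python; the rewriter usage itself is left commented out because its module path, constructor, and method names are assumptions, not part of the documented API.

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()
result = parser.parse("""
    @rules
    mortal(X) :- human(X).

    @queries
    ?(X) :- mortal(X).
""")

# Hypothetical (assumed import path, constructor, and method names):
# rewriter = BreadthFirstRewriting(result["rules"])
# for rewriting in rewriter.rewrite(result["queries"][0]):
#     print(rewriting)  # a rewriting covering human(X) is expected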

Parser (parser/)

DLGP (parser/dlgpe/)

Extended Datalog+- format with disjunction, negation, and sections (recommended).

Supported features:

Feature              Syntax                                   Example
Disjunction in head  |                                        p(X) | q(X) :- r(X).
Disjunction in body  |                                        h(X) :- p(X) | q(X).
Negation             not                                      h(X) :- p(X), not q(X).
Equality             =                                        ?(X,Y) :- p(X,Y), X = Y.
Sections             @facts, @rules, @queries, @constraints   Organize knowledge base
Labels               [name]                                   [rule1] h(X) :- b(X).
IRI directives       @base, @prefix                           @base <http://example.org/>.

Usage:

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeUnsupportedFeatureError

parser = DlgpeParser.instance()

# Parse DLGP content
result = parser.parse("""
    @facts
    person(alice).
    person(bob).
    knows(alice, bob).

    @rules
    [transitivity] knows(X, Z) :- knows(X, Y), knows(Y, Z).
    stranger(X, Y) :- person(X), person(Y), not knows(X, Y).

    @queries
    ?(X) :- knows(alice, X).
""")

facts = result["facts"]
rules = result["rules"]
queries = result["queries"]

# Parse specific elements
atoms = list(parser.parse_atoms("p(a). q(b)."))
rules = list(parser.parse_rules("h(X) :- b(X). p(X) | q(X) :- r(X)."))

Not supported: arithmetic expressions, comparison operators (<, >, etc.), @import, @view directives.

DLGP Files (.dlgp)

DLGP files use the .dlgp extension. This extended dialect (DLGPE) uses | for disjunction.

% Facts
p(a,b).

% Disjunctive rule
q(X) | r(Y) :- p(X,Y).

% Conjunctive query
?(X) :- p(X,Y), q(Y).

% Disjunctive query
?() :- (p(X), q(X)) | (r(X), s(X)).
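
To load such a file, one option is simply to read it and hand the text to the parser. A sketch (the example.dlgp filename is ours, and the result keys assume the sections used in the Quick Start):

from pathlib import Path

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()

# Read the .dlgp file and parse its contents as a single string
content = Path("example.dlgp").read_text(encoding="utf-8")
result = parser.parse(content)

facts = result["facts"]
rules = result["rules"]
queries = result["queries"]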

CLI Tools

# Query rewriter (DLGP syntax)
disjunctive-rewriter [file.dlgp] [-l LIMIT] [-v] [-m]

Running Tests

# All tests
python3 -m unittest discover -s prototyping_inference_engine -v

# Specific module
python3 -m unittest discover -s prototyping_inference_engine/query_evaluation -v

License

GNU General Public License v3 (GPLv3)

Download files

Download the file for your platform.

Source Distribution

prototyping_inference_engine-0.0.19.tar.gz (223.6 kB)

Uploaded Source

Built Distribution

prototyping_inference_engine-0.0.19-py3-none-any.whl (354.1 kB)

Uploaded Python 3

File details

Details for the file prototyping_inference_engine-0.0.19.tar.gz.

File hashes

Hashes for prototyping_inference_engine-0.0.19.tar.gz
Algorithm Hash digest
SHA256 0d0cabf8e4c8310f0c9bb4fd1a40fe86e1b5cb5cc645eb842ad9c8ed99fec6f0
MD5 b77a4835957a8edfcc41949782e836ad
BLAKE2b-256 bb697a87a91a022731c91ed47b51d20d89b33e5ef1ce8c272ba41396bd653b8e


Provenance

The following attestation bundles were made for prototyping_inference_engine-0.0.19.tar.gz:

Publisher: release.yml on guillaumeki/pie

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file prototyping_inference_engine-0.0.19-py3-none-any.whl.

File hashes

Hashes for prototyping_inference_engine-0.0.19-py3-none-any.whl
Algorithm Hash digest
SHA256 916f610dd38bdfe5c0e78aef1893dc81f8327947ffaeba3ea1e769f7103722f9
MD5 b6712170c5e5a857f00f1eda7e156126
BLAKE2b-256 82ae59aad0c89594ead0cd7633ac350816758cac4d70cf00ed6950a937a225da


Provenance

The following attestation bundles were made for prototyping_inference_engine-0.0.19-py3-none-any.whl:

Publisher: release.yml on guillaumeki/pie

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
