
A library to prototype inference engines with logical reasoning capabilities


Pie: Prototyping Inference Engine


Pie is a Python library for building inference engines. It allows rapid prototyping of software that requires logical reasoning capabilities.

The library supports:

  • Existential disjunctive rules (Disjunctive Datalog with existentially quantified variables)
  • First-order queries with conjunction, disjunction, negation, and quantifiers
  • Backward chaining (query rewriting)
  • Rule compilation (ID and hierarchical fragments) for accelerating rewriting and evaluation
  • Rule analysis for guarded/frontier-guarded/range-restricted/weakly-acyclic/sticky fragments
  • DLGP parser (DLGPE version) with disjunction, negation, equality, sections, and IRI resolution for @base/@prefix (default for examples)
  • View declarations and imports with @view and @import <*.vd> for virtual external sources
  • Computed predicates with the standard function library via @computed
  • Knowledge bases and rule bases for grouping facts and rules
  • Prepared query interfaces and FOQuery factory helpers
  • IRI utilities for parsing, normalization, and base/prefix management
  • IO helpers with parsers and writers (DLGP export)

Installation

pip install prototyping-inference-engine

For a development install from a source checkout:

pip install -e .

Requires Python 3.10+ (uses match/case syntax). CI runs on CPython 3.10, CPython 3.12, and PyPy 3.10.

Progression

| Module | Status | Description |
| ------ | ------ | ----------- |
| API | 90% | Core classes: terms, atoms, formulas, queries, fact bases, ontologies |
| Data Abstraction | 80% | ReadableData interface for heterogeneous data sources |
| Query Evaluation | 85% | Evaluating first-order queries against data sources |
| DLGP Parser (DLGPE) | 75% | Extended Datalog± with negation, sections, and IRI resolution |
| Homomorphism | 70% | Pattern matching with backtracking and indexing |
| Backward Chaining | 90% | UCQ rewriting with disjunctive existential rules |
| Forward Chaining | 85% | Chase with schedulers (naive/GRD/predicate), trigger strategies, stratified execution, lineage |
| Rule Analysis | 35% | PIE-native ruleset analysis over shared fixpoint data and declarative property implications |
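
The Forward Chaining row refers to the chase procedure. As a rough, self-contained illustration of the idea (a toy oblivious chase in plain Python, entirely independent of Pie's actual classes and schedulers), existential rule application can be sketched as:

```python
from itertools import count

_fresh = count()

def match(body, facts, sub=None):
    """Backtracking embedding of a list of body atoms into a set of facts.
    Atoms are tuples (predicate, term, ...); variables start with '?'."""
    sub = {} if sub is None else sub
    if not body:
        yield sub
        return
    atom = body[0]
    for fact in facts:
        if fact[0] != atom[0] or len(fact) != len(atom):
            continue
        s = dict(sub)
        if all(
            (t.startswith('?') and s.setdefault(t, v) == v) or t == v
            for t, v in zip(atom[1:], fact[1:])
        ):
            yield from match(body[1:], facts, s)

def naive_chase(facts, rules):
    """Oblivious chase: fire each (rule, substitution) trigger at most once.
    A rule is (body, head); head variables starting with '?!' are existential
    and get fresh nulls.  May not terminate on recursive existential rules."""
    facts, fired = set(facts), set()
    changed = True
    while changed:
        changed = False
        for i, (body, head) in enumerate(rules):
            for sub in list(match(body, facts)):
                key = (i, tuple(sorted(sub.items())))
                if key in fired:
                    continue
                fired.add(key)
                s = dict(sub)
                for atom in head:
                    for t in atom[1:]:
                        if t.startswith('?!'):
                            s.setdefault(t, f'_n{next(_fresh)}')
                facts |= {(a[0], *(s.get(t, t) for t in a[1:])) for a in head}
                changed = True
    return facts

# q(X) :- p(X,Y)   and   r(X,Z) :- q(X)  with Z existential
rules = [
    ([('p', '?X', '?Y')], [('q', '?X')]),
    ([('q', '?X')], [('r', '?X', '?!Z')]),
]
saturated = naive_chase({('p', 'a', 'b')}, rules)
print(('q', 'a') in saturated)  # True; saturated also holds an r(a, _) fact
```

Pie's actual chase adds schedulers, trigger strategies, stratification, and lineage on top of this basic saturation loop.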

Quick Start

Parsing and Querying

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.api.fact_base.mutable_in_memory_fact_base import MutableInMemoryFactBase
from prototyping_inference_engine.query_evaluation.evaluator.fo_query.fo_query_evaluators import (
    GenericFOQueryEvaluator,
)

# Parse facts and query (DLGP)
parser = DlgpeParser.instance()
result = parser.parse("""
    @facts
    p(a,b).
    p(b,c).
    p(c,d).

    @queries
    ?(X,Z) :- p(X,Y), p(Y,Z).
""")
facts = result["facts"]
query = result["queries"][0]

# Create fact base and evaluate
fact_base = MutableInMemoryFactBase(facts)
evaluator = GenericFOQueryEvaluator()

# Get results as substitutions
for sub in evaluator.evaluate(query, fact_base):
    print(sub)  # {X -> a, Y -> b, Z -> c}, etc.

# Or get projected tuples
for answer in evaluator.evaluate_and_project(query, fact_base):
    print(answer)  # (a, c), (b, d)

Using the Session API

from prototyping_inference_engine.session.reasoning_session import ReasoningSession
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

with ReasoningSession.create() as session:
    # Parse DLGP content
    parser = DlgpeParser.instance()
    result = parser.parse("""
        @facts
        p(a,b).
        p(b,c).

        @queries
        ?(X) :- p(a,X).
    """)

    # Create fact base and evaluate
    fb = session.create_fact_base(result["facts"])
    for answer in session.evaluate_query(result["queries"][0], fb):
        print(answer)  # (b,)

IRI Utilities

from prototyping_inference_engine.api.iri import (
    IRIManager,
    StandardComposableNormalizer,
    RFCNormalizationScheme,
)

manager = IRIManager(
    normalizer=StandardComposableNormalizer(RFCNormalizationScheme.STRING),
    iri_base="http://example.org/base/",
)
manager.set_prefix("ex", "http://example.org/ns/")

iri = manager.create_iri("ex:resource")
print(iri.recompose())  # http://example.org/ns/resource

Exporting DLGP

from prototyping_inference_engine.io.writers.dlgpe_writer import DlgpeWriter
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()
result = parser.parse("""
    @base <http://example.org/base/>.
    @prefix ex: <http://example.org/ns/>.
    <rel>(ex:obj).
""")

writer = DlgpeWriter()
print(writer.write(result))

Computed Predicates (@computed)

To load the standard function library, use the directive @computed <prefix>: <stdfct>. To load Python computed functions, point the directive at a configuration file instead: @computed <prefix>: <path/to/config.json>. The configuration format is documented in docs/usage.md.

@computed ig: <stdfct>.

@queries
?(X) :- ig:sum(1, X, 3).
?(X) :- ig:get(ig:tuple(a, b, c), 1, X).
?(U) :- ig:union(ig:set(a, b), ig:set(b, c), U).
?(D) :- ig:dict(ig:tuple(a, b), ig:tuple(b, c), D).

Analysing Rule Sets

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.rule_analysis import PropertyId, RuleAnalyser

rules = tuple(
    DlgpeParser.instance().parse_rules(
        """
        q(X, Y) :- p(X).
        r(X) :- q(X, Y), s(Y).
        """
    )
)

report = RuleAnalyser(rules).analyse(
    [PropertyId.RANGE_RESTRICTED, PropertyId.STICKY]
)
statuses = {
    property_id.value: report.get(property_id).status.value
    for property_id in (PropertyId.RANGE_RESTRICTED, PropertyId.STICKY)
}
print(statuses)

Expected output: {'range_restricted': 'violated', 'sticky': 'violated'}.

Architecture

Core API (api/)

  • Terms: Variable, Constant with flyweight caching
  • Atoms: Predicate + terms, implements Substitutable
  • Formulas: Atom, ConjunctionFormula, DisjunctionFormula, NegationFormula, ExistentialFormula, UniversalFormula
  • Queries: FOQuery wrapping formulas with answer variables
  • Fact Bases: MutableInMemoryFactBase, FrozenInMemoryFactBase
  • Rules & Ontology: Formula-based rules with disjunctive head support
  • Rule Bases & Knowledge Bases: Containers for rules, facts, and ontologies
  • GRD: Graph of Rule Dependencies (disjunctive heads + safe negation) with stratification strategies backed by igraph, including minimal-evaluation stratification
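
The GRD idea can be illustrated with a toy, simplified to positive, predicate-level dependencies (the function names and the layering scheme below are illustrative only; Pie's implementation also handles disjunctive heads and safe negation, and is backed by igraph):

```python
# Toy Graph of Rule Dependencies: rule j depends on rule i when a predicate
# produced by i's head occurs in j's body.  Strata are computed by
# longest-path layering, assuming the dependency graph is acyclic.
def grd_edges(rules):
    """rules: list of (body_predicates, head_predicates) sets."""
    edges = set()
    for i, (_, head_i) in enumerate(rules):
        for j, (body_j, _) in enumerate(rules):
            if head_i & body_j:
                edges.add((i, j))
    return edges

def strata(rules):
    """Assign each rule the length of the longest dependency chain into it."""
    edges = grd_edges(rules)
    level = {i: 0 for i in range(len(rules))}
    for _ in range(len(rules)):          # Bellman-Ford-style relaxation
        for (i, j) in edges:
            level[j] = max(level[j], level[i] + 1)
    return level

rules = [
    ({'p'}, {'q'}),       # r0: q(X) :- p(X)
    ({'q'}, {'r'}),       # r1: r(X) :- q(X)
    ({'p', 'r'}, {'s'}),  # r2: s(X) :- p(X), r(X)
]
print(strata(rules))  # {0: 0, 1: 1, 2: 2}
```

Evaluating stratum by stratum is what lets forward chaining run each rule group to saturation before rules that depend on it.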

Data Abstraction (api/data/)

Abstraction layer for data sources (fact bases, SQL databases, REST APIs, etc.):

  • ReadableData: Abstract interface for queryable data sources
  • MaterializedData: Extension for fully iterable data sources
  • BasicQuery: Simple query with predicate, bound positions, and answer variables
  • AtomicPattern: Describes constraints for querying predicates (mandatory positions, type constraints)
  • PositionConstraint: Validators for term types at positions (GROUND, CONSTANT, VARIABLE, etc.)

Data sources declare their capabilities via AtomicPattern and implement evaluate(BasicQuery) returning tuples of terms. Evaluators handle variable mapping and post-processing.
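
As a sketch of this pattern in plain Python (ToyQuery, ToyReadableData, and DictSource are simplified stand-ins for illustration, not Pie's actual classes or signatures):

```python
from abc import ABC, abstractmethod
from typing import Iterator, Tuple

class ToyQuery:
    """A predicate name plus a binding tuple: None marks an unbound position."""
    def __init__(self, predicate: str, bindings: Tuple):
        self.predicate, self.bindings = predicate, bindings

class ToyReadableData(ABC):
    @abstractmethod
    def evaluate(self, query: ToyQuery) -> Iterator[Tuple]:
        """Yield tuples of terms matching the query."""

class DictSource(ToyReadableData):
    """Adapter exposing a dict {predicate: set of tuples} as a data source."""
    def __init__(self, relations):
        self.relations = relations

    def evaluate(self, query: ToyQuery) -> Iterator[Tuple]:
        for row in self.relations.get(query.predicate, ()):
            # Keep rows whose values agree with every bound position
            if all(b is None or b == v for b, v in zip(query.bindings, row)):
                yield row

source = DictSource({'p': {('a', 'b'), ('b', 'c')}})
print(sorted(source.evaluate(ToyQuery('p', (None, None)))))  # [('a', 'b'), ('b', 'c')]
print(list(source.evaluate(ToyQuery('p', ('a', None)))))     # [('a', 'b')]
```

The same adapter shape works for a SQL table or a REST endpoint: only evaluate changes, while the evaluators on top stay untouched.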

Query Evaluation (query_evaluation/)

Hierarchical evaluator architecture:

QueryEvaluator[Q]
└── FOQueryEvaluator
    ├── AtomicFOQueryEvaluator
    ├── ConjunctiveFOQueryEvaluator
    ├── DisjunctiveFOQueryEvaluator
    ├── NegationFOQueryEvaluator
    ├── UniversalFOQueryEvaluator
    ├── ExistentialFOQueryEvaluator
    └── GenericFOQueryEvaluator (dispatches by formula type)

Each evaluator provides:

  • evaluate(query, data, substitution) → Iterator[Substitution]
  • evaluate_and_project(query, data, substitution) → Iterator[Tuple[Term, ...]]

Evaluators work with any ReadableData source, not just in-memory fact bases.

Rule Compilation (rule_compilation/)

  • ID compilation and hierarchical compilation for compiled preorders
  • Compatibility and unfolding helpers used in rewriting and evaluation

Backward Chaining (backward_chaining/)

  • BreadthFirstRewriting - UCQ rewriting algorithm
  • PieceUnifierAlgorithm - computes most general piece unifiers
  • RewritingOperator - applies rules to queries
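
The one-step idea behind rewriting can be sketched for the simplest case: rules with atomic, existential-free heads and variables already renamed apart. (Real piece unifiers also handle existential head variables and unify whole "pieces" of the query; everything below is an illustrative toy, not Pie's API.)

```python
# Atoms are tuples (predicate, term, ...); variables start with '?'.
def unify(a, b, sub=None):
    """Most general unifier of two atoms, or None if they do not unify."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    sub = dict(sub or {})
    for s, t in zip(a[1:], b[1:]):
        s, t = sub.get(s, s), sub.get(t, t)  # one-step dereference (toy)
        if s == t:
            continue
        if s.startswith('?'):
            sub[s] = t
        elif t.startswith('?'):
            sub[t] = s
        else:
            return None                       # two distinct constants
    return sub

def rewrite_step(query, rule):
    """Replace each query atom unifiable with the rule head by the rule body."""
    head, body = rule
    for k, atom in enumerate(query):
        sub = unify(atom, head)
        if sub is not None:
            apply = lambda a: (a[0], *(sub.get(t, t) for t in a[1:]))
            yield [apply(a) for a in query[:k] + body + query[k + 1:]]

rule = (('q', '?X'), [('p', '?X', '?Y')])   # q(X) :- p(X,Y)
query = [('q', 'a')]                         # ?() :- q(a)
print(list(rewrite_step(query, rule)))       # [[('p', 'a', '?Y')]]
```

BreadthFirstRewriting iterates this kind of step over a whole rule set, collecting the rewritten queries into a union of conjunctive queries (UCQ).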

Parser (parser/)

DLGP (parser/dlgpe/)

Extended Datalog± format with disjunction, negation, and sections (recommended).

Supported features:

  • Disjunction in head (|): p(X) | q(X) :- r(X).
  • Disjunction in body (|): h(X) :- p(X) | q(X).
  • Negation (not): h(X) :- p(X), not q(X).
  • Equality (=): ?(X,Y) :- p(X,Y), X = Y.
  • Comparison operators (<, >, <=, >=, !=): ?(X) :- p(X), X > 3.
  • Arithmetic expressions (+, -, *, /, **): ?(X) :- p(X + 1).
  • Sections (@facts, @rules, @queries, @constraints): organize the knowledge base
  • Labels ([name]): [rule1] h(X) :- b(X).
  • IRI directives (@base, @prefix): @base <http://example.org/>.
  • Imports (@import): @import <facts.dlgp>. and @import <views.vd>.
  • View declarations (@view): @view v:<views.vd>.

Usage:

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeUnsupportedFeatureError

parser = DlgpeParser.instance()

# Parse DLGP content
result = parser.parse("""
    @facts
    person(alice).
    person(bob).
    knows(alice, bob).

    @rules
    [transitivity] knows(X, Z) :- knows(X, Y), knows(Y, Z).
    stranger(X, Y) :- person(X), person(Y), not knows(X, Y).

    @queries
    ?(X) :- knows(alice, X).
""")

facts = result["facts"]
rules = result["rules"]
queries = result["queries"]

# Parse specific elements
atoms = list(parser.parse_atoms("p(a). q(b)."))
rules = list(parser.parse_rules("h(X) :- b(X). p(X) | q(X) :- r(X)."))

DLGP Files (.dlgp)

DLGP files use the .dlgp extension. This version uses | for disjunction.

% Facts
p(a,b).

% Disjunctive rule
q(X) | r(Y) :- p(X,Y).

% Conjunctive query
?(X) :- p(X,Y), q(Y).

% Disjunctive query
?() :- (p(X), q(X)) | (r(X), s(X)).

CLI Tools

# Query rewriter (DLGP syntax)
disjunctive-rewriter [file.dlgp] [-l LIMIT] [-v] [-m]

Running Tests

# All tests
python3 -m unittest discover -s prototyping_inference_engine -t . -v

# Specific module
python3 -m unittest discover -s prototyping_inference_engine/query_evaluation -v

License

GNU General Public License v3 (GPLv3)
