A library to prototype inference engines with logical reasoning capabilities

Pie : Prototyping Inference Engine

Pie is a Python library for building inference engines. It allows rapid prototyping of software that requires logical reasoning capabilities.

The library supports:

  • Existential disjunctive rules (Disjunctive Datalog with existentially quantified variables); see the example after this list
  • First-order queries with conjunction, disjunction, negation, and quantifiers
  • Backward chaining (query rewriting)
  • DLGP parser (DLGPE variant) with disjunction, negation, equality, sections, and IRI resolution for @base/@prefix (the format used by the examples in this README)
  • Computed predicates with Integraal standard functions via @computed
  • Knowledge bases and rule bases for grouping facts and rules
  • Prepared query interfaces and FOQuery factory helpers
  • IRI utilities for parsing, normalization, and base/prefix management
  • IO helpers with parsers and writers (DLGP export)
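
For example, a rule whose head contains a fresh (existentially quantified) variable and a rule with a disjunctive head can be parsed directly. A minimal sketch using only the parser calls shown later in this README:

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()

# Y appears only in the head of the first rule, so it is existentially
# quantified; the second rule has a disjunctive head.
rules = list(parser.parse_rules("""
    hasParent(X, Y), person(Y) :- person(X).
    employed(X) | student(X) :- adult(X).
"""))
for rule in rules:
    print(rule)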

Installation

# Development (editable) install from a source checkout
pip install -e .

Requires Python 3.10+ (the code uses match/case syntax).

Progression

| Module | Status | Description |
| --- | --- | --- |
| API | 90% | Core classes: terms, atoms, formulas, queries, fact bases, ontologies |
| Data Abstraction | 80% | ReadableData interface for heterogeneous data sources |
| Query Evaluation | 85% | Evaluating first-order queries against data sources |
| DLGP Parser (DLGPE) | 75% | Extended Datalog+- with negation, sections, and IRI resolution |
| Homomorphism | 70% | Pattern matching with backtracking and indexing |
| Backward Chaining | 90% | UCQ rewriting with disjunctive existential rules |
| Forward Chaining | 0% | Not yet implemented |

Quick Start

Parsing and Querying

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.api.fact_base.mutable_in_memory_fact_base import MutableInMemoryFactBase
from prototyping_inference_engine.query_evaluation.evaluator.fo_query_evaluators import GenericFOQueryEvaluator

# Parse facts and query (DLGP)
parser = DlgpeParser.instance()
result = parser.parse("""
    @facts
    p(a,b).
    p(b,c).
    p(c,d).

    @queries
    ?(X,Z) :- p(X,Y), p(Y,Z).
""")
facts = result["facts"]
query = result["queries"][0]

# Create fact base and evaluate
fact_base = MutableInMemoryFactBase(facts)
evaluator = GenericFOQueryEvaluator()

# Get results as substitutions
for sub in evaluator.evaluate(query, fact_base):
    print(sub)  # {X -> a, Y -> b, Z -> c}, etc.

# Or get projected tuples
for answer in evaluator.evaluate_and_project(query, fact_base):
    print(answer)  # (a, c), (b, d)

Using the Session API

from prototyping_inference_engine.session.reasoning_session import ReasoningSession
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

with ReasoningSession() as session:
    # Parse DLGP content
    parser = DlgpeParser.instance()
    result = parser.parse("""
        @facts
        p(a,b).
        p(b,c).

        @queries
        ?(X) :- p(a,X).
    """)

    # Create fact base and evaluate
    fb = session.create_fact_base(result["facts"])
    for answer in session.evaluate_query(result["queries"][0], fb):
        print(answer)  # (b,)

IRI Utilities

from prototyping_inference_engine.api.iri import (
    IRIManager,
    StandardComposableNormalizer,
    RFCNormalizationScheme,
)

manager = IRIManager(
    normalizer=StandardComposableNormalizer(RFCNormalizationScheme.STRING),
    iri_base="http://example.org/base/",
)
manager.set_prefix("ex", "http://example.org/ns/")

iri = manager.create_iri("ex:resource")
print(iri.recompose())  # http://example.org/ns/resource

Exporting DLGP

from prototyping_inference_engine.io.writers.dlgpe_writer import DlgpeWriter
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()
result = parser.parse("""
    @base <http://example.org/base/>.
    @prefix ex: <http://example.org/ns/>.
    <rel>(ex:obj).
""")

writer = DlgpeWriter()
print(writer.write(result))
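
Because the writer emits DLGPE text, its output can be fed straight back into the parser. A small round-trip sketch (assuming the exported text is itself valid DLGPE):

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.io.writers.dlgpe_writer import DlgpeWriter

parser = DlgpeParser.instance()
result = parser.parse("""
    @facts
    p(a,b).
    q(b).
""")

writer = DlgpeWriter()
exported = writer.write(result)    # DLGPE text
reparsed = parser.parse(exported)  # parse the exported text again
print(reparsed["facts"])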

Computed Predicates (@computed)

To load the Integraal standard functions, use the directive @computed <prefix>: <stdfct>. To load Python computed functions, use @computed <prefix>: <path/to/config.json> (each directive ends with a period, as in the examples below). The configuration format is documented in docs/usage.md.

@computed ig: <stdfct>.

@queries
?(X) :- ig:sum(1, X, 3).

@computed ig: <stdfct>.

@queries
?(X) :- ig:get(ig:tuple(a, b, c), 1, X).
?(U) :- ig:union(ig:set(a, b), ig:set(b, c), U).
?(D) :- ig:dict(ig:tuple(a, b), ig:tuple(b, c), D).
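
The following end-to-end sketch parses the first snippet above and evaluates its query with the evaluator from the Quick Start. Whether the generic evaluator resolves computed atoms without further configuration, and the argument convention of ig:sum, are assumptions here, not something this README states:

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.api.fact_base.mutable_in_memory_fact_base import MutableInMemoryFactBase
from prototyping_inference_engine.query_evaluation.evaluator.fo_query_evaluators import GenericFOQueryEvaluator

parser = DlgpeParser.instance()
result = parser.parse("""
    @computed ig: <stdfct>.

    @queries
    ?(X) :- ig:sum(1, X, 3).
""")

# No regular facts are needed for this purely computed query (assumption)
fact_base = MutableInMemoryFactBase(result["facts"])
evaluator = GenericFOQueryEvaluator()
for answer in evaluator.evaluate_and_project(result["queries"][0], fact_base):
    print(answer)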

Architecture

Core API (api/)

  • Terms: Variable, Constant with flyweight caching
  • Atoms: Predicate + terms, implements Substitutable
  • Formulas: Atom, ConjunctionFormula, DisjunctionFormula, NegationFormula, ExistentialFormula, UniversalFormula
  • Queries: FOQuery wrapping formulas with answer variables
  • Fact Bases: MutableInMemoryFactBase, FrozenInMemoryFactBase
  • Rules & Ontology: Generic rules with disjunctive head support
  • Rule Bases & Knowledge Bases: Containers for rules, facts, and ontologies
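
These classes are most easily exercised through the parser rather than constructed by hand; a minimal sketch restricted to calls already shown elsewhere in this README:

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.api.fact_base.mutable_in_memory_fact_base import MutableInMemoryFactBase

parser = DlgpeParser.instance()

# Atoms: predicate + terms (constants here)
atoms = list(parser.parse_atoms("knows(alice, bob). person(alice)."))
for atom in atoms:
    print(atom)

# Fact base grouping the atoms
fact_base = MutableInMemoryFactBase(atoms)

# A rule with a disjunctive head
rules = list(parser.parse_rules("friend(X, Y) | colleague(X, Y) :- knows(X, Y)."))
print(rules[0])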

Data Abstraction (api/data/)

Abstraction layer for data sources (fact bases, SQL databases, REST APIs, etc.):

  • ReadableData: Abstract interface for queryable data sources
  • MaterializedData: Extension for fully iterable data sources
  • BasicQuery: Simple query with predicate, bound positions, and answer variables
  • AtomicPattern: Describes constraints for querying predicates (mandatory positions, type constraints)
  • PositionConstraint: Validators for term types at positions (GROUND, CONSTANT, VARIABLE, etc.)

Data sources declare their capabilities via AtomicPattern and implement evaluate(BasicQuery) returning tuples of terms. Evaluators handle variable mapping and post-processing.
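
As an illustration of that contract only, here is a hypothetical, duck-typed data source backed by a plain dictionary. It is not the library's API: the real interface is ReadableData in api/data/, and its exact abstract methods, the AtomicPattern declaration, and the shape of BasicQuery are not reproduced here.

# Hypothetical sketch: a read-only source mapping predicate names to rows.
# A real implementation would subclass ReadableData, declare an AtomicPattern,
# and accept a BasicQuery instead of the (predicate, bound) pair used below.
class DictBackedSource:
    def __init__(self, tables):
        self._tables = tables  # e.g. {"capital": [("france", "paris")]}

    def evaluate(self, predicate, bound):
        """Yield rows matching the bound positions ({position: value})."""
        for row in self._tables.get(predicate, ()):
            if all(row[i] == value for i, value in bound.items()):
                yield row

source = DictBackedSource({"capital": [("france", "paris"), ("italy", "rome")]})
print(list(source.evaluate("capital", {0: "italy"})))  # [('italy', 'rome')]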

Query Evaluation (query_evaluation/)

Hierarchical evaluator architecture:

QueryEvaluator[Q]
└── FOQueryEvaluator
    ├── AtomicFOQueryEvaluator
    ├── ConjunctiveFOQueryEvaluator
    ├── DisjunctiveFOQueryEvaluator
    ├── NegationFOQueryEvaluator
    ├── UniversalFOQueryEvaluator
    ├── ExistentialFOQueryEvaluator
    └── GenericFOQueryEvaluator (dispatches by formula type)

Each evaluator provides:

  • evaluate(query, data, substitution) → Iterator[Substitution]
  • evaluate_and_project(query, data, substitution) → Iterator[Tuple[Term, ...]]

Evaluators work with any ReadableData source, not just in-memory fact bases.

Backward Chaining (backward_chaining/)

  • BreadthFirstRewriting - UCQ rewriting algorithm
  • PieceUnifierAlgorithm - computes most general piece unifiers
  • RewritingOperator - applies rules to queries

Parser (io/parsers/)

DLGP (io/parsers/dlgpe/)

Extended Datalog+- format with disjunction, negation, and sections (recommended).

Supported features:

  • Disjunction in head (|): p(X) | q(X) :- r(X).
  • Disjunction in body (|): h(X) :- p(X) | q(X).
  • Negation (not): h(X) :- p(X), not q(X).
  • Equality (=): ?(X,Y) :- p(X,Y), X = Y.
  • Sections (@facts, @rules, @queries, @constraints): organize the knowledge base
  • Labels ([name]): [rule1] h(X) :- b(X).
  • IRI directives (@base, @prefix): @base <http://example.org/>.

Usage:

from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeUnsupportedFeatureError

parser = DlgpeParser.instance()

# Parse DLGP content
result = parser.parse("""
    @facts
    person(alice).
    person(bob).
    knows(alice, bob).

    @rules
    [transitivity] knows(X, Z) :- knows(X, Y), knows(Y, Z).
    stranger(X, Y) :- person(X), person(Y), not knows(X, Y).

    @queries
    ?(X) :- knows(alice, X).
""")

facts = result["facts"]
rules = result["rules"]
queries = result["queries"]

# Parse specific elements
atoms = list(parser.parse_atoms("p(a). q(b)."))
rules = list(parser.parse_rules("h(X) :- b(X). p(X) | q(X) :- r(X)."))

Not supported: arithmetic expressions, comparison operators (<, >, etc.), and the @import and @view directives.

DLGP Files (.dlgp)

DLGP files use the .dlgp extension; disjunction is written with |.

% Facts
p(a,b).

% Disjunctive rule
q(X) | r(Y) :- p(X,Y).

% Conjunctive query
?(X) :- p(X,Y), q(Y).

% Disjunctive query
?() :- (p(X), q(X)) | (r(X), s(X)).
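
To load such a file, its text can simply be read from disk and handed to the parser (this sketch assumes a UTF-8 file named example.dlgp; a dedicated file-loading helper, if the library has one, is not shown in this README):

from pathlib import Path
from prototyping_inference_engine.io.parsers.dlgpe import DlgpeParser

parser = DlgpeParser.instance()
content = Path("example.dlgp").read_text(encoding="utf-8")
result = parser.parse(content)

print(result["facts"])
print(result["rules"])
print(result["queries"])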

CLI Tools

# Query rewriter (DLGP syntax)
disjunctive-rewriter [file.dlgp] [-l LIMIT] [-v] [-m]

Running Tests

# All tests
python3 -m unittest discover -s prototyping_inference_engine -v

# Specific module
python3 -m unittest discover -s prototyping_inference_engine/query_evaluation -v

License

GNU General Public License v3 (GPLv3)
