TensorLogic

Neural-symbolic AI framework unifying logical reasoning and tensor computation. TensorLogic bridges neural networks and symbolic reasoning through tensor operations, based on Pedro Domingos' Tensor Logic paper (arXiv:2510.12269).

Core Insight: Logical operations map directly to tensor operations:

  • Logical AND → Hadamard product
  • Logical OR → Maximum operation
  • Implications → max(1-a, b)
  • Quantifiers → Einsum summation with Heaviside step
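To make the mapping concrete, here is a minimal plain-NumPy sketch of these four correspondences (illustrative only; TensorLogic's own API is shown in later sections):

```python
# Illustrative sketch (plain NumPy, not the TensorLogic API): the
# logic-to-tensor mapping above, applied to 0/1 truth-value arrays.
import numpy as np

a = np.array([1., 1., 0., 0.])
b = np.array([1., 0., 1., 0.])

and_ab = a * b                     # AND  -> Hadamard product
or_ab = np.maximum(a, b)           # OR   -> maximum
implies_ab = np.maximum(1 - a, b)  # a -> b  ==  max(1-a, b)

# Quantifier: "exists y: R(x, y)" via einsum summation plus Heaviside step
R = np.array([[0., 1.], [0., 0.]])
exists_y = np.heaviside(np.einsum('xy->x', R), 0.)  # [1., 0.]
```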

Beyond Deduction: Enabling Generalization with Analogical Reasoning

TensorLogic's breakthrough capability: temperature-controlled reasoning that bridges pure logic and neural approximation.

Temperature   Behavior                   Use Case
T=0           Pure deductive inference   Verification, provable correctness, zero hallucinations
T=0.1-0.5     Cautious generalization    Robust inference with uncertainty
T=1.0         Analogical reasoning       Pattern completion, missing link prediction
T>1.0         Exploratory                Creative hypotheses, knowledge graph expansion

Why this matters: Standard logical solvers give you T=0 only. Standard neural networks give you T>0 only with no guarantees. TensorLogic gives you the entire spectrum—from mathematically provable deduction to neural-style generalization—in a unified framework.

from tensorlogic.api import reason

# Pure deduction: mathematically provable, zero hallucinations
result = reason('Grandparent(x, z)', temperature=0.0, ...)

# Analogical: can infer "likely grandparent" even with incomplete data
result = reason('Grandparent(x, z)', temperature=0.5, ...)
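One way to picture the temperature dial is as the sharpness of the truth function: a hard Heaviside step at T=0, relaxing into a sigmoid with slope 1/T for T>0. The sketch below is a plain-NumPy illustration of that idea, not TensorLogic's actual internals.

```python
# Illustrative sketch: how a temperature parameter can interpolate
# between a hard step (pure deduction) and a smooth sigmoid
# (analogical scoring). Not TensorLogic's internal mechanism.
import numpy as np

def truth(evidence: np.ndarray, temperature: float) -> np.ndarray:
    """Map raw evidence scores to truth values in [0, 1]."""
    if temperature == 0.0:
        return np.heaviside(evidence, 0.0)            # hard 0/1 decision
    return 1.0 / (1.0 + np.exp(-evidence / temperature))  # soft score

scores = np.array([-2.0, -0.1, 0.1, 2.0])
truth(scores, 0.0)  # array([0., 0., 1., 1.])  -- strict deduction
truth(scores, 0.5)  # smooth values; weak evidence earns partial truth
```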

This capability is theoretically grounded in Pedro Domingos' Tensor Logic paper (arXiv:2510.12269). For a deep dive on temperature semantics, see the Temperature-Controlled Inference Guide.

Quick Start

Installation

# Basic installation (NumPy backend)
uv add tensorlogic

# Recommended: add the MLX backend on Apple Silicon
# (quote the specifier so the shell does not treat >= as a redirect)
uv add tensorlogic "mlx>=0.30.0"

Performance Architecture

TensorLogic is built for scale. The MLX backend enables 1M+ entity knowledge graphs on Apple Silicon:

from tensorlogic.backends import create_backend

# Auto-selects MLX (GPU) on Apple Silicon, NumPy fallback elsewhere
backend = create_backend()  # ← This step selects your hardware backend

Backend   Hardware                    Key Advantage
MLX       Apple Silicon (M1/M2/M3)    Unified memory + Metal GPU, lazy evaluation
NumPy     Universal CPU               Compatibility fallback

The MLX backend's lazy evaluation enables 10-100x speedups for complex knowledge graph queries. See Performance Benchmarks for detailed metrics.

Logical Reasoning in Tensors

from tensorlogic.core import logical_and, logical_or, logical_not, logical_implies
from tensorlogic.core.quantifiers import exists, forall
from tensorlogic.backends import create_backend

backend = create_backend()

# Define relations as tensors (family knowledge graph)
# Rows = subject, Columns = object
parent = backend.asarray([
    [0., 1., 1., 0.],  # Alice is parent of Bob, Carol
    [0., 0., 0., 1.],  # Bob is parent of David
    [0., 0., 0., 0.],  # Carol has no children
    [0., 0., 0., 0.],  # David has no children
])

# Infer grandparent: exists y: Parent(x,y) AND Parent(y,z)
# Using einsum: sum over intermediate variable y
composition = backend.einsum('xy,yz->xz', parent, parent)
grandparent = backend.step(composition)  # Alice is grandparent of David

# Quantified query: "Does Alice have any children?"
has_children = exists(parent[0, :], backend=backend)  # True

# Logical implication Parent(x,y) -> Ancestor(x,y), computed
# element-wise as max(1-a, b) per the implication mapping above
ancestor = logical_implies(parent, parent, backend=backend)

Knowledge Graph Reasoning

TensorLogic's flagship capability: neural-symbolic reasoning over knowledge graphs with temperature-controlled inference.

from tensorlogic.api import quantify, reason

# Pattern-based quantified queries
result = quantify(
    'exists y: Parent(x, y) and Parent(y, z)',
    predicates={'Parent': parent_tensor},
    backend=backend
)

# Temperature-controlled reasoning
# T=0: Pure deductive (no hallucinations)
# T>0: Analogical reasoning (generalization)
inference = reason(
    'Grandparent(x, z)',
    bindings={'x': alice_idx, 'z': david_idx},
    temperature=0.0,  # Strict deductive mode
    backend=backend
)

Comprehensive Example

Run the full knowledge graph reasoning example:

uv run python examples/knowledge_graph_reasoning.py

Demonstrates:

  • Family knowledge graph with 8 entities and 4 relation types
  • Logical operations: AND, OR, NOT, IMPLIES
  • Relation inference: Grandparent, Aunt/Uncle rules via implication
  • Quantified queries: EXISTS ("has children?"), FORALL ("loves all?")
  • Temperature control: T=0 deductive vs T>0 analogical reasoning
  • Compilation strategy comparison across 5 semantic modes
  • Uncertain knowledge handling with fuzzy relations

See examples/README.md for detailed documentation.

Compilation Strategies

TensorLogic supports multiple semantic interpretations—choose based on your problem, not your logic background:

soft_differentiable — Train neural networks that respect logical rules

Problem: "I want to train a model where the loss includes logical constraints"
Example: Learning embeddings where Parent(x,y) ∧ Parent(y,z) → Grandparent(x,z) is enforced during training

hard_boolean — Provable, exact inference

Problem: "I need mathematically guaranteed answers with no approximation"
Example: Verifying that a knowledge graph satisfies business rules (integrates with Lean 4 verification)

godel — Score similarity on a continuous spectrum

Problem: "I need a grade (0.0-1.0), not just true/false"
Example: Scoring product similarity in a recommendation engine

product — Probabilistic reasoning with independent events

Problem: "I'm combining probabilities and want P(A∧B) = P(A) × P(B)"
Example: Computing joint probabilities in a Bayesian knowledge graph

lukasiewicz — Bounded arithmetic with saturation

Problem: "I need bounded confidence scores that don't explode"
Example: Multi-hop reasoning where confidence degrades gracefully
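For intuition, the godel, product, and lukasiewicz modes correspond to the three classical fuzzy conjunctions (t-norms). A plain-NumPy sketch, with illustrative function names that are not part of the TensorLogic API:

```python
# Illustrative sketch of the three fuzzy AND semantics behind the
# godel / product / lukasiewicz modes (plain NumPy, hypothetical names).
import numpy as np

def godel_and(a, b):        # Gödel t-norm: minimum
    return np.minimum(a, b)

def product_and(a, b):      # product t-norm: independent probabilities
    return a * b

def lukasiewicz_and(a, b):  # Łukasiewicz t-norm: bounded, saturating
    return np.maximum(0.0, a + b - 1.0)

a, b = np.array([0.9]), np.array([0.8])
godel_and(a, b)        # [0.8]
product_and(a, b)      # ~[0.72]
lukasiewicz_and(a, b)  # ~[0.7]
```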

Strategy             Differentiable   Best For
soft_differentiable  Yes              Neural network training with logic constraints
hard_boolean         No               Exact verification, theorem proving
godel                Yes              Similarity scoring, fuzzy matching
product              Yes              Probabilistic inference
lukasiewicz          Yes              Bounded multi-hop reasoning

from tensorlogic.compilation import create_strategy

# Choose based on your problem
strategy = create_strategy("soft_differentiable")  # Training with logic constraints
strategy = create_strategy("hard_boolean")         # Exact verification
strategy = create_strategy("godel")                # Continuous scoring

See Compilation Strategies Guide for detailed API reference and mathematical semantics.

API Reference

Core Operations

from tensorlogic.core import logical_and, logical_or, logical_not, logical_implies

# Element-wise logical operations on tensors
result = logical_and(a, b, backend=backend)      # a AND b
result = logical_or(a, b, backend=backend)       # a OR b
result = logical_not(a, backend=backend)         # NOT a
result = logical_implies(a, b, backend=backend)  # a -> b

Quantifiers

from tensorlogic.core.quantifiers import exists, forall

# Existential: "exists x such that P(x)"
result = exists(predicate, axis=0, backend=backend)

# Universal: "for all x, P(x)"
result = forall(predicate, axis=0, backend=backend)
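Under the tensor reading, quantifiers are axis reductions: exists is a max over the quantified axis and forall is a min. A minimal NumPy sketch of the idea (not the library's implementation):

```python
# Illustrative sketch: quantifiers as axis reductions over 0/1 tensors.
import numpy as np

P = np.array([[1., 1., 0.],
              [1., 1., 1.]])

exists_x = P.max(axis=0)  # "exists x: P(x, y)" -> [1., 1., 1.]
forall_x = P.min(axis=0)  # "forall x: P(x, y)" -> [1., 1., 0.]
```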

High-Level Pattern API

from tensorlogic.api import quantify, reason

# Pattern-based quantified queries
result = quantify(
    'forall x: P(x) -> Q(x)',
    predicates={'P': predicate_p, 'Q': predicate_q},
    backend=backend
)

# Temperature-controlled reasoning
result = reason(
    'exists y: Related(x, y) and HasProperty(y)',
    bindings={'x': entity_batch},
    temperature=0.0,  # 0.0 = deductive, >0 = analogical
    backend=backend
)

Backend System

TensorLogic uses a minimal Protocol-based abstraction (~25-30 operations) supporting multiple tensor frameworks. See Performance Architecture for hardware selection.

from tensorlogic.backends import create_backend

# Explicit backend selection
numpy_backend = create_backend("numpy")
mlx_backend = create_backend("mlx")

MLX Lazy Evaluation: Operations are not computed until backend.eval(result) is called—critical for batching complex knowledge graph queries.
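The deferred-computation pattern behind this can be sketched in a few lines of plain Python (a toy graph, not MLX itself): each operation records a thunk, and nothing runs until eval() forces the pending graph.

```python
# Toy sketch of lazy evaluation (plain Python, not MLX): operations
# build thunks; eval() forces the whole pending graph in one pass.
class Lazy:
    def __init__(self, thunk):
        self._thunk = thunk
        self._value = None

    def eval(self):
        if self._value is None:
            self._value = self._thunk()  # compute only on demand
        return self._value

def lazy_add(a: "Lazy", b: "Lazy") -> "Lazy":
    return Lazy(lambda: a.eval() + b.eval())

x = Lazy(lambda: 2)
y = Lazy(lambda: 3)
z = lazy_add(x, lazy_add(y, y))  # nothing computed yet
z.eval()                          # -> 8
```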

Protocol Operations:

  • Creation: zeros, ones, arange, full, asarray
  • Transformation: reshape, broadcast_to, transpose, squeeze, expand_dims
  • Operations: einsum, maximum, add, subtract, multiply, divide, matmul
  • Reductions: sum, max, min, mean, prod
  • Utilities: eval, step, clip, abs, exp, log, sqrt, power, astype
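A Protocol-based backend of this kind can be sketched as follows (a deliberately tiny illustration with three operations; the real TensorBackend Protocol covers the full list above):

```python
# Minimal sketch of a Protocol-based backend abstraction (illustrative;
# the real TensorBackend Protocol defines ~25-30 operations).
from typing import Any, Protocol

import numpy as np

class TensorBackend(Protocol):
    def asarray(self, data: Any) -> Any: ...
    def einsum(self, spec: str, *operands: Any) -> Any: ...
    def step(self, x: Any) -> Any: ...

class NumpyBackend:
    def asarray(self, data):
        return np.asarray(data)

    def einsum(self, spec, *operands):
        return np.einsum(spec, *operands)

    def step(self, x):
        return np.heaviside(x, 0.0)  # Heaviside step: x > 0 -> 1

backend: TensorBackend = NumpyBackend()  # structural typing: no subclassing
```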

See docs/backends/API.md for complete API reference.

Project Status

Current Phase: Production Ready

Completed:

  • BACKEND-001: TensorBackend Protocol with MLX + NumPy (PR #6)
  • CORE-001: Logical Operations & Quantifiers (PR #7)
  • API-001: Pattern Language & Compilation (PR #8)
  • VERIF-001: Lean 4 Verification Bridge (15 theorems proven)
  • RAG-001: Integration module with LangChain adapter
  • 1,257 tests, 99%+ pass rate, 100% type coverage

Features:

  • Sparse tensor support for 1M+ entity knowledge graphs
  • LangChain-compatible retriever with hybrid neural-symbolic scoring
  • 4 Jupyter notebooks for interactive learning
  • Benchmark suite for scale validation

See docs/tutorials/index.md for tutorials and docs/research/rag-goals.md for research roadmap.

Development

Running Tests

# All tests
uv run pytest

# With coverage
uv run pytest --cov=tensorlogic --cov-report=html

# Specific component
uv run pytest tests/test_core/
uv run pytest tests/test_backends/
uv run pytest tests/test_api/
uv run pytest tests/test_integrations/

Type Checking

uv run mypy --strict src/tensorlogic/
# Current status: 0 errors

Code Quality

uv run ruff check .   # Linting
uv run ruff format .  # Formatting

License

MIT License
