# TensorLogic

Neural-symbolic AI framework unifying logical reasoning and tensor computation. TensorLogic bridges neural networks and symbolic reasoning through tensor operations, based on Pedro Domingos' Tensor Logic paper (arXiv:2510.12269).
**Core insight:** logical operations map directly to tensor operations:

- Logical AND → Hadamard product
- Logical OR → maximum operation
- Implication → max(1 − a, b)
- Quantifiers → einsum summation with Heaviside step
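These mappings are plain tensor algebra; a minimal NumPy sketch (illustrating the math itself, not the TensorLogic API):

```python
import numpy as np

a = np.array([0., 1., 0., 1.])
b = np.array([0., 0., 1., 1.])

and_ab = a * b                     # logical AND: Hadamard product
or_ab = np.maximum(a, b)           # logical OR: elementwise maximum
implies_ab = np.maximum(1 - a, b)  # implication: max(1 - a, b)

# Existential quantifier: sum out a variable, then apply a Heaviside step
parent = np.array([[0., 1.],
                   [0., 0.]])
exists_child = np.heaviside(parent.sum(axis=1), 0.)  # [1., 0.]
```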
## Beyond Deduction: Enabling Generalization with Analogical Reasoning
TensorLogic's breakthrough capability: temperature-controlled reasoning that bridges pure logic and neural approximation.
| Temperature | Behavior | Use Case |
|---|---|---|
| T=0 | Pure deductive inference | Verification, provable correctness, zero hallucinations |
| T=0.1-0.5 | Cautious generalization | Robust inference with uncertainty |
| T=1.0 | Analogical reasoning | Pattern completion, missing link prediction |
| T>1.0 | Exploratory | Creative hypotheses, knowledge graph expansion |
Why this matters: Standard logical solvers give you T=0 only. Standard neural networks give you T>0 only with no guarantees. TensorLogic gives you the entire spectrum—from mathematically provable deduction to neural-style generalization—in a unified framework.
```python
from tensorlogic.api import reason

# Pure deduction: mathematically provable, zero hallucinations
result = reason('Grandparent(x, z)', temperature=0.0, ...)

# Analogical: can infer "likely grandparent" even with incomplete data
result = reason('Grandparent(x, z)', temperature=0.5, ...)
```
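The spectrum above can be pictured as a temperature-scaled relaxation of the Heaviside step. The sketch below is illustrative only (`soft_step` is a hypothetical helper, not TensorLogic's internal implementation), but it shows how T=0 recovers hard 0/1 deduction while T>0 yields graded scores:

```python
import numpy as np

def soft_step(x, temperature=0.0, threshold=0.5):
    """Hard Heaviside step at T=0; increasingly soft sigmoid as T grows."""
    if temperature == 0.0:
        return np.heaviside(x - threshold, 0.0)  # pure deduction: 0/1 only
    return 1.0 / (1.0 + np.exp(-(x - threshold) / temperature))

scores = np.array([0.1, 0.49, 0.51, 0.9])
print(soft_step(scores, temperature=0.0))  # [0. 0. 1. 1.]
print(soft_step(scores, temperature=0.5))  # graded confidence in (0, 1)
```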
This capability is theoretically grounded in Pedro Domingos' Tensor Logic paper (arXiv:2510.12269). For a deep dive on temperature semantics, see the Temperature-Controlled Inference Guide.
## Quick Start

### Installation

```bash
# Basic installation (NumPy backend)
uv add python-tensorlogic

# Recommended (MLX backend for Apple Silicon)
uv add python-tensorlogic "mlx>=0.30.0"
```
## Performance Architecture

TensorLogic is built for scale. The MLX backend enables 1M+ entity knowledge graphs on Apple Silicon:

```python
from tensorlogic.backends import create_backend

# Auto-selects MLX (GPU) on Apple Silicon, with NumPy fallback elsewhere
backend = create_backend()
```
| Backend | Hardware | Key Advantage |
|---|---|---|
| MLX | Apple Silicon (M1/M2/M3) | Unified memory + Metal GPU, lazy evaluation |
| NumPy | Universal CPU | Compatibility fallback |
The MLX backend's lazy evaluation enables 10-100x speedups for complex knowledge graph queries. See Performance Benchmarks for detailed metrics.
## Logical Reasoning in Tensors

```python
from tensorlogic.core import logical_and, logical_or, logical_not, logical_implies
from tensorlogic.core.quantifiers import exists, forall
from tensorlogic.backends import create_backend

backend = create_backend()

# Define relations as tensors (family knowledge graph)
# Rows = subject, columns = object
parent = backend.asarray([
    [0., 1., 1., 0.],  # Alice is parent of Bob, Carol
    [0., 0., 0., 1.],  # Bob is parent of David
    [0., 0., 0., 0.],  # Carol has no children
    [0., 0., 0., 0.],  # David has no children
])

# Infer grandparent: exists y: Parent(x, y) AND Parent(y, z)
# Using einsum: sum over the intermediate variable y
composition = backend.einsum('xy,yz->xz', parent, parent)
grandparent = backend.step(composition)  # Alice is grandparent of David

# Quantified query: "Does Alice have any children?"
has_children = exists(parent[0, :], backend=backend)  # True

# Logical implication: Parent(x, y) -> Ancestor(x, y)
ancestor = logical_implies(parent, parent, backend=backend)
```
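The grandparent inference above can be reproduced with plain NumPy, making the einsum-plus-step pattern concrete (a standalone sketch, independent of the TensorLogic backend):

```python
import numpy as np

parent = np.array([
    [0., 1., 1., 0.],  # Alice is parent of Bob, Carol
    [0., 0., 0., 1.],  # Bob is parent of David
    [0., 0., 0., 0.],
    [0., 0., 0., 0.],
])

# exists y: Parent(x, y) AND Parent(y, z): contract over y, then threshold
composition = np.einsum('xy,yz->xz', parent, parent)
grandparent = np.heaviside(composition, 0.0)

print(grandparent[0, 3])  # 1.0: Alice is a grandparent of David
```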
## Knowledge Graph Reasoning
TensorLogic's flagship capability: neural-symbolic reasoning over knowledge graphs with temperature-controlled inference.
```python
from tensorlogic.api import quantify, reason

# Pattern-based quantified queries
result = quantify(
    'exists y: Parent(x, y) and Parent(y, z)',
    predicates={'Parent': parent_tensor},
    backend=backend,
)

# Temperature-controlled reasoning
# T=0: pure deductive (no hallucinations)
# T>0: analogical reasoning (generalization)
inference = reason(
    'Grandparent(x, z)',
    bindings={'x': alice_idx, 'z': david_idx},
    temperature=0.0,  # strict deductive mode
    backend=backend,
)
```
### Comprehensive Example

Run the full knowledge graph reasoning example:

```bash
uv run python examples/knowledge_graph_reasoning.py
```
Demonstrates:
- Family knowledge graph with 8 entities and 4 relation types
- Logical operations: AND, OR, NOT, IMPLIES
- Relation inference: Grandparent, Aunt/Uncle rules via implication
- Quantified queries: EXISTS ("has children?"), FORALL ("loves all?")
- Temperature control: T=0 deductive vs T>0 analogical reasoning
- Compilation strategy comparison across 5 semantic modes
- Uncertain knowledge handling with fuzzy relations
See examples/README.md for detailed documentation.
## Compilation Strategies
TensorLogic supports multiple semantic interpretations—choose based on your problem, not your logic background:
**`soft_differentiable`** — Train neural networks that respect logical rules

- Problem: "I want to train a model where the loss includes logical constraints"
- Example: Learning embeddings where Parent(x,y) ∧ Parent(y,z) → Grandparent(x,z) is enforced during training

**`hard_boolean`** — Provable, exact inference

- Problem: "I need mathematically guaranteed answers with no approximation"
- Example: Verifying that a knowledge graph satisfies business rules (integrates with Lean 4 verification)

**`godel`** — Score similarity on a continuous spectrum

- Problem: "I need a grade (0.0-1.0), not just true/false"
- Example: Scoring product similarity in a recommendation engine

**`product`** — Probabilistic reasoning with independent events

- Problem: "I'm combining probabilities and want P(A∧B) = P(A) × P(B)"
- Example: Computing joint probabilities in a Bayesian knowledge graph

**`lukasiewicz`** — Bounded arithmetic with saturation

- Problem: "I need bounded confidence scores that don't explode"
- Example: Multi-hop reasoning where confidence degrades gracefully
| Strategy | Differentiable | Best For |
|---|---|---|
| `soft_differentiable` | Yes | Neural network training with logic constraints |
| `hard_boolean` | No | Exact verification, theorem proving |
| `godel` | Yes | Similarity scoring, fuzzy matching |
| `product` | Yes | Probabilistic inference |
| `lukasiewicz` | Yes | Bounded multi-hop reasoning |
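The AND (conjunction) semantics behind these strategies can be sketched in a few lines of NumPy. These are the standard t-norm formulas, shown for illustration; how TensorLogic composes them internally may differ:

```python
import numpy as np

a, b = np.array([0.8]), np.array([0.6])

# hard_boolean: threshold first, then exact Boolean AND
hard = np.heaviside(a - 0.5, 0.) * np.heaviside(b - 0.5, 0.)

product = a * b                           # product t-norm: P(A)P(B), ~0.48
godel = np.minimum(a, b)                  # Goedel t-norm: min(a, b) = 0.6
lukasiewicz = np.maximum(0., a + b - 1.)  # Lukasiewicz: max(0, a+b-1), ~0.4
```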
```python
from tensorlogic.compilation import create_strategy

# Choose based on your problem
strategy = create_strategy("soft_differentiable")  # training with logic constraints
strategy = create_strategy("hard_boolean")         # exact verification
strategy = create_strategy("godel")                # continuous scoring
```
See Compilation Strategies Guide for detailed API reference and mathematical semantics.
## API Reference

### Core Operations

```python
from tensorlogic.core import logical_and, logical_or, logical_not, logical_implies

# Element-wise logical operations on tensors
result = logical_and(a, b, backend=backend)      # a AND b
result = logical_or(a, b, backend=backend)       # a OR b
result = logical_not(a, backend=backend)         # NOT a
result = logical_implies(a, b, backend=backend)  # a -> b
```
### Quantifiers

```python
from tensorlogic.core.quantifiers import exists, forall

# Existential: "exists x such that P(x)"
result = exists(predicate, axis=0, backend=backend)

# Universal: "for all x, P(x)"
result = forall(predicate, axis=0, backend=backend)
```
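For Boolean 0/1 tensors, these quantifiers reduce to max/min reductions over the quantified axis; a plain-NumPy sketch of the idea:

```python
import numpy as np

# loves[i, j] = 1.0 if person i loves person j
loves = np.array([
    [1., 1., 1.],  # person 0 loves everyone
    [0., 1., 0.],  # person 1 loves only person 1
])

exists_loved = loves.max(axis=1)  # "exists y: Loves(x, y)" -> [1., 1.]
loves_all = loves.min(axis=1)     # "forall y: Loves(x, y)" -> [1., 0.]
```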
### High-Level Pattern API

```python
from tensorlogic.api import quantify, reason

# Pattern-based quantified queries
result = quantify(
    'forall x: P(x) -> Q(x)',
    predicates={'P': predicate_p, 'Q': predicate_q},
    backend=backend,
)

# Temperature-controlled reasoning
result = reason(
    'exists y: Related(x, y) and HasProperty(y)',
    bindings={'x': entity_batch},
    temperature=0.0,  # 0.0 = deductive, >0 = analogical
    backend=backend,
)
```
### Backend System

TensorLogic uses a minimal Protocol-based abstraction (~25-30 operations) supporting multiple tensor frameworks. See Performance Architecture for hardware selection.

```python
from tensorlogic.backends import create_backend

# Explicit backend selection
numpy_backend = create_backend("numpy")
mlx_backend = create_backend("mlx")
```
**MLX lazy evaluation:** operations are not computed until `backend.eval(result)` is called—critical for batching complex knowledge graph queries.
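The idea behind lazy evaluation can be illustrated with a toy deferred-computation wrapper (an analogy only; MLX builds and fuses a computation graph natively):

```python
class Lazy:
    """Toy deferred computation: expressions are built now, evaluated on demand."""

    def __init__(self, fn):
        self.fn = fn  # zero-argument callable producing the value

    def __add__(self, other):
        # Composing expressions performs no work yet
        return Lazy(lambda: self.fn() + other.fn())

    def eval(self):
        return self.fn()  # work happens only here

expr = Lazy(lambda: 2) + Lazy(lambda: 3)  # nothing computed yet
print(expr.eval())  # 5
```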
Protocol Operations:

- Creation: `zeros`, `ones`, `arange`, `full`, `asarray`
- Transformation: `reshape`, `broadcast_to`, `transpose`, `squeeze`, `expand_dims`
- Operations: `einsum`, `maximum`, `add`, `subtract`, `multiply`, `divide`, `matmul`
- Reductions: `sum`, `max`, `min`, `mean`, `prod`
- Utilities: `eval`, `step`, `clip`, `abs`, `exp`, `log`, `sqrt`, `power`, `astype`
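A Protocol-based backend means any class providing these operations qualifies structurally, with no inheritance required. The sketch below uses hypothetical names to show the pattern (the authoritative Protocol lives in `docs/backends/API.md`):

```python
from typing import Any, Protocol

import numpy as np

class ArrayBackend(Protocol):
    """Structural interface: any class with these methods qualifies."""
    def asarray(self, data: Any) -> Any: ...
    def einsum(self, spec: str, *operands: Any) -> Any: ...
    def maximum(self, a: Any, b: Any) -> Any: ...

class NumpyBackend:
    """Satisfies ArrayBackend structurally, without subclassing it."""
    def asarray(self, data: Any) -> Any:
        return np.asarray(data)
    def einsum(self, spec: str, *operands: Any) -> Any:
        return np.einsum(spec, *operands)
    def maximum(self, a: Any, b: Any) -> Any:
        return np.maximum(a, b)

def compose(backend: ArrayBackend, relation: Any) -> Any:
    # Backend-agnostic relation composition over the shared variable
    return backend.einsum('xy,yz->xz', relation, relation)
```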
See docs/backends/API.md for complete API reference.
## Project Status
Current Phase: Production Ready
Completed:
- BACKEND-001: TensorBackend Protocol with MLX + NumPy (PR #6)
- CORE-001: Logical Operations & Quantifiers (PR #7)
- API-001: Pattern Language & Compilation (PR #8)
- VERIF-001: Lean 4 Verification Bridge (15 theorems proven)
- RAG-001: Integration module with LangChain adapter
- 1,257 tests, 99%+ pass rate, 100% type coverage
Features:
- Sparse tensor support for 1M+ entity knowledge graphs
- LangChain-compatible retriever with hybrid neural-symbolic scoring
- 4 Jupyter notebooks for interactive learning
- Benchmark suite for scale validation
See docs/tutorials/index.md for tutorials and docs/research/rag-goals.md for research roadmap.
## Development

### Running Tests

```bash
# All tests
uv run pytest

# With coverage
uv run pytest --cov=tensorlogic --cov-report=html

# Specific component
uv run pytest tests/test_core/
uv run pytest tests/test_backends/
uv run pytest tests/test_api/
uv run pytest tests/test_integrations/
```

### Type Checking

```bash
uv run mypy --strict src/tensorlogic/
# Current status: 0 errors
```

### Code Quality

```bash
uv run ruff check .   # Linting
uv run ruff format .  # Formatting
```
## Documentation

- Conceptual Guide: `docs/concepts/tensor-logic-mapping.md` (how logic becomes tensors)
- Examples: `examples/README.md` (practical usage examples)
- Backend API: `docs/backends/API.md` (comprehensive API reference)
- Research Goals: `docs/research/rag-goals.md` (RAG research roadmap)
- Original Paper: arXiv:2510.12269 (Domingos, 2025)
## License

MIT License