# torchmodal

**Differentiable Modal Logic for PyTorch**

A PyTorch library implementing Modal Logical Neural Networks (MLNNs), the first framework enabling differentiable reasoning over necessity and possibility by integrating neural networks with Kripke semantics from modal logic.
## Installation

```bash
pip install -e .
```
## Quick Start

```python
import torch
import torchmodal
from torchmodal import nn, KripkeModel

# Create a 3-world Kripke model with learnable accessibility
model = KripkeModel(
    num_worlds=3,
    accessibility=nn.LearnableAccessibility(3, init_bias=-2.0),
    tau=0.1,
)

# Add propositions
model.add_proposition("safe", learnable=True)
model.add_proposition("online", learnable=False)

# Evaluate modal operators
A = model.get_accessibility()
box_safe = model.necessity("safe", A)        # □(safe) — necessarily safe
dia_online = model.possibility("online", A)  # ♢(online) — possibly online

# Compute contradiction loss
loss = model.contradiction_loss()
```
## Architecture

```
torchmodal/
├── __init__.py          # Public API
├── functional.py        # Stateless functional operators (like torch.nn.functional)
├── nn/
│   ├── operators.py     # Softmin, Softmax, ConvPool modules
│   ├── connectives.py   # Negation, Conjunction, Disjunction, Implication
│   ├── modal.py         # Necessity (□), Possibility (♢) neurons
│   └── accessibility.py # Fixed, Learnable, Metric accessibility relations
├── kripke.py            # KripkeModel, Proposition
├── losses.py            # ContradictionLoss, ModalLoss, SparsityLoss, CrystallizationLoss
├── inference.py         # Upward-downward bound propagation
├── systems.py           # EpistemicOperator, DoxasticOperator, TemporalOperator, MultiAgentKripke
└── utils.py             # Temperature annealing, accessibility builders, decoding
```
## Core Concepts

### Differentiable Kripke Semantics

A Kripke model M = ⟨W, R, V⟩ is realized as differentiable tensors:

- **W (Worlds):** a finite set of possible worlds — agents, time steps, or contexts
- **R (Accessibility):** a relation determining which worlds can "see" each other
- **V (Valuation):** truth bounds [L, U] ⊆ [0, 1] for each proposition in each world
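The three components can be illustrated with a minimal pure-Python sketch, with plain lists standing in for tensors; the names and values here are illustrative, not torchmodal's API:

```python
# Sketch of a Kripke model M = <W, R, V> as plain "tensors".

num_worlds = 3  # W: three possible worlds

# R: accessibility matrix; R[i][j] is the degree to which world i sees world j
R = [
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0],
]

# V: per-proposition truth bounds [L, U] in each world;
# [0.0, 1.0] means "completely unknown"
V = {
    "safe":   [[0.8, 1.0], [0.2, 0.4], [0.0, 1.0]],
    "online": [[1.0, 1.0], [0.0, 0.0], [0.5, 0.9]],
}

# A bound pair is consistent exactly when L <= U
assert all(l <= u for bounds in V.values() for (l, u) in bounds)
```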
### Modal Operators

| Operator | Symbol | Semantics | Implementation |
|---|---|---|---|
| Necessity | □ | True in all accessible worlds | softmin over weighted implications |
| Possibility | ♢ | True in some accessible world | softmax over weighted conjunctions |
| Knowledge | K_a | Agent a knows ϕ | □ restricted to agent's row |
| Belief | B_a | Agent a believes ϕ | □ with non-reflexive access |
| Globally | G | ϕ at all future times | □ over temporal accessibility |
| Finally | F | ϕ at some future time | ♢ over temporal accessibility |
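The softened aggregations in the table can be illustrated without any framework: a temperature-weighted average approaches min (for □) or max (for ♢) as tau goes to 0. This is a generic sketch of the idea, not torchmodal's exact implementation, which aggregates weighted implications or conjunctions over the accessibility row:

```python
import math

def softmin(xs, tau=0.1):
    """Smooth, differentiable stand-in for min(): each value is weighted
    by exp(-x/tau), so low values dominate as tau shrinks."""
    ws = [math.exp(-x / tau) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def softmax_val(xs, tau=0.1):
    """The dual smooth stand-in for max(), used for possibility."""
    ws = [math.exp(x / tau) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

truths = [0.9, 0.95, 0.2]      # truth of phi in each accessible world
box_phi = softmin(truths)      # necessity: close to the minimum, 0.2
dia_phi = softmax_val(truths)  # possibility: close to the maximum, 0.95
```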
### Accessibility Relations

```python
# Fixed (deductive mode): enforce known rules
R = torchmodal.build_sudoku_accessibility(3)
access = nn.FixedAccessibility(R)

# Learnable direct matrix (small worlds)
access = nn.LearnableAccessibility(num_worlds=7, init_bias=-2.0)

# Metric learning (scales to 20,000+ worlds)
access = nn.MetricAccessibility(num_worlds=10000, embed_dim=64)
```
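The metric variant can be sketched in plain Python: each world gets an embedding, accessibility falls off with squared distance, and only the O(n·d) embeddings are learned rather than a full n×n matrix. The exact parameterization below (sigmoid of a bias minus squared distance) is an assumption for illustration, not torchmodal's actual formula:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def metric_accessibility(embeddings, bias=-2.0):
    """Build an accessibility matrix from world embeddings: nearby
    embeddings yield high access strength, distant ones near zero."""
    n = len(embeddings)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(embeddings[i], embeddings[j]))
            R[i][j] = sigmoid(bias - d2)
    return R

E = [[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]]  # toy 2-D world embeddings
R = metric_accessibility(E)
# R[0][1] is much larger than R[0][2]: nearby worlds see each other.
```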
### Loss Functions

```python
# Combined modal loss: L_total = L_task + β * L_contra
criterion = torchmodal.ModalLoss(beta=0.3)
loss = criterion(task_loss, model.all_bounds())

# Sparsity regularization on accessibility
sparse_loss = torchmodal.SparsityLoss(lambda_sparse=0.05)

# Crystallization for SAT mode (forces crisp 0/1 assignments)
crystal_loss = torchmodal.CrystallizationLoss()
```
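The contradiction term can be sketched framework-free: a bound pair [L, U] is contradictory exactly when L > U, and the penalty grows with the size of the crossing. This is one common formulation, offered as an assumption about what the library computes:

```python
def contradiction_loss(bounds, squared=True):
    """Penalize crossed truth bounds: zero when every L <= U,
    positive and growing with the gap L - U otherwise."""
    total = 0.0
    for (l, u) in bounds:
        gap = max(0.0, l - u)
        total += gap ** 2 if squared else gap
    return total

# Consistent bounds incur no loss; crossed bounds are penalized.
consistent = [(0.2, 0.8), (0.0, 1.0)]
crossed = [(0.9, 0.4), (0.0, 1.0)]
assert contradiction_loss(consistent) == 0.0
assert contradiction_loss(crossed) > 0.0
```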
## Examples

All examples are self-contained scripts in `examples/` and can be run directly:

```bash
python examples/sudoku.py
```
| Example | Modal Logic | Description |
|---|---|---|
| `sudoku.py` | □, CSP | 4x4 Sudoku via modal contradiction + crystallization |
| `temporal_epistemic.py` | K, G, F, K∘G | Learns epistemic accessibility to resolve contradictions |
| `epistemic_trust.py` | K_a | Trust learning from promise-keeping behavior |
| `doxastic_belief.py` | B_a | Belief calibration and hallucination detection |
| `temporal_causal.py` | □(cause → crash) | Root cause analysis in event traces |
| `deontic_boundary.py` | O, P | Normative boundary learning (spoofing detection) |
| `trust_erosion.py` | Temporal + Deontic | Retroactive lie detection collapses trust |
| `dialect_classification.py` | □, ♢ thresholds | OOD detection — 89% Neutral recall trained only on AmE/BrE |
| `axiom_ablation.py` | T, 4, B axioms | Effect of reflexivity/transitivity/symmetry on structure learning |
| `scalability_ring.py` | □, ♢ | Ring structure recovery with tau/top-k/learnable ablation |
### Epistemic Trust Learning (CaSiNo / Diplomacy)

```python
from torchmodal import MultiAgentKripke

# 7 agents (Diplomacy powers), 3 time steps
kripke = MultiAgentKripke(
    num_agents=7,
    num_steps=3,
    learnable_epistemic=True,
    init_bias=-2.0,
)

# Evaluate "agent knows claim is consistent over time"
K_G_claim = kripke.K_G(claim_bounds)

# Learn trust from contradiction minimization
A = kripke.get_epistemic_accessibility()
```
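In the crisp limit both operators reduce to minima, so the composition K∘G ("the agent knows the claim holds at every step") can be sketched in plain Python. The truth values and world structure below are hypothetical:

```python
# Truth of a claim per (world, time step); values are made up.
phi = {
    ("a", 0): 0.9, ("a", 1): 0.8, ("a", 2): 0.85,
    ("b", 0): 0.3, ("b", 1): 0.9, ("b", 2): 0.9,
}
accessible = ["a", "b"]  # worlds the agent considers possible
steps = [0, 1, 2]

# G: the claim holds at all future steps -> min over time per world
G = {w: min(phi[(w, t)] for t in steps) for w in accessible}

# K: the agent knows it -> min over all epistemically accessible worlds
K_G = min(G[w] for w in accessible)
# The weak world "b" at step 0 drags K_G down to 0.3.
```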
### Sudoku as Constraint Satisfaction

```python
import torchmodal
from torchmodal import KripkeModel, nn

# 81 worlds (cells), fixed Sudoku accessibility
R = torchmodal.build_sudoku_accessibility(3)
model = KripkeModel(
    num_worlds=81,
    accessibility=nn.FixedAccessibility(R),
)

# 9 propositions (digits)
for d in range(1, 10):
    model.add_proposition(f"d{d}", learnable=True)

# Train with contradiction loss + crystallization
contra_loss = torchmodal.ContradictionLoss(squared=True)
crystal_loss = torchmodal.CrystallizationLoss()
```
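The structure that `build_sudoku_accessibility` plausibly produces can be sketched framework-free (this is an assumption about its output, not the library's source): cell-worlds are mutually accessible exactly when they share a row, column, or box, i.e. when they may not hold the same digit:

```python
def sudoku_accessibility(n=3):
    """Constraint graph over n^4 cell-worlds: world i sees world j
    when the two cells share a row, column, or n x n box."""
    size = n * n                 # 9 for standard Sudoku
    cells = size * size          # 81 worlds
    R = [[0.0] * cells for _ in range(cells)]
    for i in range(cells):
        ri, ci = divmod(i, size)
        for j in range(cells):
            if i == j:
                continue
            rj, cj = divmod(j, size)
            same_row = ri == rj
            same_col = ci == cj
            same_box = (ri // n == rj // n) and (ci // n == cj // n)
            if same_row or same_col or same_box:
                R[i][j] = 1.0
    return R

R = sudoku_accessibility(3)
# Each cell has 8 row + 8 column + 4 remaining box peers = 20 conflicts.
assert all(sum(row) == 20.0 for row in R)
```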
### POS Tagging with Grammatical Guardrails

```python
from torchmodal import nn, functional as F

# 3-world structure: Real, Pessimistic, Exploratory
box = nn.Necessity(tau=0.1)
access = nn.LearnableAccessibility(3)

# Enforce axiom: □¬(DET_i ∧ VERB_{i+1})
A = access()
det_bounds = ...   # from proposer network
verb_bounds = ...
conj = F.conjunction(det_bounds, verb_bounds)
neg_conj = F.negation(conj)
box_constraint = box(neg_conj, A)  # must be high (true)
```
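The connectives above operate on [L, U] truth bounds. A Łukasiewicz-style sketch shows how the guardrail value is computed; this t-norm is a common real-valued choice and an assumption here, not necessarily torchmodal's:

```python
def conjunction(b1, b2):
    """Lukasiewicz conjunction on [L, U] bounds: t(x, y) = max(0, x + y - 1),
    applied to lower and upper bounds independently."""
    (l1, u1), (l2, u2) = b1, b2
    return (max(0.0, l1 + l2 - 1.0), max(0.0, u1 + u2 - 1.0))

def negation(b):
    """Negation flips each bound and swaps them: [L, U] -> [1-U, 1-L]."""
    l, u = b
    return (1.0 - u, 1.0 - l)

det = (0.9, 1.0)   # token i is very likely a determiner
verb = (0.8, 0.9)  # token i+1 is likely a verb

violated = conjunction(det, verb)  # DET_i AND VERB_{i+1}: bounds (0.7, 0.9)
guard = negation(violated)         # low bounds here flag an axiom violation
```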
### Formula Graph Inference

```python
import torch
from torchmodal import FormulaGraph, upward_downward

graph = FormulaGraph()
graph.add_atomic("p")
graph.add_atomic("q")
graph.add_conjunction("p_and_q", "p", "q")
graph.add_necessity("box_p_and_q", "p_and_q")

# Initialize [L, U] bounds per world (3 worlds)
bounds = {
    "p": torch.tensor([[0.8, 1.0], [0.3, 0.5], [0.9, 1.0]]),
    "q": torch.tensor([[0.7, 0.9], [0.6, 0.8], [0.4, 0.6]]),
    "p_and_q": torch.tensor([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),
    "box_p_and_q": torch.tensor([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),
}

# Run inference
A = torch.eye(3)  # reflexive accessibility
tightened = upward_downward(graph, bounds, A, tau=0.1)
```
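One upward step of the propagation can be imitated in plain Python with crisp operators in place of the softened ones (a sketch of the idea, not the library's algorithm): the conjunction node is tightened from its children, and with identity (reflexive-only) accessibility, □ at a world reduces to the formula's value at that world:

```python
# Same initial bounds as above, as plain (L, U) tuples per world.
p = [(0.8, 1.0), (0.3, 0.5), (0.9, 1.0)]
q = [(0.7, 0.9), (0.6, 0.8), (0.4, 0.6)]

# Upward: tighten p_and_q from its children using a Lukasiewicz
# conjunction, max(0, x + y - 1), on lower and upper bounds.
p_and_q = [(max(0.0, lp + lq - 1.0), max(0.0, up + uq - 1.0))
           for (lp, up), (lq, uq) in zip(p, q)]

# With identity accessibility each world only sees itself, so the crisp
# box of a formula at world w is just the formula's bounds at w.
box_p_and_q = list(p_and_q)
```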
## Two Learning Modes

| Mode | Fixed | Learned | Use Case |
|---|---|---|---|
| Deductive | Accessibility R | Propositions V | POS guardrails, Sudoku, OOD detection |
| Inductive | Propositions V | Accessibility A_θ | Trust learning, social structure discovery |
## Citation

If you use torchmodal in your research, please cite:

```bibtex
@misc{sulc2025modallogicalneuralnetworks,
  title={Modal Logical Neural Networks},
  author={Antonin Sulc},
  year={2025},
  eprint={2512.03491},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2512.03491},
}
```
## License

MIT