
torchmodal

Differentiable Modal Logic for PyTorch

A PyTorch library implementing Modal Logical Neural Networks (MLNNs) — the first framework enabling differentiable reasoning over necessity and possibility by integrating neural networks with Kripke semantics from modal logic.

Installation

pip install torchmodal

# or, for development, from a source checkout:
pip install -e .

Quick Start

import torch
import torchmodal
from torchmodal import nn, KripkeModel

# Create a 3-world Kripke model with learnable accessibility
model = KripkeModel(
    num_worlds=3,
    accessibility=nn.LearnableAccessibility(3, init_bias=-2.0),
    tau=0.1,
)

# Add propositions
model.add_proposition("safe", learnable=True)
model.add_proposition("online", learnable=False)

# Evaluate modal operators
A = model.get_accessibility()
box_safe = model.necessity("safe", A)       # □(safe) — necessarily safe
dia_online = model.possibility("online", A)  # ♢(online) — possibly online

# Compute contradiction loss
loss = model.contradiction_loss()

Architecture

torchmodal/
├── __init__.py          # Public API
├── functional.py        # Stateless functional operators (like torch.nn.functional)
├── nn/
│   ├── operators.py     # Softmin, Softmax, ConvPool modules
│   ├── connectives.py   # Negation, Conjunction, Disjunction, Implication
│   ├── modal.py         # Necessity (□), Possibility (♢) neurons
│   └── accessibility.py # Fixed, Learnable, Metric accessibility relations
├── kripke.py            # KripkeModel, Proposition
├── losses.py            # ContradictionLoss, ModalLoss, SparsityLoss, CrystallizationLoss
├── inference.py         # Upward-downward bound propagation
├── systems.py           # EpistemicOperator, DoxasticOperator, TemporalOperator, MultiAgentKripke
└── utils.py             # Temperature annealing, accessibility builders, decoding

Core Concepts

Differentiable Kripke Semantics

A Kripke model M = ⟨W, R, V⟩ is realized as differentiable tensors:

  • W (Worlds): A finite set of possible worlds — agents, time steps, or contexts
  • R (Accessibility): A relation determining which worlds can "see" each other
  • V (Valuation): Truth bounds [L, U] ⊆ [0, 1] for each proposition in each world
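As a minimal plain-Python sketch (illustrative only; torchmodal stores these as differentiable tensors, and the names here are ad hoc), the three components can be pictured as:

```python
# Sketch of a Kripke model M = <W, R, V> as plain arrays.

num_worlds = 3                      # W: worlds indexed 0..2

# R: soft accessibility in [0, 1]; R[i][j] says how strongly world i "sees" j
R = [
    [1.0, 0.8, 0.0],
    [0.0, 1.0, 0.5],
    [0.2, 0.0, 1.0],
]

# V: per-proposition truth bounds [L, U] ⊆ [0, 1] in each world
V = {
    "safe":   [[0.9, 1.0], [0.4, 0.6], [0.0, 0.2]],
    "online": [[0.7, 1.0], [0.0, 1.0], [0.5, 0.8]],
}

# A valuation is contradiction-free when L <= U everywhere
consistent = all(L <= U for bounds in V.values() for L, U in bounds)
```

Making R and the [L, U] entries learnable parameters is what turns the model differentiable: gradients of a loss over the bounds flow back into both the valuation and the accessibility structure.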

Modal Operators

Operator      Symbol  Semantics                      Implementation
Necessity     □       True in all accessible worlds  softmin over weighted implications
Possibility   ♢       True in some accessible world  softmax over weighted conjunctions
Knowledge     K_a     Agent a knows ϕ                □ restricted to agent's row
Belief        B_a     Agent a believes ϕ             □ with non-reflexive access
Globally      G       ϕ at all future times          □ over temporal accessibility
Finally       F       ϕ at some future time          ♢ over temporal accessibility
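A plain-Python sketch of the two core operators, assuming a temperature-weighted softmin/softmax and a Kleene-Dienes implication max(1 − r, p) (torchmodal's exact t-norm and implication may differ):

```python
import math

def softmin(xs, tau=0.1):
    # Temperature-weighted soft minimum: approaches min(xs) as tau -> 0
    ws = [math.exp(-x / tau) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def softmax_val(xs, tau=0.1):
    # Soft maximum, symmetric to softmin
    ws = [math.exp(x / tau) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def necessity(phi, R_row, tau=0.1):
    # box(phi) at world w: softmin over implications R(w,v) -> phi(v),
    # here realized as max(1 - r, p)
    return softmin([max(1.0 - r, p) for r, p in zip(R_row, phi)], tau)

def possibility(phi, R_row, tau=0.1):
    # diamond(phi) at world w: softmax over conjunctions R(w,v) AND phi(v),
    # here realized as min(r, p)
    return softmax_val([min(r, p) for r, p in zip(R_row, phi)], tau)

phi = [0.9, 0.2, 0.8]        # truth of phi in worlds 0..2
R_row = [1.0, 1.0, 0.0]      # world 0 sees worlds 0 and 1, not world 2

box_val = necessity(phi, R_row)     # pulled toward 0.2, the worst seen world
dia_val = possibility(phi, R_row)   # pulled toward 0.9, the best seen world
```

Because softmin/softmax are smooth, gradients flow through both the proposition values and the accessibility weights, which is what lets the model learn either side.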

Accessibility Relations

# Fixed (deductive mode): enforce known rules
R = torchmodal.build_sudoku_accessibility(3)
access = nn.FixedAccessibility(R)

# Learnable direct matrix (small worlds)
access = nn.LearnableAccessibility(num_worlds=7, init_bias=-2.0)

# Metric learning (scales to 20,000+ worlds)
access = nn.MetricAccessibility(num_worlds=10000, embed_dim=64)
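The metric variant avoids materializing n × n free parameters: each world gets a small embedding and accessibility is derived from embedding distance, so only n × embed_dim parameters are learned. A plain-Python sketch with a Gaussian kernel (the kernel torchmodal actually uses is an assumption here):

```python
import math

def metric_accessibility(embeddings, scale=1.0):
    # A[i][j] = exp(-scale * ||e_i - e_j||^2): nearby worlds see each other,
    # distant worlds barely do. Self-accessibility is 1 by construction.
    n = len(embeddings)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(embeddings[i], embeddings[j]))
            A[i][j] = math.exp(-scale * d2)
    return A

emb = [[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]]   # worlds 0 and 1 are close
A = metric_accessibility(emb)
```

Symmetry of the kernel gives a symmetric relation for free; a learned asymmetric map would need separate "source" and "target" embeddings.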

Loss Functions

# Combined modal loss: L_total = L_task + β * L_contra
criterion = torchmodal.ModalLoss(beta=0.3)
loss = criterion(task_loss, model.all_bounds())

# Sparsity regularization on accessibility
sparse_loss = torchmodal.SparsityLoss(lambda_sparse=0.05)

# Crystallization for SAT mode (forces crisp 0/1 assignments)
crystal_loss = torchmodal.CrystallizationLoss()
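The intent behind each regularizer can be sketched in plain Python over [L, U] bound pairs (the tensorized torchmodal versions will differ in detail; these forms are assumptions):

```python
def contradiction_loss(bounds, squared=True):
    # Penalize bounds where the lower bound exceeds the upper bound
    total = 0.0
    for L, U in bounds:
        gap = max(0.0, L - U)
        total += gap ** 2 if squared else gap
    return total

def sparsity_loss(A, lambda_sparse=0.05):
    # L1 pressure pushing accessibility entries toward 0
    return lambda_sparse * sum(a for row in A for a in row)

def crystallization_loss(bounds):
    # Push truth values toward crisp 0/1: distance to the nearest endpoint
    return sum(min(v, 1.0 - v) for L, U in bounds for v in (L, U))

bounds = [(0.9, 0.4), (0.2, 0.8)]        # first pair is contradictory
c = contradiction_loss(bounds)           # (0.9 - 0.4)^2 = 0.25
```

Contradiction loss drives the structure (or the valuation) toward logical consistency, while crystallization resolves the residual fuzziness into hard assignments, which is why the two are combined in the Sudoku/SAT setting.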

Examples

All examples are self-contained scripts in examples/ and can be run directly:

python examples/sudoku.py
Example                    Modal Logic         Description
sudoku.py                  □, CSP              4x4 Sudoku via modal contradiction + crystallization
temporal_epistemic.py      K, G, F, K∘G        Learns epistemic accessibility to resolve contradictions
epistemic_trust.py         K_a                 Trust learning from promise-keeping behavior
doxastic_belief.py         B_a                 Belief calibration and hallucination detection
temporal_causal.py         □(cause → crash)    Root-cause analysis in event traces
deontic_boundary.py        O, P                Normative boundary learning (spoofing detection)
trust_erosion.py           Temporal + Deontic  Retroactive lie detection collapses trust
dialect_classification.py  □, ♢ thresholds     OOD detection: 89% Neutral recall trained only on AmE/BrE
axiom_ablation.py          T, 4, B axioms      Effect of reflexivity/transitivity/symmetry on structure learning
scalability_ring.py        □, ♢                Ring-structure recovery with tau/top-k/learnable ablation

Epistemic Trust Learning (CaSiNo / Diplomacy)

from torchmodal import MultiAgentKripke

# 7 agents (Diplomacy powers), 3 time steps
kripke = MultiAgentKripke(
    num_agents=7,
    num_steps=3,
    learnable_epistemic=True,
    init_bias=-2.0,
)

# Evaluate "agent knows claim is consistent over time"
K_G_claim = kripke.K_G(claim_bounds)

# Learn trust from contradiction minimization
A = kripke.get_epistemic_accessibility()
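The composed operator K∘G ("the agent knows the claim holds at all future times") is just one soft box applied after another. A plain-Python sketch, reusing the softmin-based box with a Kleene-Dienes implication (torchmodal's MultiAgentKripke internals may differ; the accessibility matrices below are made up for illustration):

```python
import math

def softmin(xs, tau=0.1):
    ws = [math.exp(-x / tau) for x in xs]
    return sum(w * x for w, x in zip(ws, xs)) / sum(ws)

def box(phi, A, tau=0.1):
    # Soft box: at each world, softmin of the implications A(w,v) -> phi(v)
    return [softmin([max(1.0 - a, p) for a, p in zip(row, phi)], tau)
            for row in A]

claim = [0.9, 0.8, 0.3]                          # claim truth at steps 0..2
G_acc = [[1.0, 1.0, 1.0],                        # temporal: each step sees
         [0.0, 1.0, 1.0],                        # itself and all later steps
         [0.0, 0.0, 1.0]]
K_acc = [[1.0, 1.0, 0.0],                        # epistemic: the agent cannot
         [1.0, 1.0, 0.0],                        # distinguish steps 0 and 1
         [0.0, 0.0, 1.0]]

K_G_claim = box(box(claim, G_acc), K_acc)        # K(G(claim)) per world
```

Since the claim degrades at the last step (0.3), G(claim) is low everywhere it is visible, and K inherits that: the agent cannot know the claim is consistent over time.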

Sudoku as Constraint Satisfaction

import torchmodal
from torchmodal import KripkeModel, nn

# 81 worlds (cells), fixed Sudoku accessibility
R = torchmodal.build_sudoku_accessibility(3)
model = KripkeModel(
    num_worlds=81,
    accessibility=nn.FixedAccessibility(R),
)

# 9 propositions (digits)
for d in range(1, 10):
    model.add_proposition(f"d{d}", learnable=True)

# Train with contradiction loss + crystallization
contra_loss = torchmodal.ContradictionLoss(squared=True)
crystal_loss = torchmodal.CrystallizationLoss()

POS Tagging with Grammatical Guardrails

from torchmodal import nn, functional as F

# 3-world structure: Real, Pessimistic, Exploratory
box = nn.Necessity(tau=0.1)
access = nn.LearnableAccessibility(3)

# Enforce axiom: □¬(DET_i ∧ VERB_{i+1})
A = access()
det_bounds = ...   # from proposer network
verb_bounds = ...
conj = F.conjunction(det_bounds, verb_bounds)
neg_conj = F.negation(conj)
box_constraint = box(neg_conj, A)  # must be high (true)
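The bound-propagating connectives used above can be sketched with Łukasiewicz semantics on [L, U] intervals, as in Logical Neural Networks (torchmodal's actual choice of t-norm is an assumption here):

```python
# A truth value is an interval (L, U) with 0 <= L <= U <= 1.

def negation(b):
    L, U = b
    return (1.0 - U, 1.0 - L)                    # ¬[L, U] = [1 - U, 1 - L]

def conjunction(b1, b2):
    (L1, U1), (L2, U2) = b1, b2
    t = lambda a, b: max(0.0, a + b - 1.0)       # Łukasiewicz t-norm
    return (t(L1, L2), t(U1, U2))

det = (0.9, 1.0)                          # token i is almost surely DET
verb = (0.8, 1.0)                         # token i+1 looks like a VERB
violation = conjunction(det, verb)        # DET_i AND VERB_{i+1}
guard = negation(violation)               # axiom wants this near [1, 1]
```

Training then pushes the guard's lower bound up, which forces the proposer network to lower its confidence in at least one of the two conflicting tags.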

Formula Graph Inference

from torchmodal import FormulaGraph, upward_downward

graph = FormulaGraph()
graph.add_atomic("p")
graph.add_atomic("q")
graph.add_conjunction("p_and_q", "p", "q")
graph.add_necessity("box_p_and_q", "p_and_q")

# Initialize bounds
bounds = {
    "p": torch.tensor([[0.8, 1.0], [0.3, 0.5], [0.9, 1.0]]),
    "q": torch.tensor([[0.7, 0.9], [0.6, 0.8], [0.4, 0.6]]),
    "p_and_q": torch.tensor([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),
    "box_p_and_q": torch.tensor([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]),
}

# Run inference
A = torch.eye(3)  # reflexive accessibility
tightened = upward_downward(graph, bounds, A, tau=0.1)
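One upward-downward step for a single conjunction node can be sketched in plain Python with Łukasiewicz bounds (torchmodal's inference.py generalizes this over the whole formula DAG; the exact rules here are assumptions):

```python
def tighten(b, candidate):
    # Intersect stored bounds with newly derived ones
    return (max(b[0], candidate[0]), min(b[1], candidate[1]))

p = (0.8, 1.0)
q = (0.7, 0.9)
p_and_q = (0.0, 1.0)                      # initially unknown

# Upward pass: bounds of p AND q are [max(0, L1 + L2 - 1), min(U1, U2)]
up = (max(0.0, p[0] + q[0] - 1.0), min(p[1], q[1]))
p_and_q = tighten(p_and_q, up)            # -> about (0.5, 0.9)

# Downward pass: from L_conj <= p + q - 1, derive p >= L_conj + 1 - U_q
down_p = (p_and_q[0] + 1.0 - q[1], 1.0)
p = tighten(p, down_p)                    # unchanged here: 0.8 already >= 0.6
```

Iterating these passes to a fixed point propagates evidence both from atoms to formulas and back down, tightening every node's bounds without ever widening them.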

Two Learning Modes

Mode       Fixed            Learned            Use Case
Deductive  Accessibility R  Propositions V     POS guardrails, Sudoku, OOD detection
Inductive  Propositions V   Accessibility A_θ  Trust learning, social structure discovery

Citation

If you use torchmodal in your research, please cite:

@misc{sulc2025modallogicalneuralnetworks,
  title={Modal Logical Neural Networks},
  author={Antonin Sulc},
  year={2025},
  eprint={2512.03491},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2512.03491},
}

License

MIT

Authors

Antonin Sulc
