
Semantic Tension Language (STL) Parser - A lightweight parser for structured knowledge representation


STL Parser (v1.7.0)

A comprehensive Python toolkit for Semantic Tension Language (STL) — parse, build, validate, query, diff, stream, and repair structured knowledge.


Overview

stl-parser provides feature parity with the JSON ecosystem for STL documents:

Capability | Functions | JSON Equivalent
Parse & Serialize | parse(), to_json(), to_stl(), to_rdf() | json.loads() / json.dumps()
Build Programmatically | stl(), stl_doc(), StatementBuilder | Manual dict construction
Schema Validation | load_schema(), validate_against_schema() | JSON Schema
LLM Output Repair | clean(), repair(), validate_llm_output() | N/A (STL-unique)
Query & Filter | find(), filter_statements(), select(), stl_pointer() | jq / JSONPath
Diff & Patch | stl_diff(), stl_patch(), diff_to_text() | json-diff / json-patch
Streaming I/O | STLEmitter, STLReader, stream_parse() | NDJSON streaming
Graph Analysis | STLGraph, STLAnalyzer | N/A (STL-unique)
Confidence Decay | effective_confidence(), decay_report() | N/A (STL-unique)
CLI | 10 commands (validate, convert, query, diff, ...) | jq CLI

Installation

From Source

git clone https://github.com/scos-lab/semantic-tension-language.git
cd semantic-tension-language/parser
pip install -e .

Development Installation

pip install -e ".[dev]"

Quick Start

Parse STL

from stl_parser import parse

result = parse('[Einstein] -> [Theory_Relativity] ::mod(confidence=0.98, rule="empirical")')
stmt = result.statements[0]
print(stmt.source.name)          # "Einstein"
print(stmt.modifiers.confidence)  # 0.98
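
A parsed ParseResult can be serialized straight back out. A minimal sketch, assuming to_json() and to_stl() (listed in the overview table) accept a ParseResult and return a JSON string and STL text respectively:

from stl_parser import parse, to_json, to_stl

result = parse('[Einstein] -> [Theory_Relativity] ::mod(confidence=0.98, rule="empirical")')
print(to_json(result))  # JSON representation of the parsed statements
print(to_stl(result))   # round-trips back to STL source text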

Build Programmatically

from stl_parser import stl

stmt = stl("[Rain]", "[Flooding]").mod(confidence=0.85, rule="causal", strength=0.8).build()
print(str(stmt))  # [Rain] -> [Flooding] ::mod(confidence=0.85, rule="causal", strength=0.8)

Build a Multi-Statement Document

from stl_parser import stl, stl_doc

doc = stl_doc(
    stl("[Rain]", "[Flooding]").mod(rule="causal", confidence=0.85),
    stl("[Flooding]", "[Evacuation]").mod(rule="causal", confidence=0.90),
)
print(len(doc.statements))  # 2

Validate Against a Schema

from stl_parser import parse, load_schema, validate_against_schema

schema = load_schema("docs/schemas/causal.stl.schema")
result = parse('[Rain] -> [Flooding] ::mod(rule="causal", confidence=0.85, strength=0.8)')
validation = validate_against_schema(result, schema)
print(f"Valid: {validation.is_valid}")

Clean LLM Output

from stl_parser import validate_llm_output

result = validate_llm_output("A => B mod(confience=1.5)")
print(f"Valid: {result.is_valid}")
print(f"Repairs: {len(result.repairs)}")
# Repairs fix: missing brackets, wrong arrow, missing ::, typo "confience", value 1.5 clamped to 1.0
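
The overview table also lists clean() and repair(). A minimal sketch, assuming clean() takes the raw model output and returns repaired STL text (the return type is an assumption, not documented here):

from stl_parser import clean

repaired = clean("A => B mod(confience=1.5)")
print(repaired)  # illustrative output: [A] -> [B] ::mod(confidence=1.0)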

Query Statements

from stl_parser import parse, find, find_all, filter_statements, select

result = parse("""
[Rain] -> [Flooding] ::mod(rule="causal", confidence=0.85)
[Sun] -> [Evaporation] ::mod(rule="causal", confidence=0.90)
[Wind] -> [Erosion] ::mod(rule="causal", confidence=0.70)
""")

# Find first match ("source" resolves to source.name)
stmt = find(result, source="Rain")

# Filter returns a new ParseResult
high_conf = filter_statements(result, confidence__gte=0.85)
print(len(high_conf.statements))  # 2

# Extract a single field from all statements
names = select(result, field="source")   # ["Rain", "Sun", "Wind"]
confs = select(result, field="confidence")  # [0.85, 0.9, 0.7]

Diff & Patch

from stl_parser import parse, stl_diff, stl_patch

before = parse('[A] -> [B] ::mod(confidence=0.8)')
after  = parse('[A] -> [B] ::mod(confidence=0.9)')

diff = stl_diff(before, after)
print(diff.summary)  # {added: 0, removed: 0, modified: 1}

patched = stl_patch(before, diff)
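
diff_to_text() is listed alongside stl_diff() and stl_patch() for human-readable output. A sketch, assuming it accepts the diff object returned by stl_diff():

from stl_parser import diff_to_text

print(diff_to_text(diff))  # illustrative: "modified: [A] -> [B] (confidence 0.8 -> 0.9)"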

Streaming I/O

from stl_parser import STLEmitter, stream_parse

# Write statements to a file
with STLEmitter("output.stl") as emitter:
    emitter.emit("[A]", "[B]", confidence=0.9, rule="causal")
    emitter.emit("[C]", "[D]", confidence=0.8, rule="logical")

# Stream-parse a file
for stmt in stream_parse("output.stl"):
    print(f"{stmt.source.name} -> {stmt.target.name}")

Confidence Decay

from stl_parser import parse, effective_confidence

# Statement with a timestamp
result = parse('[Fact_X] -> [Conclusion_Y] ::mod(confidence=0.95, timestamp="2020-01-01T00:00:00Z")')
stmt = result.statements[0]

# Compute decayed confidence (exponential half-life)
decayed = effective_confidence(stmt, half_life_days=365)
print(f"Original: {stmt.modifiers.confidence}, Decayed: {decayed:.4f}")

CLI Reference

The stl command provides 10 subcommands:

# Validate an STL file
stl validate input.stl

# Parse and output as JSON
stl parse input.stl --json

# Convert to JSON, RDF/Turtle, JSON-LD, N-Triples
stl convert input.stl --to json --output output.json
stl convert input.stl --to rdf --format turtle --output output.ttl

# Graph analysis (nodes, edges, centrality, cycles)
stl analyze input.stl

# Build a statement from CLI
stl build "[Rain]" "[Flooding]" --mod "rule=causal,confidence=0.85"

# Clean and repair LLM output
stl clean messy_output.txt --show-repairs
stl clean messy_output.txt --schema domain.stl.schema

# Validate against a domain schema
stl schema-validate input.stl --schema medical.stl.schema

# Query with filters and field selection
stl query input.stl --where "rule=causal,confidence__gte=0.8" --select "source.name,target.name"

# Semantic diff between two files
stl diff before.stl after.stl
stl diff before.stl after.stl --format json

# Apply a diff patch
stl patch base.stl changes.json --output patched.stl

Architecture

stl_parser/                    # 19 modules
│
├── Core Layer
│   ├── grammar.py             # Lark EBNF grammar definition
│   ├── parser.py              # parse(), parse_file() — core parser with multi-line merge
│   ├── models.py              # Pydantic v2 models (Anchor, Modifier, Statement, ParseResult)
│   ├── validator.py           # Structural and semantic validation
│   ├── errors.py              # Error hierarchy (E001-E969, W001-W099)
│   └── _utils.py              # Shared utilities (sanitize_anchor_name, etc.)
│
├── Serialization Layer
│   ├── serializer.py          # to_json(), to_dict(), to_stl(), to_rdf(), from_json()
│   ├── builder.py             # stl(), stl_doc(), StatementBuilder — programmatic construction
│   └── emitter.py             # STLEmitter — thread-safe streaming writer
│
├── Analysis Layer
│   ├── graph.py               # STLGraph — NetworkX-based graph construction
│   ├── analyzer.py            # STLAnalyzer — centrality, cycles, statistics
│   └── decay.py               # Confidence decay with configurable half-life
│
├── Query Layer
│   ├── query.py               # find(), filter_statements(), select(), stl_pointer()
│   ├── diff.py                # stl_diff(), stl_patch(), diff_to_text()
│   └── reader.py              # stream_parse(), STLReader — streaming input
│
├── Validation Layer
│   ├── schema.py              # load_schema(), validate_against_schema(), .stl.schema format
│   └── llm.py                 # clean(), repair(), validate_llm_output() — 3-stage pipeline
│
└── Interface Layer
    └── cli.py                 # Typer CLI with 10 commands
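
The analysis layer is the one area not covered by the quick-start examples. A minimal sketch, assuming STLGraph and STLAnalyzer are importable from the package root, that STLGraph builds from a ParseResult, and that STLAnalyzer wraps a graph (all constructor signatures here are assumptions):

from stl_parser import parse, STLGraph, STLAnalyzer

result = parse("""
[Rain] -> [Flooding] ::mod(rule="causal", confidence=0.85)
[Flooding] -> [Evacuation] ::mod(rule="causal", confidence=0.90)
""")
graph = STLGraph(result)        # assumed: builds the NetworkX-backed graph
analyzer = STLAnalyzer(graph)   # assumed: exposes centrality, cycles, statistics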

Schema Ecosystem

Six domain-specific schemas are included in docs/schemas/:

Schema | Domain | Key Constraints
tcm.stl.schema | Traditional Chinese Medicine | Unicode anchors, domain required
scientific.stl.schema | Scientific research | source required, confidence 0.3-1.0
causal.stl.schema | Causal inference | strength required, rule must be "causal"
historical.stl.schema | Historical knowledge | time + source required, multi-script
medical.stl.schema | Medical/clinical | Prefixed anchors (Symptom_, Drug_, ...)
legal.stl.schema | Legal reasoning | Prefixed anchors (Law_, Regulation_, ...)

Create custom schemas using docs/schemas/_template.stl.schema.
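
A custom schema created from the template plugs into the same calls shown in the quick start. A sketch using a hypothetical my_domain.stl.schema, assuming the validation result exposes a list of issues alongside is_valid (the errors attribute name is an assumption):

from stl_parser import parse, load_schema, validate_against_schema

schema = load_schema("docs/schemas/my_domain.stl.schema")  # hypothetical custom schema
result = parse('[Drug_Aspirin] -> [Symptom_Headache] ::mod(rule="causal", confidence=0.7)')
validation = validate_against_schema(result, schema)
if not validation.is_valid:
    print(validation.errors)  # assumed attribute; check the actual ValidationResult API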

Development

Setup

git clone https://github.com/scos-lab/semantic-tension-language.git
cd semantic-tension-language/parser
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e ".[dev]"

Run Tests

pytest tests/                          # Run all 531 tests
pytest tests/ --cov=stl_parser         # With coverage report
pytest tests/test_builder.py           # Single module
pytest tests/ -k "test_query"          # By name pattern

Code Quality

black stl_parser/          # Format
ruff check stl_parser/     # Lint
mypy stl_parser/           # Type check

Documentation

Schema references and the schema template live in the repository's docs/ directory (see docs/schemas/).

Contributing

We welcome contributions. See the Issues page to get started.

License

Apache License 2.0 — see LICENSE.

Citation

@software{stl_parser_2025,
  author = {SCOS-Lab},
  title = {STL Parser: A Comprehensive Toolkit for Semantic Tension Language},
  year = {2025},
  version = {1.7.0},
  url = {https://github.com/scos-lab/semantic-tension-language}
}
