
A Semantic Type System for AI outputs — validate intent, not just shape.


Semantix

A Semantic Type System for AI Outputs

Define what your LLM output should mean, not just what shape it has.

Created by Akhona Eland, 2026


Why Semantix?

We have type systems for data structures (int, str, Pydantic models), but nothing for semantic intent. Semantix fills that gap:

from semantix import Intent, validate_intent

class ProfessionalDecline(Intent):
    """The text must politely decline an invitation without being rude or aggressive."""

@validate_intent
def decline_invite(event: str) -> ProfessionalDecline:
    return call_my_llm(event)   # returns a plain string

result = decline_invite("the company retreat")
# ✓ result is a ProfessionalDecline instance — validated by a judge model
# ✗ raises SemanticIntentError if the output is rude, off-topic, etc.

Think of it as Pydantic for meaning.
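As a mental model, the decorator reads the function's return annotation, runs the returned text past a judge, and raises if the score falls below the intent's threshold. Here is a deliberately simplified, self-contained sketch of that idea — `ToyIntent`, `toy_judge`, `validate`, and `SemanticError` are hypothetical stand-ins, not semantix's actual implementation:

```python
import typing

class SemanticError(Exception):
    """Stand-in for semantix.SemanticIntentError."""

class ToyIntent:
    """The text must thank the reader."""
    threshold = 0.8

def toy_judge(text: str, description: str) -> float:
    # Stub judge: full score if the text contains "thank", else zero.
    return 1.0 if "thank" in text.lower() else 0.0

def validate(fn):
    # Read the Intent class from the function's return annotation.
    intent = typing.get_type_hints(fn)["return"]
    def wrapper(*args, **kwargs):
        text = fn(*args, **kwargs)
        score = toy_judge(text, intent.__doc__)
        if score < intent.threshold:
            raise SemanticError(f"{intent.__name__} failed (score={score})")
        return text
    return wrapper

@validate
def reply(name: str) -> ToyIntent:
    return f"Thank you, {name}!"

print(reply("Alice"))  # Thank you, Alice!
```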


Installation

# Core (bring your own judge)
pip install semantix

# With OpenAI judge (GPT-4o-mini, accurate)
pip install "semantix[openai]"

# With embedding judge (sentence-transformers, fast, local)
pip install "semantix[embeddings]"

# Everything
pip install "semantix[all]"

Quick Start

1. Define an Intent

from semantix import Intent

class PositiveSentiment(Intent):
    """The text must express a clearly positive, optimistic, or encouraging sentiment."""
    threshold = 0.85  # optional — default is 0.8

2. Decorate your LLM call

from semantix import validate_intent

@validate_intent
def encourage(name: str) -> PositiveSentiment:
    return openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Encourage {name}"}],
    ).choices[0].message.content

3. Handle failures

from semantix import SemanticIntentError

try:
    result = encourage("Alice")
    print(result.text)
except SemanticIntentError as e:
    print(f"Failed: {e.intent_name} (score={e.score})")

Features

Swappable Judges

from semantix import EmbeddingJudge, LLMJudge, CachingJudge

# Fast — local cosine similarity (no API key needed)
@validate_intent(judge=EmbeddingJudge())
def fast_fn(x: str) -> MyIntent: ...

# Accurate — asks GPT-4o-mini Yes/No
@validate_intent(judge=LLMJudge(model="gpt-4o-mini"))
def accurate_fn(x: str) -> MyIntent: ...

# Cached — wraps any judge with LRU cache
@validate_intent(judge=CachingJudge(LLMJudge(), maxsize=256))
def cached_fn(x: str) -> MyIntent: ...
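The caching wrapper's idea fits in a few lines. Illustrative only — `SlowJudge` and `MemoJudge` are made-up names, and the real `CachingJudge` may key its cache differently:

```python
from functools import lru_cache

class SlowJudge:
    """Stub judge that counts how often it is actually invoked."""
    def __init__(self):
        self.calls = 0
    def evaluate(self, output, intent_description, threshold=0.8):
        self.calls += 1
        score = 1.0 if "thanks" in output.lower() else 0.0
        return (score >= threshold, score)

class MemoJudge:
    """Memoizes an inner judge's verdicts on (output, description, threshold)."""
    def __init__(self, inner, maxsize=256):
        self._cached = lru_cache(maxsize=maxsize)(inner.evaluate)
    def evaluate(self, output, intent_description, threshold=0.8):
        return self._cached(output, intent_description, threshold)

inner = SlowJudge()
judge = MemoJudge(inner)
judge.evaluate("thanks!", "must be grateful")
judge.evaluate("thanks!", "must be grateful")
print(inner.calls)  # 1 — the second call was served from the cache
```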

Retries

Re-invoke the LLM if the output fails validation:

@validate_intent(judge=EmbeddingJudge(), retries=3)
def decline(event: str) -> ProfessionalDecline:
    return call_llm(event)  # retried up to 3 extra times on failure
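Under the hood, a retry loop like this plausibly amounts to the following standalone sketch (with a stub judge and a stub LLM — not semantix's actual code):

```python
def with_retries(fn, judge, threshold=0.8, retries=3):
    """Call fn, judge the output, retry on failure: first try + `retries` extras."""
    last = 0.0
    for _ in range(retries + 1):
        text = fn()
        last = judge(text)
        if last >= threshold:
            return text
    raise RuntimeError(f"all {retries + 1} attempts failed (last score={last})")

calls = {"n": 0}
def flaky_llm():
    # Stub LLM that only produces an acceptable answer on its third call.
    calls["n"] += 1
    return "I must politely decline, thanks!" if calls["n"] >= 3 else "no."

ok = with_retries(flaky_llm, judge=lambda t: 1.0 if "politely" in t else 0.0)
print(calls["n"])  # 3
```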

Composite Intents

Combine multiple intents with & (all must pass) or | (any must pass):

from semantix import AllOf, AnyOf

# Operator syntax
PoliteAndPositive = ProfessionalDecline & PositiveSentiment

# Function syntax
FlexibleDecline = AnyOf(ProfessionalDecline, CasualDecline)

@validate_intent(judge=EmbeddingJudge())
def respond(msg: str) -> PoliteAndPositive: ...
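One plausible semantics for combining verdicts (the scoring rules for composites aren't documented here, so treat this as an assumption): `AllOf` passes only if every sub-intent passes and reports the weakest score, while `AnyOf` passes if any sub-intent passes and reports the strongest:

```python
def all_of(verdicts):
    # verdicts: list of (passed, score) pairs from the individual judges
    return (all(p for p, _ in verdicts), min(s for _, s in verdicts))

def any_of(verdicts):
    return (any(p for p, _ in verdicts), max(s for _, s in verdicts))

print(all_of([(True, 0.9), (True, 0.85)]))   # (True, 0.85)
print(any_of([(False, 0.3), (True, 0.9)]))   # (True, 0.9)
```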

Async Support

Works transparently with async def:

@validate_intent(judge=EmbeddingJudge())
async def encourage(name: str) -> PositiveSentiment:
    response = await async_openai_call(name)
    return response

Streaming

Validate once the full stream is assembled:

from semantix import StreamCollector

# Iterator wrapper
sc = StreamCollector(ProfessionalDecline, judge=my_judge)
for chunk in sc.wrap(llm_stream()):
    print(chunk, end="")
result = sc.result()  # validated Intent or raises

# Context manager (within an async function)
async with StreamCollector(ProfessionalDecline, judge=my_judge) as sc:
    async for chunk in llm_stream:
        sc.feed(chunk)
result = sc.result()
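The accumulate-then-validate pattern itself is simple; here is a self-contained approximation with a stub judge in place of the real ones (`Collector` is a toy, not the library's `StreamCollector`):

```python
class Collector:
    """Pass chunks through untouched, then judge the assembled text once."""
    def __init__(self, judge, threshold=0.8):
        self.judge = judge
        self.threshold = threshold
        self.parts = []
    def wrap(self, stream):
        for chunk in stream:
            self.parts.append(chunk)
            yield chunk
    def result(self):
        text = "".join(self.parts)
        score = self.judge(text)
        if score < self.threshold:
            raise ValueError(f"stream failed validation (score={score})")
        return text

c = Collector(judge=lambda t: 1.0 if "decline" in t else 0.0)
chunks = list(c.wrap(iter(["I must politely ", "decline."])))
print(c.result())  # I must politely decline.
```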

Observability

All validation events are emitted via Python's logging module under the semantix logger:

import logging
logging.getLogger("semantix").setLevel(logging.DEBUG)

Output:

INFO  semantix.validation | intent=ProfessionalDecline passed=True score=0.92 latency_ms=45.23 attempt=1
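Because events go through the standard logging module, you can route them anywhere a `logging.Handler` can go. For instance, capturing them in memory for tests or metrics — the exact child logger name and message format below are assumptions based on the sample line above:

```python
import logging

class ListHandler(logging.Handler):
    """Collects formatted log lines in memory."""
    def __init__(self):
        super().__init__()
        self.lines = []
    def emit(self, record):
        self.lines.append(self.format(record))

logger = logging.getLogger("semantix.validation")
logger.setLevel(logging.INFO)
handler = ListHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s | %(message)s"))
logger.addHandler(handler)

# Simulate the kind of event the decorator would emit:
logger.info("intent=ProfessionalDecline passed=True score=0.92 attempt=1")
print(handler.lines[0])
```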

Custom Judges

Implement the Judge interface to plug in any backend:

from semantix import Judge, Verdict

class MyCustomJudge(Judge):
    def evaluate(self, output: str, intent_description: str, threshold: float = 0.8) -> Verdict:
        score = my_scoring_function(output, intent_description)
        return Verdict(passed=score >= threshold, score=score)
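To make the shape concrete, here is a self-contained toy judge. `Verdict` is re-declared locally for illustration — in real code you would import it from semantix, and its exact fields may differ:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    passed: bool
    score: float
    reason: str = ""

class KeywordJudge:
    """Toy scoring: fraction of output words that appear in the intent description."""
    def evaluate(self, output: str, intent_description: str,
                 threshold: float = 0.8) -> Verdict:
        vocab = {w.strip(".,!").lower() for w in intent_description.split()}
        words = [w.strip(".,!").lower() for w in output.split()]
        score = sum(w in vocab for w in words) / max(len(words), 1)
        return Verdict(passed=score >= threshold, score=score)

v = KeywordJudge().evaluate("politely decline",
                            "The text must politely decline the invitation.")
print(v.passed, v.score)  # True 1.0
```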

API Reference

| Symbol | Description |
| --- | --- |
| `Intent` | Base class — subclass with a docstring to define a semantic type |
| `SemanticIntentError` | Raised when validation fails (`.output`, `.score`, `.intent_name`) |
| `@validate_intent` | Decorator — validates return values against their `Intent` type hint |
| `Judge` | Abstract base — implement `.evaluate()` for custom backends |
| `Verdict` | Dataclass — `.passed`, `.score`, `.reason` |
| `LLMJudge` | OpenAI-based judge (accurate, needs API key) |
| `EmbeddingJudge` | Sentence-transformers judge (fast, local) |
| `CachingJudge` | LRU cache wrapper for any judge |
| `AllOf(*intents)` | Composite — all intents must be satisfied |
| `AnyOf(*intents)` | Composite — at least one intent must be satisfied |
| `StreamCollector` | Validates streamed LLM output once fully assembled |

Project Structure

semantix/
├── __init__.py          # Public API
├── intent.py            # Intent base class + metaclass
├── exceptions.py        # SemanticIntentError
├── decorator.py         # @validate_intent (retries, logging)
├── composite.py         # AllOf / AnyOf combinators
├── observability.py     # Structured logging
├── streaming.py         # StreamCollector
├── judges/
│   ├── __init__.py      # Judge ABC + Verdict
│   ├── embedding.py     # EmbeddingJudge
│   ├── llm.py           # LLMJudge
│   └── caching.py       # CachingJudge
└── tests/               # Full test suite (34 tests)

Development

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
python -m pytest tests/ -v

License

MIT


Semantix was created and is maintained by Akhona Eland (2026).



Download files

Download the file for your platform.

Source Distribution

semantix_ai-0.1.1.tar.gz (20.6 kB)

Built Distribution

semantix_ai-0.1.1-py3-none-any.whl (18.6 kB)

File details

Details for the file semantix_ai-0.1.1.tar.gz.

File metadata

  • Download URL: semantix_ai-0.1.1.tar.gz
  • Upload date:
  • Size: 20.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for semantix_ai-0.1.1.tar.gz
Algorithm Hash digest
SHA256 d2d7250a31278104e94f89f6192604ee94a42d95c83ffe65eee5e2343fc91e79
MD5 4a667715676ff27a9b14756883f1042c
BLAKE2b-256 1c754b9aa398629ab9b74b476869497c836f0f4a0171f86db7b7360f0b9ee6a7


File details

Details for the file semantix_ai-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: semantix_ai-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 18.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

Hashes for semantix_ai-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 27a2c77c1537ed345d5403ce1b86143087d6831412e6a9d75c2e19f9d6b8826e
MD5 db6a415cfa45a095a595393a5e742325
BLAKE2b-256 7877c25edec3ac2e86e1ce3054e5e34a89c30a560c3f4ec27e9b06a9730eec81

