
KATERYNA

Epistemic Uncertainty Layer for LLMs

Stop your LLMs from hallucinating. Let them say "I don't know."



The Problem

User: "What is the capital of Freedonia?"

GPT-4 (binary): "The capital of Freedonia is Fredville."
                 ^ Confident. Wrong. Freedonia is fictional.

Kateryna Layer:  "I don't know. I found no grounded information
                  about Freedonia in my knowledge base."
                 ^ Abstained. Correct response.

LLMs hallucinate in part because nothing in their output distinguishes "I know" from "I'm guessing": they are trained to always produce an answer, and there is no native way to represent "I don't know."

Kateryna adds that capability.


The Solution: Ternary Logic

Based on Nikolai Brusentsov's 1958 Setun computer - the first (and only) balanced ternary computer ever mass-produced.

State          Value  Meaning                                          Action
CONFIDENT       +1    Strong RAG evidence, response matches grounding  Return answer
UNCERTAIN        0    Weak/no evidence, model hedging                  Abstain
OVERCONFIDENT   -1    Confident language WITHOUT evidence              DANGER FLAG

The Critical Insight

The -1 state is the breakthrough.

Traditional confidence scores miss this:

  • Binary: "Is the model confident?" Yes/No
  • Probability: "How confident?" 0-100%

Neither asks: "Is the confidence justified?"

Kateryna asks: Does the confidence level match the grounding evidence?

Confident response + Strong RAG grounding = +1 (Trust it)
Uncertain response + Weak RAG grounding   =  0 (Appropriate uncertainty)
Confident response + Weak RAG grounding   = -1 (HALLUCINATION RISK)

The -1 state catches confident bullshit.
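The three combinations above can be sketched as a small standalone function. This is an illustration of the idea only, not kateryna's actual implementation: the hedge-word list and the 0.3 grounding threshold are assumptions.

```python
from enum import Enum

class Ternary(Enum):
    CONFIDENT = 1
    UNCERTAIN = 0
    OVERCONFIDENT = -1

# Phrases suggesting the model is hedging (illustrative, not kateryna's list).
HEDGES = ("i don't know", "not sure", "possibly", "might", "unclear")

def classify(text: str, rag_confidence: float) -> Ternary:
    """Map (response wording, grounding strength) to a ternary state."""
    hedged = any(h in text.lower() for h in HEDGES)
    if hedged:
        return Ternary.UNCERTAIN       # uncertain wording: appropriate abstention
    if rag_confidence < 0.3:           # threshold is an assumption
        return Ternary.OVERCONFIDENT   # confident wording WITHOUT evidence
    return Ternary.CONFIDENT           # confident wording AND grounding

classify("It is definitely Fredville.", 0.05)  # hallucination risk
```

The key design point is that the wording check and the evidence check are independent inputs; the danger zone is confident wording paired with weak evidence.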


Installation

# Core package (works with any LLM)
pip install kateryna

# With OpenAI support
pip install kateryna[openai]

# With Anthropic support
pip install kateryna[anthropic]

# With local Ollama support
pip install kateryna[ollama]

# All adapters
pip install kateryna[all]

Quick Start

Standalone Detector (Any LLM)

from kateryna import EpistemicDetector, TernaryState

detector = EpistemicDetector()

# Analyze any LLM output
state = detector.analyze(
    text="The capital of Freedonia is definitely Fredville.",
    question="What is the capital of Freedonia?",
    retrieval_confidence=0.05,  # Low RAG score
    chunks_found=0
)

if state.is_danger_zone:
    print(f"DANGER: {state.reason}")
    # "DANGER: Confident response without grounding (RAG: 5%)"

# The -1 state catches hallucinations that LOOK confident
print(state.state)  # TernaryState.OVERCONFIDENT

With OpenAI

from openai import OpenAI
from kateryna.adapters.openai import OpenAISyncEpistemicAdapter

client = OpenAISyncEpistemicAdapter(OpenAI(), model="gpt-4")

response = client.generate_with_rag(
    prompt="How does this function work?",
    rag_chunks=[
        {"content": "def add(a, b): return a + b", "distance": 0.1},
        {"content": "# Adds two numbers together", "distance": 0.15},
    ]
)

if response.epistemic_state.grounded:
    print(f"Confident answer: {response.content}")
elif response.epistemic_state.is_danger_zone:
    print("WARNING: Potential hallucination detected")

With Local Ollama

from kateryna.adapters.ollama import OllamaSyncEpistemicAdapter

client = OllamaSyncEpistemicAdapter(model="llama3.2")

response = client.generate_with_rag(
    prompt="Explain this COBOL code",
    rag_chunks=my_retrieved_chunks
)

Pre-Question Filtering (Save Tokens!)

detector = EpistemicDetector()

# Don't even call the LLM for unanswerable questions
should_abstain, reason = detector.should_abstain_on_question(
    "What will Bitcoin be worth in 2030?"
)

if should_abstain:
    print("Abstaining - question asks for prediction")
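The filtering idea itself is simple to sketch without the library. The patterns below are illustrative stand-ins; kateryna's actual heuristics are presumably richer than a two-regex list.

```python
import re

# Illustrative patterns for questions no amount of grounding can answer
# (future predictions). These are assumptions, not kateryna's real rules.
UNANSWERABLE = [
    r"\bwill\b.*\bin \d{4}\b",  # future predictions: "what will X be in 2030"
    r"\bpredict\b",
]

def should_abstain_on_question(question: str) -> tuple[bool, str]:
    """Return (abstain?, reason) before spending any tokens on the LLM."""
    for pattern in UNANSWERABLE:
        if re.search(pattern, question.lower()):
            return True, "question asks for a prediction"
    return False, ""

abstain, reason = should_abstain_on_question(
    "What will Bitcoin be worth in 2030?"
)
```

Because the check runs before the model call, an abstention here costs zero tokens.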

RAG Integration

Kateryna works with any vector database. Just pass chunks with a score field:

# Pinecone / ChromaDB (distance)
chunks = [{"content": "...", "distance": 0.1}]

# Weaviate (score)
chunks = [{"text": "...", "score": 0.85}]

# Custom (relevance or similarity)
chunks = [{"content": "...", "relevance": 0.9}]
chunks = [{"content": "...", "similarity": 0.88}]
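Since some stores report a distance (lower is better) and others a score (higher is better), a normalization step along these lines could unify them into one confidence value. This is a hedged sketch using the field names from the examples above; kateryna's actual handling may differ.

```python
def chunk_confidence(chunk: dict) -> float:
    """Normalize heterogeneous retrieval metadata to a 0-1 confidence.

    Distances are inverted (low distance = high confidence); score-like
    fields are taken as-is. Assumes values already lie in [0, 1].
    """
    if "distance" in chunk:
        return max(0.0, 1.0 - chunk["distance"])  # invert: low distance = good
    for key in ("score", "relevance", "similarity"):
        if key in chunk:
            return float(chunk[key])
    return 0.0  # no usable signal: treat as ungrounded

conf = chunk_confidence({"content": "...", "distance": 0.1})  # high confidence
```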

Ternary State Mapping

RAG Confidence    LLM Response  State  Grounded  Action
High (>0.7)       Confident      +1    Yes       Return
High (>0.7)       Uncertain       0    Yes       Abstain
Medium (0.3-0.7)  Confident      +1    Yes       Return
Medium (0.3-0.7)  Uncertain       0    No        Abstain
Low (<0.3)        Confident      -1    No        DANGER
Low (<0.3)        Uncertain       0    No        Abstain
None              Any             0    No        Abstain

Key insight: A confident response without evidence is MORE dangerous than an uncertain one.
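The table reads as a two-input lookup. A minimal sketch, with thresholds copied from the table and hedging detection reduced to a boolean flag (kateryna's real detector does more than this):

```python
from typing import Optional, Tuple

def map_state(rag_confidence: Optional[float],
              response_is_confident: bool) -> Tuple[int, bool]:
    """Return (state, grounded) following the mapping table above."""
    if rag_confidence is None:
        return 0, False                       # no retrieval at all: abstain
    if rag_confidence > 0.7:                  # high grounding
        return (1, True) if response_is_confident else (0, True)
    if rag_confidence >= 0.3:                 # medium grounding
        return (1, True) if response_is_confident else (0, False)
    # low grounding: confident wording here is the danger zone
    return (-1, False) if response_is_confident else (0, False)

map_state(0.1, True)  # the -1 danger row: confident but ungrounded
```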


Named After

Kateryna Yushchenko (1919-2001) - Ukrainian computer scientist who invented indirect addressing (pointers) in 1955, and whose work was largely erased from Western computing history.

Her work on indirect addressing made modern programming possible. This work on epistemic addressing makes AI trustworthy.


Research Foundation

  • Setun Computer (1958) - Nikolai Brusentsov, Moscow State University
  • Three-Valued Logic - Jan Łukasiewicz (1920s)
  • Ternary Inference - DOI: 10.5281/zenodo.17875182

License

MIT


"AI that knows when it doesn't know."
