
NocturnusAI Python SDK

Python SDK for NocturnusAI — a logic-based inference engine and knowledge database for AI agents.

Install

pip install nocturnusai
pip install "nocturnusai[langchain]"   # optional LangChain tools

Start the server

# Recommended — installs CLI, configures LLM provider, starts server
curl -fsSL https://raw.githubusercontent.com/Auctalis/nocturnusai/main/install.sh | bash
nocturnusai setup

# Or bare Docker (logic engine works; context extraction requires LLM config — see below)
docker run -p 9300:9300 ghcr.io/auctalis/nocturnusai:latest

Verify it's running:

curl http://localhost:9300/health
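The health endpoint can also be checked from Python. A minimal standard-library sketch (no SDK required; assumes the server from the steps above listens on localhost:9300):

```python
# Minimal health-check sketch using only the standard library.
import urllib.request

def health_url(base_url: str) -> str:
    """Build the /health endpoint URL from a base URL."""
    return base_url.rstrip("/") + "/health"

def check_health(base_url: str = "http://localhost:9300") -> bool:
    """Return True if GET /health answers with HTTP 200."""
    try:
        with urllib.request.urlopen(health_url(base_url), timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("server up" if check_health() else "server down")
```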

Quick start — facts, rules, and inference

This works immediately with a plain server — no LLM configuration needed.

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    # Assert facts
    client.assert_fact("parent", ["alice", "bob"])
    client.assert_fact("parent", ["bob", "charlie"])

    # Teach a rule: grandparent(?x, ?z) :- parent(?x, ?y), parent(?y, ?z)
    client.assert_rule(
        head={"predicate": "grandparent", "args": ["?x", "?z"]},
        body=[
            {"predicate": "parent", "args": ["?x", "?y"]},
            {"predicate": "parent", "args": ["?y", "?z"]},
        ],
    )

    # Infer — the engine finds alice is grandparent of charlie
    results = client.infer("grandparent", ["?who", "charlie"])
    for atom in results:
        print(f"{atom.predicate}({', '.join(atom.args)})")
        # grandparent(alice, charlie)

    # Query — exact pattern match without inference
    parents = client.query("parent", ["?x", "?y"])
    print(f"{len(parents)} parent facts")  # 2 parent facts

    # Retract a fact
    client.retract("parent", ["bob", "charlie"])
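The grandparent rule above is a standard Datalog-style join. This tiny pure-Python sketch (no server or SDK needed) illustrates the inference it encodes; it is an illustration only, not the engine's implementation:

```python
# Toy illustration of the grandparent rule: join parent(x, y) with
# parent(y, z) to derive grandparent(x, z). Pure Python, no server.
facts = {("parent", "alice", "bob"), ("parent", "bob", "charlie")}

def infer_grandparents(facts):
    """Derive grandparent facts from parent facts via a self-join."""
    parents = [(a, b) for (p, a, b) in facts if p == "parent"]
    return {("grandparent", x, z)
            for (x, y1) in parents
            for (y2, z) in parents
            if y1 == y2}

print(infer_grandparents(facts))  # {('grandparent', 'alice', 'charlie')}
```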

Context window — salience-ranked retrieval

Use the context methods to retrieve the most relevant facts for an agent's working memory. This works with any facts you've asserted — no LLM needed.

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    client.assert_fact("customer_tier", ["acme_corp", "enterprise"])
    client.assert_fact("contract_value", ["acme_corp", "2000000"])
    client.assert_fact("issue", ["acme_corp", "sla_credits_blocked"])

    # Simple salience-ranked window
    ctx = client.context(max_facts=10)
    for f in ctx.facts:
        print(f"{f.predicate}({', '.join(f.args)})")

    # Goal-driven optimization — backward chaining narrows to relevant facts
    ctx = client.context(
        max_facts=10,
        goals=[{"predicate": "customer_tier", "args": ["acme_corp", "?tier"]}],
        session_id="ticket-42",
    )
    print(f"Window has {ctx.window_size} facts out of {ctx.total_available} available")
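Conceptually, the simple window is a top-k selection by salience. The following pure-Python sketch shows that behavior; the scores here are made up for illustration, since the real server computes salience internally:

```python
# Illustrative only: how a salience-ranked window conceptually behaves.
# Salience scores are invented; the server computes real ones.
facts = [
    ("customer_tier", ("acme_corp", "enterprise"), 0.9),
    ("contract_value", ("acme_corp", "2000000"), 0.7),
    ("issue", ("acme_corp", "sla_credits_blocked"), 0.95),
]

def window(facts, max_facts):
    """Return the max_facts highest-salience facts, best first."""
    ranked = sorted(facts, key=lambda f: f[2], reverse=True)
    return ranked[:max_facts]

for pred, args, score in window(facts, 2):
    print(f"{pred}({', '.join(args)})  salience={score}")
```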

Context reduction from raw text

Requires an LLM. The context-extraction workflow uses an LLM to convert raw text into structured facts. Configure an LLM provider first, or these calls will time out or return empty results.

The simplest path is to run nocturnusai setup, which auto-configures Ollama. Alternatively, set environment variables manually:

  • Ollama (local): LLM_BASE_URL=http://host.docker.internal:11434/v1 + LLM_MODEL=granite3.3:8b
  • OpenAI: OPENAI_API_KEY=sk-...
  • Anthropic: ANTHROPIC_API_KEY=sk-ant-...

# Recommended: use the setup wizard (auto-detects Ollama)
nocturnusai setup

# Or manual Docker with Ollama:
docker run -p 9300:9300 \
  -e LLM_BASE_URL=http://host.docker.internal:11434/v1 \
  -e LLM_MODEL=granite3.3:8b \
  ghcr.io/auctalis/nocturnusai:latest
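If you prefer OpenAI over Ollama, the same container accepts the key via the environment variable listed above (the key value is a placeholder):

```shell
# Manual Docker with OpenAI (replace the placeholder key with your own)
docker run -p 9300:9300 \
  -e OPENAI_API_KEY=sk-... \
  ghcr.io/auctalis/nocturnusai:latest
```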

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    # Extract facts from raw text, assert them, and get an optimized context window
    ctx = client.ingest_and_optimize(
        text="""
        user: Customer says they are enterprise and blocked on SLA credits.
        tool: CRM says account is Acme Corp with a 2M ARR contract.
        tool: Billing note says renewal is due next month.
        """,
        max_facts=12,
        session_id="ticket-42",
    )
    print(f"Extracted {ctx.total_facts_included} facts from raw text")
    for entry in ctx.entries:
        print(f"  {entry.predicate}({', '.join(entry.args)})")

    # On the next turn, get only what changed (incremental diff)
    diff = client.diff_context(session_id="ticket-42", max_facts=12)
    print(f"Diff: +{len(diff.added)} / -{len(diff.removed)}")

    # Clean up the session when the conversation ends
    client.clear_context_session("ticket-42")
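The incremental diff above behaves like a set difference between consecutive windows. A toy sketch with made-up facts (no server needed, illustrative only):

```python
# Conceptual model of a context diff: compare two windows as sets.
def diff_windows(prev, curr):
    """Return (added, removed) facts between two context windows."""
    prev_set, curr_set = set(prev), set(curr)
    return sorted(curr_set - prev_set), sorted(prev_set - curr_set)

prev = [("issue", "acme_corp", "sla_credits_blocked")]
curr = [("issue", "acme_corp", "sla_credits_blocked"),
        ("renewal_due", "acme_corp", "next_month")]
added, removed = diff_windows(prev, curr)
print(f"Diff: +{len(added)} / -{len(removed)}")  # Diff: +1 / -0
```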

Key context methods

Method                   Endpoint                      LLM required?
context()                POST /memory/context          No
diff_context()           POST /context/diff            No
summarize_context()      POST /context/summary         No
clear_context_session()  POST /context/session/clear   No
ingest_and_optimize()    extract + optimize            Yes

Named databases

Use the database parameter to isolate data. Call ensure_database() to create the database and its default tenant on the server.

with SyncNocturnusAIClient("http://localhost:9300", database="my-project") as client:
    client.ensure_database()
    client.assert_fact("status", ["ready"])

LangChain integration

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langchain import get_nocturnusai_tools

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)

MCP helper

from nocturnusai.mcp import NocturnusAIMCPClient

async with NocturnusAIMCPClient("http://localhost:9300") as mcp:
    await mcp.initialize()
    tools = await mcp.list_tools()
    result = await mcp.call_tool("context", {"maxFacts": 10, "minSalience": 0.1})
    print(result.text)
