
NocturnusAI Python SDK

Python SDK for NocturnusAI — a logic-based inference engine and knowledge database for AI agents.

Install

pip install nocturnusai
pip install "nocturnusai[langchain]"   # optional LangChain tools

Start the server

# Recommended — installs CLI, configures LLM provider, starts server
curl -fsSL https://raw.githubusercontent.com/Auctalis/nocturnusai/main/install.sh | bash
nocturnusai setup

# Or bare Docker (logic engine works; context extraction requires LLM config — see below)
docker run -p 9300:9300 ghcr.io/auctalis/nocturnusai:latest

Verify it's running:

curl http://localhost:9300/health
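
If you script the startup, a small readiness poll against the documented /health endpoint can replace the manual curl. A minimal sketch using only the standard library — wait_for_server is not part of the SDK:

```python
import time
import urllib.error
import urllib.request

def wait_for_server(url="http://localhost:9300/health", timeout=30.0):
    """Poll the health endpoint until it answers 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry
        time.sleep(0.5)
    return False
```

Useful in CI or docker-compose wrappers where the container needs a moment before accepting requests.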

Quick start — facts, rules, and inference

This works immediately with a plain server — no LLM configuration needed.

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    # Assert facts
    client.assert_fact("parent", ["alice", "bob"])
    client.assert_fact("parent", ["bob", "charlie"])

    # Teach a rule: grandparent(?x, ?z) :- parent(?x, ?y), parent(?y, ?z)
    client.assert_rule(
        head={"predicate": "grandparent", "args": ["?x", "?z"]},
        body=[
            {"predicate": "parent", "args": ["?x", "?y"]},
            {"predicate": "parent", "args": ["?y", "?z"]},
        ],
    )

    # Infer — the engine finds alice is grandparent of charlie
    results = client.infer("grandparent", ["?who", "charlie"])
    for atom in results:
        print(f"{atom.predicate}({', '.join(atom.args)})")
        # grandparent(alice, charlie)

    # Query — exact pattern match without inference
    parents = client.query("parent", ["?x", "?y"])
    print(f"{len(parents)} parent facts")  # 2 parent facts

    # Retract a fact
    client.retract("parent", ["bob", "charlie"])
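
The grandparent rule above can be read as a join over the parent facts. An illustrative plain-Python sketch of the same inference (the engine itself does this via rule chaining, not a literal list join):

```python
# Illustrative only: grandparent(?x, ?z) :- parent(?x, ?y), parent(?y, ?z)
parents = [("alice", "bob"), ("bob", "charlie")]

# Join the parent relation with itself on the shared middle variable ?y
grandparents = [
    (x, z)
    for (x, y1) in parents
    for (y2, z) in parents
    if y1 == y2
]
print(grandparents)  # [('alice', 'charlie')]
```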

Context window — salience-ranked retrieval

Use the context methods to retrieve the most relevant facts for an agent's working memory. This works with any facts you've asserted — no LLM needed.

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    client.assert_fact("customer_tier", ["acme_corp", "enterprise"])
    client.assert_fact("contract_value", ["acme_corp", "2000000"])
    client.assert_fact("issue", ["acme_corp", "sla_credits_blocked"])

    # Simple salience-ranked window
    ctx = client.context(max_facts=10)
    for f in ctx.facts:
        print(f"{f.predicate}({', '.join(f.args)})")

    # Goal-driven optimization — backward chaining narrows to relevant facts
    ctx = client.context(
        max_facts=10,
        goals=[{"predicate": "customer_tier", "args": ["acme_corp", "?tier"]}],
        session_id="ticket-42",
    )
    print(f"Window has {ctx.window_size} facts out of {ctx.total_available} available")
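
Conceptually, a salience-ranked window just sorts facts by a relevance score and keeps the top slice. A toy model of that selection (the engine's actual scoring is internal and richer than a single number):

```python
def rank_window(scored_facts, max_facts):
    """Keep the max_facts highest-salience facts (toy model, not the engine)."""
    ranked = sorted(scored_facts, key=lambda sf: sf[0], reverse=True)
    return [fact for _, fact in ranked[:max_facts]]

# Hypothetical salience scores for the facts asserted above
facts = [
    (0.9, ("customer_tier", ("acme_corp", "enterprise"))),
    (0.4, ("contract_value", ("acme_corp", "2000000"))),
    (0.7, ("issue", ("acme_corp", "sla_credits_blocked"))),
]
window = rank_window(facts, 2)  # customer_tier and issue survive the cut
```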

Context reduction from raw text

Requires an LLM. The context extraction workflow uses an LLM to convert raw text into structured facts. You must configure an LLM provider, or these calls will time out or return empty results.

The simplest path: run nocturnusai setup — it auto-configures Ollama. Or set env vars manually:

  • Ollama (local): LLM_BASE_URL=http://host.docker.internal:11434/v1 + LLM_MODEL=granite3.3:8b
  • OpenAI: OPENAI_API_KEY=sk-...
  • Anthropic: ANTHROPIC_API_KEY=sk-ant-...

# Recommended: use the setup wizard (auto-detects Ollama)
nocturnusai setup

# Or manual Docker with Ollama:
docker run -p 9300:9300 \
  -e LLM_BASE_URL=http://host.docker.internal:11434/v1 \
  -e LLM_MODEL=granite3.3:8b \
  ghcr.io/auctalis/nocturnusai:latest

Then extract facts from raw text and build an optimized window:

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    # Extract facts from raw text, assert them, and get an optimized context window
    ctx = client.ingest_and_optimize(
        text="""
        user: Customer says they are enterprise and blocked on SLA credits.
        tool: CRM says account is Acme Corp with a 2M ARR contract.
        tool: Billing note says renewal is due next month.
        """,
        max_facts=12,
        session_id="ticket-42",
    )
    print(f"Extracted {ctx.total_facts_included} facts from raw text")
    for entry in ctx.entries:
        print(f"  {entry.predicate}({', '.join(entry.args)})")

    # On the next turn, get only what changed (incremental diff)
    diff = client.diff_context(session_id="ticket-42", max_facts=12)
    print(f"Diff: +{len(diff.added)} / -{len(diff.removed)}")

    # Clean up the session when the conversation ends
    client.clear_context_session("ticket-42")
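
The incremental diff is conceptually a set difference between the previous and current windows. An illustrative sketch of those semantics (not the server's algorithm):

```python
def diff_windows(previous, current):
    """Set-difference view of a context diff (toy model)."""
    prev, curr = set(previous), set(current)
    added = sorted(curr - prev)
    removed = sorted(prev - curr)
    return added, removed

# Between two turns, one new fact appears and nothing is dropped
added, removed = diff_windows(
    previous=[("issue", ("acme_corp", "sla_credits_blocked"))],
    current=[
        ("issue", ("acme_corp", "sla_credits_blocked")),
        ("renewal_due", ("acme_corp", "next_month")),
    ],
)
print(f"+{len(added)} / -{len(removed)}")  # +1 / -0
```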

Key context methods

Method                     Endpoint                        LLM required?
context()                  POST /memory/context            No
diff_context()             POST /context/diff              No
summarize_context()        POST /context/summary           No
clear_context_session()    POST /context/session/clear     No
ingest_and_optimize()      extract + optimize              Yes
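
Since only ingest_and_optimize() needs an LLM, a small dispatch helper can make that requirement explicit. A hypothetical sketch — build_window is not part of the SDK; it only calls the client methods documented above:

```python
def build_window(client, raw_text=None, session_id=None, max_facts=10):
    """Use LLM-backed extraction only when raw text is supplied;
    otherwise fall back to the LLM-free salience-ranked window."""
    if raw_text is not None:
        # Requires a configured LLM provider on the server
        return client.ingest_and_optimize(
            text=raw_text, max_facts=max_facts, session_id=session_id
        )
    # Works against a plain server, no LLM needed
    return client.context(max_facts=max_facts, session_id=session_id)
```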

Named databases

Use the database parameter to isolate data. Call ensure_database() to create the database and its default tenant on the server.

with SyncNocturnusAIClient("http://localhost:9300", database="my-project") as client:
    client.ensure_database()
    client.assert_fact("status", ["ready"])
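
Conceptually, each named database is an isolated fact namespace: facts asserted in one are invisible to queries in another. A toy model of those isolation semantics (not the server's implementation):

```python
class ToyStore:
    """Toy per-database fact store, only to illustrate isolation."""
    def __init__(self):
        self._dbs = {}

    def assert_fact(self, database, predicate, args):
        self._dbs.setdefault(database, []).append((predicate, tuple(args)))

    def query(self, database, predicate):
        return [f for f in self._dbs.get(database, []) if f[0] == predicate]

store = ToyStore()
store.assert_fact("my-project", "status", ["ready"])
store.assert_fact("other-project", "status", ["pending"])
print(store.query("my-project", "status"))  # [('status', ('ready',))]
```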

LangChain integration

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langchain import get_nocturnusai_tools

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)

MCP helper

from nocturnusai.mcp import NocturnusAIMCPClient

async with NocturnusAIMCPClient("http://localhost:9300") as mcp:
    await mcp.initialize()
    tools = await mcp.list_tools()
    result = await mcp.call_tool("context", {"maxFacts": 10, "minSalience": 0.1})
    print(result.text)
