NocturnusAI Python SDK

Python SDK for NocturnusAI — a logic-based inference engine and knowledge database for AI agents.

Install

pip install nocturnusai
pip install "nocturnusai[langchain]"   # optional LangChain tools

Start the server

# Recommended — installs CLI, configures LLM provider, starts server
curl -fsSL https://raw.githubusercontent.com/Auctalis/nocturnusai/main/install.sh | bash
nocturnusai setup

# Or bare Docker (logic engine works; context extraction requires LLM config — see below)
docker run -p 9300:9300 ghcr.io/auctalis/nocturnusai:latest

Verify it's running:

curl http://localhost:9300/health

Quick start — facts, rules, and inference

This works immediately with a plain server — no LLM configuration needed.

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    # Assert facts
    client.assert_fact("parent", ["alice", "bob"])
    client.assert_fact("parent", ["bob", "charlie"])

    # Teach a rule: grandparent(?x, ?z) :- parent(?x, ?y), parent(?y, ?z)
    client.assert_rule(
        head={"predicate": "grandparent", "args": ["?x", "?z"]},
        body=[
            {"predicate": "parent", "args": ["?x", "?y"]},
            {"predicate": "parent", "args": ["?y", "?z"]},
        ],
    )

    # Infer — the engine finds alice is grandparent of charlie
    results = client.infer("grandparent", ["?who", "charlie"])
    for atom in results:
        print(f"{atom.predicate}({', '.join(atom.args)})")
        # grandparent(alice, charlie)

    # Query — exact pattern match without inference
    parents = client.query("parent", ["?x", "?y"])
    print(f"{len(parents)} parent facts")  # 2 parent facts

    # Retract a fact
    client.retract("parent", ["bob", "charlie"])
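
For intuition, the derivation the engine performs for this rule can be sketched as a toy join in plain Python (an illustration only, not the SDK's actual inference algorithm):

```python
# Toy illustration of grandparent(?x, ?z) :- parent(?x, ?y), parent(?y, ?z):
# join the parent facts on the shared variable ?y.
parents = {("alice", "bob"), ("bob", "charlie")}

grandparents = {
    (x, z)
    for (x, y1) in parents
    for (y2, z) in parents
    if y1 == y2  # the shared variable ?y must bind to the same constant
}

print(grandparents)  # {('alice', 'charlie')}
```

The engine does this matching for you, recursively, across all rules in the database.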

Context window — salience-ranked retrieval

Use the context methods to retrieve the most relevant facts for an agent's working memory. This works with any facts you've asserted — no LLM needed.

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    client.assert_fact("customer_tier", ["acme_corp", "enterprise"])
    client.assert_fact("contract_value", ["acme_corp", "2000000"])
    client.assert_fact("issue", ["acme_corp", "sla_credits_blocked"])

    # Simple salience-ranked window
    ctx = client.context(max_facts=10)
    for f in ctx.facts:
        print(f"{f.predicate}({', '.join(f.args)})")

    # Goal-driven optimization — backward chaining narrows to relevant facts
    ctx = client.context(
        max_facts=10,
        goals=[{"predicate": "customer_tier", "args": ["acme_corp", "?tier"]}],
        session_id="ticket-42",
    )
    print(f"Window has {ctx.window_size} facts out of {ctx.total_available} available")
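
Conceptually, the window is built by ranking facts by a salience score and keeping the top max_facts. A toy sketch with made-up scores (the server computes salience itself):

```python
# Toy sketch of salience-ranked windowing: sort by score, keep the top-k.
facts = [
    ("customer_tier", ("acme_corp", "enterprise"), 0.9),
    ("contract_value", ("acme_corp", "2000000"), 0.7),
    ("issue", ("acme_corp", "sla_credits_blocked"), 0.95),
]
max_facts = 2

window = sorted(facts, key=lambda f: f[2], reverse=True)[:max_facts]
for pred, args, score in window:
    print(f"{pred}({', '.join(args)})  salience={score}")
# issue and customer_tier make the window; contract_value is dropped
```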

Context reduction from raw text

Requires an LLM. The context-extraction workflow uses an LLM to convert raw text into structured facts; without a configured LLM provider, these calls will time out or return empty results.

The simplest path is to run nocturnusai setup, which auto-configures Ollama. Or set the environment variables manually:

  • Ollama (local): LLM_BASE_URL=http://host.docker.internal:11434/v1 + LLM_MODEL=granite3.3:8b
  • OpenAI: OPENAI_API_KEY=sk-...
  • Anthropic: ANTHROPIC_API_KEY=sk-ant-...

# Recommended: use the setup wizard (auto-detects Ollama)
nocturnusai setup

# Or manual Docker with Ollama:
docker run -p 9300:9300 \
  -e LLM_BASE_URL=http://host.docker.internal:11434/v1 \
  -e LLM_MODEL=granite3.3:8b \
  ghcr.io/auctalis/nocturnusai:latest

from nocturnusai import SyncNocturnusAIClient

with SyncNocturnusAIClient("http://localhost:9300") as client:
    # Extract facts from raw text, assert them, and get an optimized context window
    ctx = client.ingest_and_optimize(
        text="""
        user: Customer says they are enterprise and blocked on SLA credits.
        tool: CRM says account is Acme Corp with a 2M ARR contract.
        tool: Billing note says renewal is due next month.
        """,
        max_facts=12,
        session_id="ticket-42",
    )
    print(f"Extracted {ctx.total_facts_included} facts from raw text")
    for entry in ctx.entries:
        print(f"  {entry.predicate}({', '.join(entry.args)})")

    # On the next turn, get only what changed (incremental diff)
    diff = client.diff_context(session_id="ticket-42", max_facts=12)
    print(f"Diff: +{len(diff.added)} / -{len(diff.removed)}")

    # Clean up the session when the conversation ends
    client.clear_context_session("ticket-42")
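
The diff semantics can be pictured with plain sets: added facts are those in the current window but not the previous one, and vice versa for removed (an illustration, not the server's implementation):

```python
# Toy sketch: a context diff is the set difference between two windows.
previous = {("customer_tier", ("acme_corp", "enterprise")),
            ("issue", ("acme_corp", "sla_credits_blocked"))}
current = {("customer_tier", ("acme_corp", "enterprise")),
           ("renewal_due", ("acme_corp", "next_month"))}

added = current - previous
removed = previous - current
print(f"Diff: +{len(added)} / -{len(removed)}")  # Diff: +1 / -1
```

Sending only the diff keeps per-turn token usage flat instead of re-sending the whole window.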

Key context methods

Method                     Endpoint                       LLM required?
context()                  POST /memory/context           No
diff_context()             POST /context/diff             No
summarize_context()        POST /context/summary          No
clear_context_session()    POST /context/session/clear    No
ingest_and_optimize()      extract + optimize             Yes

Named databases

Use the database parameter to isolate data. Call ensure_database() to create the database and its default tenant on the server.

with SyncNocturnusAIClient("http://localhost:9300", database="my-project") as client:
    client.ensure_database()
    client.assert_fact("status", ["ready"])
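
The effect of the database parameter can be pictured as keying every fact by its database name, so two projects never see each other's data (a toy model of the isolation, not the server's storage):

```python
# Toy model: each database name holds its own set of facts.
store: dict[str, set] = {}

def assert_fact(database, predicate, args):
    store.setdefault(database, set()).add((predicate, tuple(args)))

assert_fact("my-project", "status", ("ready",))
assert_fact("other-project", "status", ("pending",))

print(store["my-project"])  # {('status', ('ready',))}
```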

LangChain integration

from nocturnusai import SyncNocturnusAIClient
from nocturnusai.langchain import get_nocturnusai_tools

client = SyncNocturnusAIClient("http://localhost:9300")
tools = get_nocturnusai_tools(client)

MCP helper

from nocturnusai.mcp import NocturnusAIMCPClient

async with NocturnusAIMCPClient("http://localhost:9300") as mcp:
    await mcp.initialize()
    tools = await mcp.list_tools()
    result = await mcp.call_tool("context", {"maxFacts": 10, "minSalience": 0.1})
    print(result.text)
