GIINT - General Intuitive Intelligence for Neural Transformers: Multi-fire cognitive architecture

Project description

LLM Intelligence

A systematic multi-fire cognitive response system that separates AI thinking from communication through response files and organized conversation tracking.

Overview

Based on Google's research suggesting that embedding geometry breaks down at scale, this system enables LLMs to use multiple "fires" for full intelligence expression through cognitive separation:

  • Conversation Channel: AI's thinking space (tool calls, analysis, exploration)
  • Response Channel: AI's deliberate communication (curated response files)

Key Features

  • Arbitrary Response Files: The LLM writes response files anywhere; the system organizes them automatically
  • Emergent Tracking: Free-form project hierarchy (project → feature → component → deliverable → subtask → task → workflow)
  • STARLOG Integration: Logs to the debug diary in a structured format
  • Cognitive Separation: Clean separation between thinking and communication
  • JSON Safety: All content is properly escaped through the json module (see the sketch after this list)
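
The escaping guarantee is easy to picture. A minimal sketch of the idea (illustrative, not GIINT's actual internals): routing content through json.dumps neutralizes quotes, newlines, and other characters that would corrupt a hand-built JSON record.

import json

# A one-liner containing characters that would break naive string
# interpolation into a JSON tracking file.
one_liner = 'OAuth done: "token" flow\nhandles refresh edge cases'

# json.dumps escapes the quotes and the newline, so the tracking
# record stays valid JSON no matter what the LLM wrote.
record = json.dumps({"one_liner": one_liner, "key_tags": ["oauth", "auth"]})
print(record)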

Installation

pip install giint-llm-intelligence

Usage

As an MCP Server

llm-intelligence-server

Direct API Usage

from llm_intelligence import respond

# LLM writes response file anywhere
with open("/tmp/my_response.md", "w") as f:
    f.write("I implemented OAuth authentication...")

# System organizes everything automatically  
result = respond(
    qa_id="abc123",
    response_file_path="/tmp/my_response.md",  # Any path
    one_liner="OAuth implementation complete",
    key_tags=["oauth", "auth"],
    involved_files=["auth.py", "oauth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware", 
    deliverable="auth_flow",
    subtask="jwt_validation",
    task="implement_verify",
    workflow_id="sprint_1"
)

Architecture

  • Core Module: llm_intelligence.core - All business logic
  • MCP Server: llm_intelligence.mcp_server - Thin wrapper for MCP integration
  • Organized Storage: qa_sets/{qa_id}/responses/response_XXX/response.md (see the sketch after this list)
  • JSON Tracking: Full conversation history with emergent metadata
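
As a rough illustration of that storage layout (the three-digit zero padding for XXX is an assumption, not confirmed by the package):

from pathlib import Path

def response_path(base: Path, qa_id: str, response_num: int) -> Path:
    # Mirrors the documented scheme qa_sets/{qa_id}/responses/response_XXX/response.md;
    # the zero-padded counter format is assumed for illustration.
    return base / "qa_sets" / qa_id / "responses" / f"response_{response_num:03d}" / "response.md"

print(response_path(Path("."), "abc123", 1))
# qa_sets/abc123/responses/response_001/response.md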

Cognitive Flow

  1. Fire 1: LLM writes curated response file
  2. Fire 2: LLM does actual work (Read, Edit, Bash)
  3. Fire 3: LLM reports tool usage (optional)
  4. Fire 4: LLM harvests everything with respond()

The system handles all organization, cleanup, and tracking automatically.
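
Put together, one multi-fire exchange might look like the sketch below; the paths, IDs, and file contents are illustrative, and only respond() and its parameters come from the API shown earlier.

from llm_intelligence import respond

# Fire 1: write the curated response file (the communication channel).
with open("/tmp/fix_summary.md", "w") as f:
    f.write("Fixed JWT validation and added regression tests.")

# Fires 2-3 happen in the conversation channel: the LLM does the
# actual work (Read, Edit, Bash) and optionally reports tool usage.

# Fire 4: harvest everything; the system files the response under qa_sets/.
result = respond(
    qa_id="abc124",
    response_file_path="/tmp/fix_summary.md",
    one_liner="JWT validation fixed",
    key_tags=["jwt", "bugfix"],
    involved_files=["auth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware",
    deliverable="auth_flow",
    subtask="jwt_validation",
    task="fix_validation",
    workflow_id="sprint_1",
)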

Download files

Download the file for your platform.

Source Distribution

giint_llm_intelligence-0.1.8.tar.gz (36.5 kB)

Built Distribution

giint_llm_intelligence-0.1.8-py3-none-any.whl (40.8 kB)

File details

Details for the file giint_llm_intelligence-0.1.8.tar.gz.

File metadata

  • Download URL: giint_llm_intelligence-0.1.8.tar.gz
  • Size: 36.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for giint_llm_intelligence-0.1.8.tar.gz
  • SHA256: c68cb8a45cc8c90fc5cc3d0322f269aa25f5bcdc7a97a8ca98eae9f3aad98623
  • MD5: 7fdd67febb3f46497a2aeb0585d04d08
  • BLAKE2b-256: 9a94175f0e73f35b73bba8d0b7b7b17bb6176dce84053a74836536a5e6b4f061

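To check a download against the published digests, the standard library is enough. A minimal sketch, assuming the archive sits in the current directory:

import hashlib

EXPECTED_SHA256 = "c68cb8a45cc8c90fc5cc3d0322f269aa25f5bcdc7a97a8ca98eae9f3aad98623"

with open("giint_llm_intelligence-0.1.8.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

# Reject the file if the digest does not match the hash table above.
assert digest == EXPECTED_SHA256, f"hash mismatch: {digest}"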

File details

Details for the file giint_llm_intelligence-0.1.8-py3-none-any.whl.

File metadata

  • Download URL: giint_llm_intelligence-0.1.8-py3-none-any.whl
  • Size: 40.8 kB
  • Tags: Python 3

File hashes

Hashes for giint_llm_intelligence-0.1.8-py3-none-any.whl
  • SHA256: 792dfd4d4762c54968c26303853dd5c86a4f1ba7ae166ba96a3191612d884800
  • MD5: 2798647e4eb57e3b60095255081a259e
  • BLAKE2b-256: 2a4d6f61a66ee75dd3a743895a88d2a6210968ff1150e4a7e7096cd25e0cb70a

