GIINT - General Intuitive Intelligence for Neural Transformers: Multi-fire cognitive architecture

Project description

LLM Intelligence

A systematic multi-fire cognitive response system that separates AI thinking from communication through response files and organized conversation tracking.

Overview

Motivated by Google research suggesting that embedding geometry does not hold up at scale, this system enables LLMs to use multiple "fires" (sequential response turns) for full intelligence expression through cognitive separation:

  • Conversation Channel: AI's thinking space (tool calls, analysis, exploration)
  • Response Channel: AI's deliberate communication (curated response files)

Key Features

  • Arbitrary Response Files: LLM writes response files anywhere, system organizes automatically
  • Emergent Tracking: Free-form project hierarchy (project → feature → component → deliverable → subtask → task → workflow)
  • STARLOG Integration: Logs to debug diary with structured format
  • Cognitive Separation: Clean separation between thinking and communication
  • JSON Safety: All content properly escaped through json module
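
The "JSON Safety" point above can be illustrated with a minimal sketch: routing all content through the `json` module means quotes, newlines, and tabs in a response survive the round trip into the tracking record. The `record` structure here is illustrative, not the package's actual tracking schema.

```python
import json

# Hypothetical response content containing characters that would break
# naive string interpolation into a JSON tracking record.
content = 'Fixed the "auth" bug:\n\tnote the quotes, newline, and tab'

# json.dumps performs all quoting and escaping, so the content
# round-trips exactly.
record = json.dumps({"one_liner": content, "key_tags": ["oauth", "auth"]})
restored = json.loads(record)

assert restored["one_liner"] == content
```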

Installation

pip install giint-llm-intelligence

Usage

As MCP Server

llm-intelligence-server

Direct API Usage

from llm_intelligence import respond

# LLM writes response file anywhere
with open("/tmp/my_response.md", "w") as f:
    f.write("I implemented OAuth authentication...")

# System organizes everything automatically  
result = respond(
    qa_id="abc123",
    response_file_path="/tmp/my_response.md",  # Any path
    one_liner="OAuth implementation complete",
    key_tags=["oauth", "auth"],
    involved_files=["auth.py", "oauth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware", 
    deliverable="auth_flow",
    subtask="jwt_validation",
    task="implement_verify",
    workflow_id="sprint_1"
)

Architecture

  • Core Module: llm_intelligence.core - All business logic
  • MCP Server: llm_intelligence.mcp_server - Thin wrapper for MCP integration
  • Organized Storage: qa_sets/{qa_id}/responses/response_XXX/response.md
  • JSON Tracking: Full conversation history with emergent metadata
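
The organized-storage layout above can be sketched as follows. This is an illustrative reconstruction of the `qa_sets/{qa_id}/responses/response_XXX/response.md` convention, not the package's actual implementation; the helper name and the zero-padded three-digit index are assumptions.

```python
from pathlib import Path

def next_response_path(base: Path, qa_id: str) -> Path:
    """Return the file path for the next response under the
    qa_sets/{qa_id}/responses/response_XXX/response.md layout (sketch)."""
    responses = base / "qa_sets" / qa_id / "responses"
    responses.mkdir(parents=True, exist_ok=True)
    # Count existing response_XXX directories to pick the next index.
    existing = [p for p in responses.iterdir() if p.name.startswith("response_")]
    target = responses / f"response_{len(existing) + 1:03d}"
    target.mkdir()
    return target / "response.md"
```

For example, the first call for `qa_id="abc123"` would yield `.../qa_sets/abc123/responses/response_001/response.md`, the second `response_002`, and so on.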

Cognitive Flow

  1. Fire 1: LLM writes curated response file
  2. Fire 2: LLM does actual work (Read, Edit, Bash)
  3. Fire 3: LLM reports tool usage (optional)
  4. Fire 4: LLM harvests everything with respond()

System handles all organization, cleanup, and tracking automatically.
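The four fires above can be sketched end to end. The `respond()` stub here is a stand-in so the sketch runs on its own; in real use it is imported from `llm_intelligence`, which also moves the file into `qa_sets/` and updates the JSON tracking.

```python
import tempfile
from pathlib import Path

def respond(**metadata):
    """Stand-in for llm_intelligence.respond (sketch only)."""
    return {"status": "harvested", **metadata}

# Fire 1: write the curated response file (the communication channel).
response_file = Path(tempfile.gettempdir()) / "fire1_response.md"
response_file.write_text("Implemented JWT validation for the OAuth flow.")

# Fire 2: do the actual work (Read, Edit, Bash tool calls happen here).
# Fire 3 (optional): report which tools were used.

# Fire 4: harvest everything with respond().
result = respond(
    qa_id="abc123",
    response_file_path=str(response_file),
    one_liner="JWT validation implemented",
    key_tags=["oauth", "jwt"],
)
```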

Download files

Download the file for your platform.

Source Distribution

giint_llm_intelligence-0.1.6.tar.gz (15.2 kB, source)

Built Distribution


giint_llm_intelligence-0.1.6-py3-none-any.whl (17.8 kB, Python 3)

File details

Details for the file giint_llm_intelligence-0.1.6.tar.gz.

File metadata

  • Download URL: giint_llm_intelligence-0.1.6.tar.gz
  • Size: 15.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for giint_llm_intelligence-0.1.6.tar.gz
  • SHA256: ca4d46b91d56e552021f933f57aec857e9ccc1266c4b8a5a19cbea3e219f02a3
  • MD5: 00cd7ca7388894de7bd4429e2c8d41f9
  • BLAKE2b-256: 9ba6ff430fa738ab251ebfb0035aa5ac34b8895d6a96ae5ca4996d8c6d65dfe2

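The published digests above can be checked against a downloaded file with the standard library. This is a generic verification sketch; the `sha256_of` helper is not part of this package.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest for the sdist, taken from the table above:
expected = "ca4d46b91d56e552021f933f57aec857e9ccc1266c4b8a5a19cbea3e219f02a3"
# After downloading the archive:
# assert sha256_of("giint_llm_intelligence-0.1.6.tar.gz") == expected
```

`pip` can enforce the same check automatically via hash-checking mode (`--require-hashes` in a requirements file).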

File details

Details for the file giint_llm_intelligence-0.1.6-py3-none-any.whl.

File metadata

File hashes

Hashes for giint_llm_intelligence-0.1.6-py3-none-any.whl
  • SHA256: 8e4994b608ea1512691a96b77be25854df66f0f49a495fe9ae7e0dda1b2a3599
  • MD5: 948df0a4ed852e5d475e0de922631fa9
  • BLAKE2b-256: 8dfddd8ed6d4cede9947bc28cc10749e82ef9cd14d8a75588acb34568e4d5e05

