
GIINT - General Intuitive Intelligence for Neural Transformers: Multi-fire cognitive architecture

Project description

LLM Intelligence

A systematic multi-fire cognitive response system that separates AI thinking from communication through response files and organized conversation tracking.

Overview

Based on Google research suggesting that embedding geometry breaks down at scale, this system lets LLMs use multiple "fires" to express their full intelligence through cognitive separation:

  • Conversation Channel: AI's thinking space (tool calls, analysis, exploration)
  • Response Channel: AI's deliberate communication (curated response files)

Key Features

  • Arbitrary Response Files: the LLM writes response files anywhere; the system organizes them automatically
  • Emergent Tracking: free-form project hierarchy (project → feature → component → deliverable → subtask → task → workflow)
  • STARLOG Integration: logs to a debug diary in a structured format
  • Cognitive Separation: clean separation between thinking and communication
  • JSON Safety: all content is properly escaped via the json module (see the sketch after this list)
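As a rough sketch of what the emergent hierarchy and JSON-escaped metadata could look like (the field names mirror the respond() parameters shown under Direct API Usage below; the grouping and on-disk schema here are assumptions, not the documented format):

import json

# Illustrative only: field names follow the respond() parameters below;
# grouping them under "hierarchy" is an assumption, not the documented schema.
record = {
    "one_liner": 'OAuth implementation "complete"',  # embedded quotes need escaping
    "key_tags": ["oauth", "auth"],
    "hierarchy": {
        "project_id": "auth_system",
        "feature": "oauth",
        "component": "middleware",
        "deliverable": "auth_flow",
        "subtask": "jwt_validation",
        "task": "implement_verify",
        "workflow_id": "sprint_1",
    },
}

# json.dumps handles quoting and escaping, so free-form text stays valid JSON.
print(json.dumps(record, indent=2))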

Installation

pip install giint-llm-intelligence

Usage

As MCP Server

llm-intelligence-server

Direct API Usage

from llm_intelligence import respond

# LLM writes response file anywhere
with open("/tmp/my_response.md", "w") as f:
    f.write("I implemented OAuth authentication...")

# System organizes everything automatically  
result = respond(
    qa_id="abc123",
    response_file_path="/tmp/my_response.md",  # Any path
    one_liner="OAuth implementation complete",
    key_tags=["oauth", "auth"],
    involved_files=["auth.py", "oauth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware", 
    deliverable="auth_flow",
    subtask="jwt_validation",
    task="implement_verify",
    workflow_id="sprint_1"
)

Architecture

  • Core Module: llm_intelligence.core - All business logic
  • MCP Server: llm_intelligence.mcp_server - Thin wrapper for MCP integration
  • Organized Storage: qa_sets/{qa_id}/responses/response_XXX/response.md (see the sketch after this list)
  • JSON Tracking: Full conversation history with emergent metadata
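Assuming the path pattern above, a minimal sketch of reading back organized responses (the qa_sets/ location and the response_XXX numbering are assumptions, not part of the documented API):

from pathlib import Path

# Sketch only: walk the organized storage tree described above.
def list_responses(qa_id: str, root: Path = Path("qa_sets")) -> list[Path]:
    responses_dir = root / qa_id / "responses"
    if not responses_dir.is_dir():
        return []
    # Each response_XXX directory is assumed to hold a single response.md file.
    return sorted(responses_dir.glob("response_*/response.md"))

for path in list_responses("abc123"):
    print(path, "->", path.read_text()[:80])  # preview the first 80 characters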

Cognitive Flow

  1. Fire 1: LLM writes curated response file
  2. Fire 2: LLM does actual work (Read, Edit, Bash)
  3. Fire 3: LLM reports tool usage (optional)
  4. Fire 4: LLM harvests everything with respond()

The system handles all organization, cleanup, and tracking automatically. A compact sketch of one full cycle follows.
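The sketch below maps the four fires onto the documented API; the file path and work steps are illustrative, and the respond() arguments repeat the Direct API Usage example above:

from llm_intelligence import respond

# Fire 1: write the curated response file (any path works).
response_path = "/tmp/my_response.md"
with open(response_path, "w") as f:
    f.write("Implemented OAuth authentication and JWT validation.")

# Fire 2: do the actual work (Read, Edit, Bash) in the conversation channel.
# Fire 3: optionally report tool usage in the conversation channel.

# Fire 4: harvest everything; the system files the response under qa_sets/.
result = respond(
    qa_id="abc123",
    response_file_path=response_path,
    one_liner="OAuth implementation complete",
    key_tags=["oauth", "auth"],
    involved_files=["auth.py", "oauth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware",
    deliverable="auth_flow",
    subtask="jwt_validation",
    task="implement_verify",
    workflow_id="sprint_1",
)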

Download files

Download the file for your platform.

Source Distribution

giint_llm_intelligence-0.1.2.tar.gz (11.4 kB)


Built Distribution


giint_llm_intelligence-0.1.2-py3-none-any.whl (13.9 kB)


File details

Details for the file giint_llm_intelligence-0.1.2.tar.gz.

File metadata

  • Download URL: giint_llm_intelligence-0.1.2.tar.gz
  • Size: 11.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for giint_llm_intelligence-0.1.2.tar.gz

  • SHA256: 0d1cd0dde5175afaf0e4b5314c320d75395852ba8208cd816866df3b59a81eb7
  • MD5: 5e0b697fcda6d6ab9255318c6bc521e2
  • BLAKE2b-256: f75b58621b145ca48bab7bf01443dbeeea3630d2e616f0308d7e5fdf55a6b055


File details

Details for the file giint_llm_intelligence-0.1.2-py3-none-any.whl.

File hashes

Hashes for giint_llm_intelligence-0.1.2-py3-none-any.whl

  • SHA256: f2e81b6346e09dacd3227405e30c5abe3cfad589dd9ffbe95c7c3abce562754c
  • MD5: 4f88df7331b72404d1aaade5ec213336
  • BLAKE2b-256: fe681f24a065b73d56eb57bb960dcd0c399d5d2381d7b5df075a7a00c8c496b5

