GIINT - General Intuitive Intelligence for Neural Transformers: Multi-fire cognitive architecture

LLM Intelligence

A systematic multi-fire cognitive response system that separates AI thinking from communication through response files and organized conversation tracking.

Overview

Motivated by Google research indicating that embedding geometry breaks down at scale, this system enables LLMs to express their full intelligence across multiple "fires" (successive response turns), separating cognition into two channels:

  • Conversation Channel: AI's thinking space (tool calls, analysis, exploration)
  • Response Channel: AI's deliberate communication (curated response files)

Key Features

  • Arbitrary Response Files: The LLM writes response files anywhere; the system organizes them automatically
  • Emergent Tracking: Free-form project hierarchy (project → feature → component → deliverable → subtask → task → workflow)
  • STARLOG Integration: Logs to a debug diary in a structured format
  • Cognitive Separation: Clean separation between thinking and communication
  • JSON Safety: All content is properly escaped via Python's json module (see the sketch below)
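
As a concrete illustration of the JSON-safety point above, here is a minimal sketch of escaping through the standard json module; the record fields are hypothetical, not GIINT's actual schema:

import json

# Hypothetical tracking record; GIINT's real schema may differ.
record = {
    "one_liner": 'OAuth "flow" complete\nwith quotes and a newline',
    "key_tags": ["oauth", "auth"],
}

# json.dumps escapes quotes, newlines, and control characters, so
# arbitrary LLM-written text cannot corrupt the tracking JSON.
serialized = json.dumps(record, indent=2)

# The content round-trips losslessly.
assert json.loads(serialized) == record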

Installation

pip install giint-llm-intelligence

Usage

As MCP Server

llm-intelligence-server
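
Once the server process is running, it can be driven like any other stdio MCP server. Below is a minimal connection sketch assuming the official mcp Python SDK; the tool names GIINT exposes are not documented here, so they are discovered via list_tools:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch llm-intelligence-server as a stdio subprocess.
    params = StdioServerParameters(command="llm-intelligence-server")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())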

Direct API Usage

from llm_intelligence import respond

# LLM writes response file anywhere
with open("/tmp/my_response.md", "w") as f:
    f.write("I implemented OAuth authentication...")

# System organizes everything automatically  
result = respond(
    qa_id="abc123",
    response_file_path="/tmp/my_response.md",  # Any path
    one_liner="OAuth implementation complete",
    key_tags=["oauth", "auth"],
    involved_files=["auth.py", "oauth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware", 
    deliverable="auth_flow",
    subtask="jwt_validation",
    task="implement_verify",
    workflow_id="sprint_1"
)

Architecture

  • Core Module: llm_intelligence.core - All business logic
  • MCP Server: llm_intelligence.mcp_server - Thin wrapper for MCP integration
  • Organized Storage: qa_sets/{qa_id}/responses/response_XXX/response.md (see the traversal sketch after this list)
  • JSON Tracking: Full conversation history with emergent metadata
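
Given the organized-storage layout above, stored responses can be walked with nothing but the standard library. A minimal, hypothetical traversal sketch, assuming qa_sets/ lives under the current working directory:

from pathlib import Path

def iter_responses(qa_id: str, root: Path = Path("qa_sets")):
    """Yield response.md paths for one QA id, in response_XXX order."""
    responses_dir = root / qa_id / "responses"
    for response_dir in sorted(responses_dir.glob("response_*")):
        yield response_dir / "response.md"

# Hypothetical usage for the qa_id from the example above.
for path in iter_responses("abc123"):
    print(path, "-", path.read_text()[:60])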

Cognitive Flow

  1. Fire 1: LLM writes curated response file
  2. Fire 2: LLM does actual work (Read, Edit, Bash)
  3. Fire 3: LLM reports tool usage (optional)
  4. Fire 4: LLM harvests everything with respond()

The system handles all organization, cleanup, and tracking automatically; the sketch below maps the four fires onto the respond() API.
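
A minimal end-to-end sketch of that flow, reusing the respond() signature from the Usage section above (file contents and IDs are illustrative):

from llm_intelligence import respond

# Fire 1: write the curated response file.
with open("/tmp/fix_report.md", "w") as f:
    f.write("Patched the JWT expiry check and added tests.")

# Fire 2: do the actual work (in practice, Read/Edit/Bash tool calls).
# ... edit auth.py, run the test suite ...

# Fire 3 (optional): report tool usage in the conversation channel.

# Fire 4: harvest everything into organized storage.
result = respond(
    qa_id="abc123",
    response_file_path="/tmp/fix_report.md",
    one_liner="JWT expiry bug fixed",
    key_tags=["jwt", "bugfix"],
    involved_files=["auth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware",
    deliverable="auth_flow",
    subtask="jwt_validation",
    task="fix_expiry",
    workflow_id="sprint_1",
)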
