
GIINT - General Intuitive Intelligence for Neural Transformers: Multi-fire cognitive architecture


LLM Intelligence

A systematic multi-fire cognitive response system that separates AI thinking from communication through response files and organized conversation tracking.

Overview

Motivated by Google research suggesting that embedding geometry does not hold up at scale, this system enables LLMs to use multiple "fires" for full intelligence expression, separating cognition into two channels:

  • Conversation Channel: AI's thinking space (tool calls, analysis, exploration)
  • Response Channel: AI's deliberate communication (curated response files)

Key Features

  • Arbitrary Response Files: The LLM writes response files anywhere; the system organizes them automatically
  • Emergent Tracking: Free-form project hierarchy (project → feature → component → deliverable → subtask → task → workflow)
  • STARLOG Integration: Logs to the debug diary in a structured format
  • Cognitive Separation: Clean separation between thinking and communication
  • JSON Safety: All content is properly escaped through the json module (see the sketch below)
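As a concrete illustration of the JSON Safety point, routing all tracked content through the standard json module (rather than hand-built string templates) guarantees that quotes, newlines, and non-ASCII text are handled correctly. This is a minimal sketch; the record layout is illustrative, not GIINT's actual schema:

import json

# Hypothetical tracking record; field names are illustrative only.
record = {
    "qa_id": "abc123",
    "one_liner": 'Handles "quotes", newlines\nand non-ASCII: café',
    "key_tags": ["oauth", "auth"],
}

# json.dumps handles all quoting and escaping; ensure_ascii=False keeps
# non-ASCII text readable instead of \u-escaping it.
serialized = json.dumps(record, ensure_ascii=False, indent=2)

# The payload round-trips without loss.
assert json.loads(serialized) == record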

Installation

pip install giint-llm-intelligence

Usage

As MCP Server

llm-intelligence-server

Direct API Usage

from llm_intelligence import respond

# LLM writes response file anywhere
with open("/tmp/my_response.md", "w") as f:
    f.write("I implemented OAuth authentication...")

# System organizes everything automatically  
result = respond(
    qa_id="abc123",
    response_file_path="/tmp/my_response.md",  # Any path
    one_liner="OAuth implementation complete",
    key_tags=["oauth", "auth"],
    involved_files=["auth.py", "oauth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware", 
    deliverable="auth_flow",
    subtask="jwt_validation",
    task="implement_verify",
    workflow_id="sprint_1"
)
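After the call, the system copies the response file into its organized storage and handles any cleanup (see Architecture and Cognitive Flow below), so the original path can be any writable location, temporary or otherwise.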

Architecture

  • Core Module: llm_intelligence.core - All business logic
  • MCP Server: llm_intelligence.mcp_server - Thin wrapper for MCP integration
  • Organized Storage: qa_sets/{qa_id}/responses/response_XXX/response.md (see the sketch below)
  • JSON Tracking: Full conversation history with emergent metadata
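For intuition, the documented storage layout could be produced along the following lines. This is a minimal sketch assuming response_XXX is a zero-padded counter; it is not the package's internal code:

import shutil
from pathlib import Path

def organize_response(qa_id: str, source_path: str, root: str = "qa_sets") -> Path:
    """Copy an arbitrary response file into the documented layout:
    qa_sets/{qa_id}/responses/response_XXX/response.md
    """
    responses_dir = Path(root) / qa_id / "responses"
    responses_dir.mkdir(parents=True, exist_ok=True)
    # Next slot by count: response_001, response_002, ...
    # (a sketch; not collision-safe if earlier slots are deleted)
    index = len(list(responses_dir.glob("response_*"))) + 1
    slot = responses_dir / f"response_{index:03d}"
    slot.mkdir()
    destination = slot / "response.md"
    shutil.copyfile(source_path, destination)
    return destination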

Cognitive Flow

  1. Fire 1: LLM writes curated response file
  2. Fire 2: LLM does actual work (Read, Edit, Bash)
  3. Fire 3: LLM reports tool usage (optional)
  4. Fire 4: LLM harvests everything with respond()

The system handles all organization, cleanup, and tracking automatically, as the end-to-end sketch below illustrates.
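Put together, one full cycle maps onto the documented respond() signature as follows; file contents, tags, and IDs here are illustrative:

from llm_intelligence import respond

# Fire 1: write the curated response file (the response channel).
with open("/tmp/fix_summary.md", "w") as f:
    f.write("Fixed the token-refresh race condition in auth.py...")

# Fires 2-3 happen in the conversation channel (reads, edits, shell
# commands, optional tool-usage report) and leave no trace here.

# Fire 4: harvest everything into organized, tracked storage.
respond(
    qa_id="def456",
    response_file_path="/tmp/fix_summary.md",
    one_liner="Token-refresh race condition fixed",
    key_tags=["oauth", "bugfix"],
    involved_files=["auth.py"],
    project_id="auth_system",
    feature="oauth",
    component="middleware",
    deliverable="auth_flow",
    subtask="token_refresh",
    task="fix_race",
    workflow_id="sprint_1",
)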
