
🧠 Brain System

A Multi-Agent Cognitive Architecture Powered by LangGraph

Five specialized AI agents, modeled after the human brain, collaborate to process your input and generate thoughtful, nuanced responses.

Python 3.10+ PyPI LangGraph License: MIT


🧩 How It Works

Brain System maps biological brain functions to specialized AI agents that process every input in parallel, just like the human brain:

graph LR
    A[User Input] --> B[🔵 Sensory Agent<br>Thalamus]
    B --> C[🟣 Memory Agent<br>Hippocampus]
    B --> D[🟢 Logic Agent<br>Frontal Lobe]
    B --> E[🔴 Emotional Agent<br>Amygdala]
    C --> F[🟡 Executive Agent<br>Prefrontal Cortex]
    D --> F
    E --> F
    F --> G[Final Response]
| Agent | Brain Analog | What It Does |
| --- | --- | --- |
| Sensory | Thalamus & Sensory Cortex | Multi-layer signal classification, pattern recognition, salience detection |
| Memory | Hippocampus & DLPFC | LLM-driven contextual synthesis, associative linking, temporal weighting |
| Logic | Left Frontal Lobe & DLPFC | Deductive/inductive reasoning, fallacy detection, counter-arguments |
| Emotional | Amygdala, Insula & Cingulate | Emotional profiling, empathy reading, ethical safety checks |
| Executive | Full Prefrontal Cortex | Conflict resolution between agents, response calibration, integrated output |
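The fan-out/fan-in flow in the diagram above can be sketched in plain Python. This is an illustrative stand-in, not the package's code: stdlib threads replace LangGraph's parallel nodes, and every agent function here is a toy stub.

```python
# Sketch of the pipeline: one sensory pass, three parallel analyses,
# then an executive merge. All names and logic are illustrative stubs.
from concurrent.futures import ThreadPoolExecutor

def sensory(text: str) -> dict:
    # Thalamus: classify and tag the raw input
    return {"text": text, "kind": "question" if text.endswith("?") else "statement"}

def memory(signal: dict) -> str:
    return f"memory context for: {signal['text']}"

def logic(signal: dict) -> str:
    return f"logical analysis of a {signal['kind']}"

def emotional(signal: dict) -> str:
    return f"neutral tone detected in: {signal['text']}"

def executive(signals: dict) -> str:
    # Prefrontal cortex: reconcile the parallel signals into one response
    return " | ".join(signals[k] for k in ("memory", "logic", "emotional"))

def think(text: str) -> str:
    signal = sensory(text)
    # Memory, Logic, and Emotional agents run concurrently on the same signal
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {name: pool.submit(fn, signal)
                   for name, fn in [("memory", memory), ("logic", logic), ("emotional", emotional)]}
        signals = {name: f.result() for name, f in futures.items()}
    return executive(signals)
```

In the real package this fan-out is expressed as a compiled LangGraph state graph rather than a thread pool, but the shape of the data flow is the same.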

🎭 Persona Mode

Upload a biography or autobiography, and the entire Brain responds as that person would.

The system extracts personality traits, speech patterns, reasoning style, and emotional tendencies, then injects tailored context into each agent. The Logic Agent thinks in their reasoning style, the Emotional Agent mirrors their emotional tendencies, and the Executive Agent speaks in their voice.

Example: Upload Nelson Mandela's autobiography → ask about dealing with conflict → get a response reflecting his values of reconciliation, strategic patience, and ubuntu philosophy.
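The idea of role-specific injection can be sketched as follows. The profile fields and agent names below are assumptions for illustration, not the package's actual schema:

```python
# Illustrative sketch of role-specific persona injection: each agent
# receives only the slice of the extracted profile relevant to its job.
# PROFILE and AGENT_FIELDS are hypothetical, not the real data model.
PROFILE = {
    "name": "Example Persona",
    "reasoning_style": "principled, long-horizon",
    "emotional_tendencies": "calm under provocation",
    "speech_patterns": "measured, inclusive 'we'",
}

AGENT_FIELDS = {
    "logic": ["reasoning_style"],
    "emotional": ["emotional_tendencies"],
    "executive": ["speech_patterns", "reasoning_style"],
}

def persona_context(agent: str, profile: dict) -> str:
    # Build the context string injected into one agent's prompt
    fields = AGENT_FIELDS.get(agent, [])
    return "; ".join(f"{k}={profile[k]}" for k in fields)
```

With this split, the Logic Agent never sees speech patterns and the Executive Agent gets both voice and reasoning style, mirroring the division of labor described above.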

📦 Install

pip install brain-system

For the web UI, install the optional extra: pip install brain-system[web]

🚀 Quick Start: Library Usage

from brain_system import BrainWrapper

# Create a Brain (choose provider: "gemini", "openai", or "ollama")
brain = BrainWrapper(provider="ollama", model_name="mistral")

# Process input through all 5 agents
result = brain.think("What is the meaning of justice?")

# Get the final synthesized response
print(result.response)

# Inspect individual agent signals
print(result.sensory)     # Thalamus: input classification
print(result.memory)      # Hippocampus: memory context
print(result.logic)       # Frontal Lobe: logical analysis
print(result.emotional)   # Amygdala: emotional analysis

Persona Mode

brain.load_persona("gandhi_autobiography.pdf")
result = brain.think("How should we deal with injustice?")
print(result.response)    # Responds in Gandhi's voice

brain.clear_persona()     # Revert to default

Memory Management

# Custom memory file location
brain = BrainWrapper(provider="gemini", memory_path="./my_memory.json")

# Clear all stored memories
brain.clear_memory()
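The long-term store is described as a persistent JSON file with keyword retrieval. A minimal sketch of that design (hypothetical class; the real memory_store.py may differ):

```python
# Minimal persistent keyword-retrieval memory, stored as a JSON list.
# Illustrative only -- not the package's actual MemoryStore implementation.
import json
import os

class KeywordMemory:
    def __init__(self, path: str = "memory.json"):
        self.path = path
        self.items = []
        if os.path.exists(path):
            with open(path) as f:
                self.items = json.load(f)

    def remember(self, text: str) -> None:
        # Append and persist immediately so memories survive restarts
        self.items.append(text)
        with open(self.path, "w") as f:
            json.dump(self.items, f)

    def recall(self, query: str, limit: int = 3) -> list:
        # Rank stored memories by word overlap with the query
        words = set(query.lower().split())
        scored = [(len(words & set(t.lower().split())), t) for t in self.items]
        return [t for s, t in sorted(scored, key=lambda p: -p[0]) if s][:limit]

    def clear(self) -> None:
        self.items = []
        if os.path.exists(self.path):
            os.remove(self.path)
```

Keyword overlap is crude but dependency-free; the Contributing section below suggests embedding-based retrieval as the natural upgrade.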

API Reference

| Class / Method | Description |
| --- | --- |
| BrainWrapper(provider, model_name, memory_path) | Create a Brain instance |
| .think(input) → BrainResult | Process input through the 5-agent pipeline |
| .load_persona(filepath) | Load a persona from .txt or .pdf |
| .clear_persona() | Remove the active persona |
| .clear_memory() | Erase all long-term memories |
| .persona_active | bool: is a persona loaded? |
| .persona_name | Name of the active persona |
| BrainResult.response | Final synthesized response |
| BrainResult.agent_signals | dict of each agent's raw output |
| BrainResult.sensory / .memory / .logic / .emotional | Shortcut accessors |

See examples/ for complete usage scripts.


๐Ÿ–ฅ๏ธ Development Setup

Clone & Install

git clone https://github.com/shivamtyagi18/BRAIN.git
cd BRAIN
pip install -e ".[web,dev]"

Configure (Optional)

Create a .env file in the project root for cloud providers:

# Only needed if using Gemini or OpenAI
GOOGLE_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here

No API key is needed for Ollama; it runs entirely on your local machine.

Run

Web UI

python -m brain_system.app

Open http://localhost:5001 in your browser.

Command Line

brain-cli

๐Ÿ–ฅ๏ธ Web Interface

The web UI features:

  • Provider selection: choose Gemini, OpenAI, or Ollama at startup
  • Persona upload: drag & drop a .txt or .pdf biography
  • Live chat: dark-mode interface with agent activity indicators
  • Agent transparency: expand each agent's internal reasoning with "Show agent signals"
  • Mid-conversation persona switching: change or clear a persona without restarting
  • New Chat: full reset button to start fresh
  • Clear Memory: wipe stored memories without restarting

🤖 Supported LLM Providers

| Provider | Requirements | Best For |
| --- | --- | --- |
| Ollama | Ollama installed locally | Privacy, offline use, no cost |
| Gemini | GOOGLE_API_KEY in .env | High-quality responses |
| OpenAI | OPENAI_API_KEY in .env | GPT-4 class models |

Using Ollama (Local)

# Install Ollama, then pull a model:
ollama pull mistral

# For uncensored output, try:
ollama pull dolphin-mistral

๐Ÿ“ Project Structure

brain-system/
├── pyproject.toml                  # Package config & dependencies
├── run.sh                          # Single-command launcher
├── examples/
│   ├── basic_usage.py              # Minimal library usage
│   ├── persona_mode.py             # Persona loading example
│   └── custom_provider.py          # Provider switching example
└── brain_system/
    ├── __init__.py                 # Public API exports
    ├── wrapper.py                  # BrainWrapper: developer entry point
    ├── app.py                      # Flask web server (optional)
    ├── main.py                     # CLI entry point
    ├── agents/
    │   ├── base_agent.py           # Abstract base with persona injection
    │   ├── sensory_agent.py        # Input parsing (Thalamus)
    │   ├── memory_agent.py         # Context retrieval (Hippocampus)
    │   ├── emotional_agent.py      # Sentiment analysis (Amygdala)
    │   ├── logic_agent.py          # Reasoning (Frontal Lobe)
    │   └── executive_agent.py      # Decision synthesis (PFC)
    ├── core/
    │   ├── orchestrator.py         # LangGraph workflow engine
    │   ├── llm_interface.py        # Multi-provider LLM factory
    │   ├── memory_store.py         # Persistent memory (JSON)
    │   ├── document_loader.py      # TXT/PDF document ingestion
    │   └── persona.py              # Persona extraction & injection
    └── web/
        ├── templates/index.html    # Chat interface
        └── static/
            ├── css/style.css       # Dark-mode theme
            └── js/app.js           # Frontend logic

🔧 Architecture Highlights

  • LangGraph Orchestration: agents run as nodes in a compiled state graph, with parallel execution for Memory, Logic, and Emotional processing
  • Modular LLM Factory: swap providers with a single parameter; no code changes needed
  • Dual Memory: short-term (conversation context) plus long-term (persistent JSON store with keyword retrieval)
  • Persona Injection: role-specific context; each agent gets the aspects of the persona profile tailored to its function

๐Ÿค Contributing

Contributions are welcome! Some ideas:

  • Vector memory: replace JSON keyword search with embedding-based retrieval
  • Additional agents: add a Creativity Agent, Social Agent, or Moral Reasoning Agent
  • Streaming responses: real-time token streaming in the web UI
  • Multi-turn persona: let the persona evolve based on the conversation
  • Voice interface: add speech-to-text input and text-to-speech output

๐Ÿ“ License

MIT License; see LICENSE for details.


Built with 🧠 by mapping neuroscience to multi-agent AI
