A multi-agent cognitive architecture powered by LangGraph: five specialized AI agents modeled after the human brain.


🧠 Brain System

A Multi-Agent Cognitive Architecture Powered by LangGraph

Five specialized AI agents, modeled after the human brain, collaborate to process your input and generate thoughtful, nuanced responses.

Python 3.10+ PyPI LangGraph License: MIT


🧩 How It Works

Brain System maps biological brain functions to specialized AI agents that process every input in parallel, just like the human brain:

graph LR
    A[User Input] --> B[🔵 Sensory Agent<br>Thalamus]
    B --> C[🟣 Memory Agent<br>Hippocampus]
    B --> D[🟢 Logic Agent<br>Frontal Lobe]
    B --> E[🔴 Emotional Agent<br>Amygdala]
    C --> F[🟡 Executive Agent<br>Prefrontal Cortex]
    D --> F
    E --> F
    F --> G[Final Response]
| Agent | Brain Analog | What It Does |
|---|---|---|
| Sensory | Thalamus & Sensory Cortex | Multi-layer signal classification, pattern recognition, salience detection |
| Memory | Hippocampus & DLPFC | LLM-driven contextual synthesis, associative linking, temporal weighting |
| Logic | Left Frontal Lobe & DLPFC | Deductive/inductive reasoning, fallacy detection, counter-arguments |
| Emotional | Amygdala, Insula & Cingulate | Emotional profiling, empathy reading, ethical safety checks |
| Executive | Full Prefrontal Cortex | Conflict resolution between agents, response calibration, integrated output |
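The fan-out/fan-in shape above can be sketched in plain Python. This is a minimal illustration using `concurrent.futures`; the real package orchestrates the agents with LangGraph and backs each one with an LLM, so the agent functions here are stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in agents: the real ones call an LLM; these just label the input.
def sensory(text): return f"[sensory] classified: {text!r}"
def memory(signal): return f"[memory] context for {signal}"
def logic(signal): return f"[logic] analysis of {signal}"
def emotional(signal): return f"[emotional] read of {signal}"
def executive(mem, log, emo): return f"[executive] synthesis of ({mem} | {log} | {emo})"

def think(text: str) -> str:
    signal = sensory(text)  # sequential preprocessing step
    # Memory, Logic, and Emotional run in parallel on the sensory signal
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(fn, signal) for fn in (memory, logic, emotional)]
        mem, log, emo = (f.result() for f in futures)
    return executive(mem, log, emo)  # fan-in: conflict resolution + final output

print(think("What is justice?"))
```

The point is only the dataflow: one sequential classification step, three independent analyses, one synthesis step.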

🎭 Persona Mode

Upload a biography or autobiography, and the entire Brain responds as that person would.

The system extracts personality traits, speech patterns, reasoning style, and emotional tendencies, then injects tailored context into each agent. The Logic Agent thinks in their reasoning style, the Emotional Agent mirrors their emotional tendencies, and the Executive Agent speaks in their voice.

Example: Upload Nelson Mandela's autobiography → ask about dealing with conflict → get a response reflecting his values of reconciliation, strategic patience, and ubuntu philosophy.

📦 Install

pip install brain-system

For the web UI, install the optional extra: pip install brain-system[web]

🚀 Quick Start: Library Usage

from brain_system import BrainWrapper

# Create a Brain (choose provider: "gemini", "openai", or "ollama")
brain = BrainWrapper(provider="ollama", model_name="mistral")

# Process input through all 5 agents
result = brain.think("What is the meaning of justice?")

# Get the final synthesized response
print(result.response)

# Inspect individual agent signals
print(result.sensory)     # Thalamus - input classification
print(result.memory)      # Hippocampus - memory context
print(result.logic)       # Frontal Lobe - logical analysis
print(result.emotional)   # Amygdala - emotional analysis

Persona Mode

Upload a biography/autobiography (.txt or .pdf) and the Brain responds as that person. Pass a relative path (resolved from your working directory) or an absolute path:

# Relative path - looks in the directory where you run your script
brain.load_persona("gandhi_autobiography.pdf")

# Absolute path - works from anywhere
brain.load_persona("/Users/you/documents/gandhi_autobiography.pdf")

result = brain.think("How should we deal with injustice?")
print(result.response)    # Responds in Gandhi's voice

brain.clear_persona()     # Revert to default

Memory Management

# Custom memory file location
brain = BrainWrapper(provider="gemini", memory_path="./my_memory.json")

# Clear all stored memories
brain.clear_memory()
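The long-term store behind `memory_path` is described elsewhere in this README as a persistent JSON file with keyword retrieval. As an illustration only (the package's actual on-disk schema is internal and may differ), a minimal JSON-backed keyword memory could look like:

```python
import json
from pathlib import Path

class KeywordMemory:
    """Toy persistent memory: a JSON list of entries with keyword-overlap retrieval."""

    def __init__(self, path: str):
        self.path = Path(path)
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, text: str) -> None:
        self.entries.append(text)
        self.path.write_text(json.dumps(self.entries))  # persist after each write

    def recall(self, query: str, k: int = 3) -> list[str]:
        words = set(query.lower().split())
        # Rank stored entries by how many query words they share
        scored = sorted(self.entries,
                        key=lambda e: len(words & set(e.lower().split())),
                        reverse=True)
        return scored[:k]

    def clear(self) -> None:
        self.entries = []
        if self.path.exists():
            self.path.unlink()  # mirrors brain.clear_memory()
```

A vector store would replace the keyword-overlap ranking with embedding similarity; the persistence and `clear()` shape stays the same.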

🔌 Wrap Your Own Agent

Already have an agent? Wrap it with Brain's cognitive pipeline using AgentWrapper. Your function receives a BrainContext with all four preprocessing agent signals:

from brain_system import AgentWrapper, BrainContext

def my_agent(query: str, ctx: BrainContext) -> str:
    """Your agent logic - use the brain signals however you want."""
    return f"Logic: {ctx.logic[:200]}\nEmotion: {ctx.emotional[:200]}"

agent = AgentWrapper(my_agent, provider="openai")
result = agent.run("Should AI be regulated?")
print(result.response)       # Your agent's response
print(result.sensory)         # Brain's sensory signal (also available)

Also works as a decorator:

@AgentWrapper(provider="ollama", model_name="mistral")
def my_agent(query: str, ctx: BrainContext) -> str:
    return f"Based on logic: {ctx.logic[:200]}"

result = my_agent("What is justice?")

API Reference

| Class / Method | Description |
|---|---|
| `BrainWrapper(provider, model_name, memory_path)` | Create a standalone Brain instance |
| `.think(input)` → `BrainResult` | Process input through the 5-agent pipeline |
| `.load_persona(filepath)` | Load a persona from .txt or .pdf |
| `.clear_persona()` | Remove the active persona |
| `.clear_memory()` | Erase all long-term memories |
| `.persona_active` | `bool`: is a persona loaded? |
| `.persona_name` | Name of the active persona |
| `AgentWrapper(agent_fn, provider, ...)` | Wrap your agent with brain processing |
| `.run(input)` → `BrainResult` | Run the brain plus your agent |
| `BrainContext` | Dataclass with `.query`, `.sensory`, `.memory`, `.logic`, `.emotional` |
| `BrainResult.response` | Final synthesized response |
| `BrainResult.agent_signals` | Dict of each agent's raw output |
| `BrainResult.sensory` / `.memory` / `.logic` / `.emotional` | Shortcut accessors |
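For orientation, `BrainContext` and `BrainResult` behave roughly like the following dataclasses. This is a sketch inferred from the table above; the field types, defaults, and the property-over-dict implementation of the shortcut accessors are assumptions, not the package's actual definitions:

```python
from dataclasses import dataclass, field

@dataclass
class BrainContext:
    # Signals handed to a wrapped agent function (see "Wrap Your Own Agent")
    query: str
    sensory: str
    memory: str
    logic: str
    emotional: str

@dataclass
class BrainResult:
    response: str  # final synthesized output
    agent_signals: dict = field(default_factory=dict)  # raw per-agent output

    # Shortcut accessors over agent_signals (assumed implementation)
    @property
    def sensory(self): return self.agent_signals.get("sensory")
    @property
    def memory(self): return self.agent_signals.get("memory")
    @property
    def logic(self): return self.agent_signals.get("logic")
    @property
    def emotional(self): return self.agent_signals.get("emotional")
```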

See examples/ for complete usage scripts.


๐Ÿ–ฅ๏ธ Development Setup

Clone & Install

git clone https://github.com/shivamtyagi18/BRAIN.git
cd BRAIN
pip install -e ".[web,dev]"

Configure (Optional)

Create a .env file in the project root for cloud providers:

# Only needed if using Gemini or OpenAI
GOOGLE_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here

No API key is needed for Ollama, which runs entirely on your local machine.
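The package presumably loads `.env` for you (for example via python-dotenv). If you script against these keys directly outside the package, a minimal stdlib loader is enough; this is a sketch, not the package's own mechanism:

```python
import os

def load_env(path: str = ".env") -> None:
    """Parse KEY=value lines into os.environ, skipping comments and blanks."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                # setdefault: real environment variables win over .env entries
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env is fine, e.g. when using Ollama
```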

Run

Web UI

python -m brain_system.app

Open http://localhost:5001 in your browser.

Command Line

brain-cli

๐Ÿ–ฅ๏ธ Web Interface

The web UI features:

  • Provider selection: choose Gemini, OpenAI, or Ollama at startup
  • Persona upload: drag & drop a .txt or .pdf biography
  • Live chat: dark-mode interface with agent activity indicators
  • Agent transparency: expand each agent's internal reasoning with "Show agent signals"
  • Mid-conversation persona switching: change or clear the persona without restarting
  • New Chat: full reset button to start fresh
  • Clear Memory: wipe stored memories without restarting

🤖 Supported LLM Providers

| Provider | Requirements | Best For |
|---|---|---|
| Ollama | Ollama installed locally | Privacy, offline use, no cost |
| Gemini | `GOOGLE_API_KEY` in `.env` | High-quality responses |
| OpenAI | `OPENAI_API_KEY` in `.env` | GPT-4-class models |

Using Ollama (Local)

# Install Ollama, then pull a model:
ollama pull mistral

# For uncensored output, try:
ollama pull dolphin-mistral

๐Ÿ“ Project Structure

brain-system/
├── pyproject.toml                  # Package config & dependencies
├── run.sh                          # Single-command launcher
├── examples/
│   ├── basic_usage.py              # Minimal library usage
│   ├── persona_mode.py             # Persona loading example
│   └── custom_provider.py          # Provider switching example
└── brain_system/
    ├── __init__.py                 # Public API exports
    ├── wrapper.py                  # BrainWrapper - developer entry point
    ├── app.py                      # Flask web server (optional)
    ├── main.py                     # CLI entry point
    ├── agents/
    │   ├── base_agent.py           # Abstract base with persona injection
    │   ├── sensory_agent.py        # Input parsing (Thalamus)
    │   ├── memory_agent.py         # Context retrieval (Hippocampus)
    │   ├── emotional_agent.py      # Sentiment analysis (Amygdala)
    │   ├── logic_agent.py          # Reasoning (Frontal Lobe)
    │   └── executive_agent.py      # Decision synthesis (PFC)
    ├── core/
    │   ├── orchestrator.py         # LangGraph workflow engine
    │   ├── llm_interface.py        # Multi-provider LLM factory
    │   ├── memory_store.py         # Persistent memory (JSON)
    │   ├── document_loader.py      # TXT/PDF document ingestion
    │   └── persona.py              # Persona extraction & injection
    └── web/
        ├── templates/index.html    # Chat interface
        └── static/
            ├── css/style.css       # Dark-mode theme
            └── js/app.js           # Frontend logic

🔧 Architecture Highlights

  • LangGraph Orchestration: agents run as nodes in a compiled state graph, with parallel execution for Memory, Logic, and Emotional processing
  • Modular LLM Factory: swap providers with a single parameter; no code changes needed
  • Dual Memory: short-term (conversation context) plus long-term (persistent JSON store with keyword retrieval)
  • Persona Injection: role-specific context; each agent gets the aspects of the persona profile tailored to its function
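The role-specific persona injection in the last bullet can be illustrated with a small sketch. The field names and slicing are hypothetical; the package's real persona profile schema is internal:

```python
# A persona profile extracted from a biography (hypothetical schema)
profile = {
    "name": "Example Person",
    "reasoning_style": "dialectical, principle-first",
    "emotional_tendencies": "restrained, empathetic",
    "voice": "formal, measured cadence",
}

# Each agent receives only the slice of the profile relevant to its function
PERSONA_SLICES = {
    "logic": ["reasoning_style"],
    "emotional": ["emotional_tendencies"],
    "executive": ["name", "voice"],
}

def persona_context(agent: str) -> str:
    """Build the persona context string injected into one agent's prompt."""
    keys = PERSONA_SLICES.get(agent, [])
    return "; ".join(f"{k}={profile[k]}" for k in keys)

print(persona_context("logic"))      # reasoning style only
print(persona_context("executive"))  # name and voice
```

The design point is that the Logic Agent never sees the voice description and the Executive Agent never sees the raw reasoning-style notes; each prompt stays focused on that agent's job.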

๐Ÿค Contributing

Contributions are welcome! Some ideas:

  • Vector memory: replace JSON keyword search with embedding-based retrieval
  • Additional agents: add a Creativity Agent, Social Agent, or Moral Reasoning Agent
  • Streaming responses: real-time token streaming in the web UI
  • Multi-turn persona: let the persona evolve based on the conversation
  • Voice interface: add speech-to-text input and text-to-speech output
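For the vector-memory idea, the core change is ranking by vector similarity instead of keyword overlap. A toy sketch, using bag-of-words counts as a stand-in for real embeddings (a production version would call an embedding model and a vector index instead):

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Stand-in for an embedding model: sparse word-count vector
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k stored memories most similar to the query."""
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]
```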

๐Ÿ“ License

MIT License; see LICENSE for details.


Built with 🧠 by mapping neuroscience to multi-agent AI

Download files

Download the file for your platform.

Source Distribution

brain_system-0.2.0.tar.gz (407.8 kB)

Built Distribution

brain_system-0.2.0-py3-none-any.whl (40.4 kB)

File details

Details for the file brain_system-0.2.0.tar.gz.

File metadata

  • Size: 407.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 21e3b8f5a1c9799ed19b2d15e8fe7929d458f807fc844ebeb8694473e742a049 |
| MD5 | 2e7459e93e484170d682a92ce17e33ef |
| BLAKE2b-256 | e22e43f77ed4588866913db9a518f70a458e4bb2ac36d09bc3abf9fa9a7a2b47 |

File details

Details for the file brain_system-0.2.0-py3-none-any.whl.

File metadata

  • Size: 40.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 8203c1dbf334cc2732eb63bc41cea3c5ae59399773fc33c6465822b928532a2d |
| MD5 | ccfa7046c8b017a9e26ea721c59dbfcb |
| BLAKE2b-256 | f03e071e4a6de3c130fa96ee6cbbe2a472e8721d79c43b257a60cb8008ff5fd0 |
