
A multi-agent cognitive architecture powered by LangGraph: five specialized AI agents modeled after the human brain.

Project description

🧠 Brain System

A Multi-Agent Cognitive Architecture Powered by LangGraph

Five specialized AI agents, modeled after the human brain, collaborate to process your input and generate thoughtful, nuanced responses.

Python 3.10+ PyPI LangGraph License: MIT


🧩 How It Works

Brain System maps biological brain functions to specialized AI agents that process every input in parallel, much like the human brain:

graph LR
    A[User Input] --> B[🔵 Sensory Agent<br>Thalamus]
    B --> C[🟣 Memory Agent<br>Hippocampus]
    B --> D[🟢 Logic Agent<br>Frontal Lobe]
    B --> E[🔴 Emotional Agent<br>Amygdala]
    C --> F[🟡 Executive Agent<br>Prefrontal Cortex]
    D --> F
    E --> F
    F --> G[Final Response]
Agent | Brain Analog | What It Does
Sensory | Thalamus & Sensory Cortex | Multi-layer signal classification, pattern recognition, salience detection
Memory | Hippocampus | Persona biography retrieval via ZVec semantic search
Logic | Left Frontal Lobe & DLPFC | Deductive/inductive reasoning, fallacy detection, counter-arguments
Emotional | Amygdala, Insula & Cingulate | Emotional profiling, empathy reading, ethical safety checks
Executive | Full Prefrontal Cortex | Conflict resolution between agents, response calibration, integrated output
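As a rough illustration of the fan-out/fan-in flow above, here is a toy pipeline in plain Python. The stand-in agent functions are invented for illustration only; they are not brain_system internals:

```python
# Toy sketch of the diagram above: sensory gate first, three agents in
# parallel, executive fan-in. The agent bodies are fake stand-ins.
from typing import Callable, Dict

def sensory(text: str) -> str:
    # Thalamus stand-in: a very rough input classification
    return "question" if text.rstrip().endswith("?") else "statement"

def memory(text: str) -> str:
    return f"recalled context for: {text[:30]}"

def logic(text: str) -> str:
    return f"logical analysis of: {text[:30]}"

def emotional(text: str) -> str:
    return f"emotional reading of: {text[:30]}"

def executive(signals: Dict[str, str]) -> str:
    # PFC stand-in: merge the parallel signals into one response
    return " | ".join(f"{name}={out}" for name, out in sorted(signals.items()))

def think(text: str) -> str:
    kind = sensory(text)                      # sensory gate runs first
    parallel: Dict[str, Callable[[str], str]] = {
        "memory": memory, "logic": logic, "emotional": emotional,
    }
    signals = {name: fn(text) for name, fn in parallel.items()}
    signals["sensory"] = kind
    return executive(signals)                 # fan-in at the executive stage

print(think("What is justice?"))
```

In the real system the parallel stage is a compiled LangGraph state graph rather than a dictionary loop, but the shape of the data flow is the same.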

🎭 Persona Mode

The Brain can embody famous personalities, or anyone you provide a biography for.

Pre-curated Personas

8 personalities sourced from their autobiographies are available out of the box, with instant loading and no LLM call required:

Persona | ID | Source
🕊️ Mahatma Gandhi | gandhi | The Story of My Experiments with Truth
🔬 Albert Einstein | einstein | The World As I See It
✊ Nelson Mandela | mandela | Long Walk to Freedom
⚗️ Marie Curie | curie | Madame Curie by Ève Curie
🎨 Leonardo da Vinci | davinci | Personal Notebooks
✍️ Martin Luther King Jr. | mlk | Stride Toward Freedom
⚡ Nikola Tesla | tesla | My Inventions
💻 Ada Lovelace | lovelace | Notes on the Analytical Engine

Custom Personas

Upload any biography or autobiography (.txt / .pdf), and the system extracts personality traits, speech patterns, reasoning style, and emotional tendencies, then injects tailored context into each agent. The Logic Agent thinks in their reasoning style, the Emotional Agent mirrors their emotional tendencies, and the Executive Agent speaks in their voice.

Example: Select Nelson Mandela → ask about dealing with conflict → get a response reflecting his values of reconciliation, strategic patience, and ubuntu philosophy.
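The role-specific injection described above can be sketched in plain Python. The profile keys and the inject() helper here are hypothetical, not brain_system's actual schema or API:

```python
# Hypothetical sketch of role-specific persona injection: each agent
# receives only the slice of the profile relevant to its function.
profile = {
    "name": "Nelson Mandela",
    "reasoning_style": "strategic patience and long-horizon thinking",
    "emotional_tendencies": "reconciliation over retaliation",
    "voice": "measured, dignified, inclusive",
}

# Which agent consumes which slice of the profile
slices = {
    "logic": "reasoning_style",
    "emotional": "emotional_tendencies",
    "executive": "voice",
}

def inject(agent: str, base_prompt: str) -> str:
    key = slices.get(agent)
    if key is None:
        return base_prompt  # e.g. the sensory agent stays persona-neutral
    return f"{base_prompt}\n[Persona: {profile['name']}: {profile[key]}]"

print(inject("logic", "Analyze the question."))
```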

📦 Install

pip install brain-system

For the web UI, install the optional extra: pip install brain-system[web]

🚀 Quick Start: Library Usage

from brain_system import BrainWrapper

# Create a Brain (choose provider: "gemini", "openai", or "ollama")
brain = BrainWrapper(provider="ollama", model_name="mistral")

# Process input through all 5 agents
result = brain.think("What is the meaning of justice?")

# Get the final synthesized response
print(result.response)

# Inspect individual agent signals
print(result.sensory)     # Thalamus: input classification
print(result.memory)      # Hippocampus: memory context
print(result.logic)       # Frontal Lobe: logical analysis
print(result.emotional)   # Amygdala: emotional analysis

Persona Mode

Use a pre-curated persona or upload a biography/autobiography (.txt or .pdf):

# Discover available personas
for p in brain.list_personas():
    print(f"{p['emoji']} {p['name']}  →  ID: {p['id']}")

# Pre-curated persona: loads instantly, no LLM call
brain.load_persona("gandhi")          # by ID
brain.load_persona("einstein")

# Custom persona โ€” pass a file path
brain.load_persona("gandhi_autobiography.pdf")

result = brain.think("How should we deal with injustice?")
print(result.response)    # Responds in persona's voice

brain.clear_persona()     # Revert to default

Memory Management

# Custom memory file location
brain = BrainWrapper(provider="gemini", memory_path="./my_memory.json")

# Clear all stored memories
brain.clear_memory()

🔌 Wrap Your Own Agent

Already have an agent? Wrap it with Brain's cognitive pipeline using AgentWrapper. Your function receives a BrainContext with all four preprocessing agent signals:

from brain_system import AgentWrapper, BrainContext

def my_agent(query: str, ctx: BrainContext) -> str:
    """Your agent logic โ€” use brain signals however you want."""
    return f"Logic: {ctx.logic[:200]}\nEmotion: {ctx.emotional[:200]}"

agent = AgentWrapper(my_agent, provider="openai")
result = agent.run("Should AI be regulated?")
print(result.response)       # Your agent's response
print(result.sensory)         # Brain's sensory signal (also available)

Also works as a decorator:

@AgentWrapper(provider="ollama", model_name="mistral")
def my_agent(query: str, ctx: BrainContext) -> str:
    return f"Based on logic: {ctx.logic[:200]}"

result = my_agent("What is justice?")

API Reference

Class / Method | Description
BrainWrapper(provider, model_name, memory_path) | Create a standalone Brain instance
.think(input) → BrainResult | Process input through the 5-agent pipeline
.load_persona(id_or_path) | Load a pre-curated persona by ID or a custom .txt/.pdf
.list_personas() | Returns a list of available pre-curated persona dicts
.clear_persona() | Remove the active persona
.clear_memory() | Erase all long-term memories
.persona_active | bool: is a persona loaded?
.persona_name | Name of the active persona
AgentWrapper(agent_fn, provider, ...) | Wrap your agent with brain processing
.run(input) → BrainResult | Run the brain plus your agent
BrainContext | Dataclass with .query, .sensory, .memory, .logic, .emotional
BrainResult.response | Final synthesized response
BrainResult.agent_signals | dict of each agent's raw output
BrainResult.sensory / .memory / .logic / .emotional | Shortcut accessors

See examples/ for complete usage scripts.


๐Ÿ–ฅ๏ธ Development Setup

Clone & Install

git clone https://github.com/shivamtyagi18/BRAIN.git
cd BRAIN
pip install -e ".[web,dev]"

Configure (Optional)

Create a .env file in the project root for cloud providers:

# Only needed if using Gemini or OpenAI
GOOGLE_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here

No API key is needed for Ollama; it runs entirely on your local machine.
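For reference, loading the .env file above amounts to roughly the following. This is a minimal stdlib sketch under the assumption of simple KEY=value lines; the project may well use a library such as python-dotenv instead:

```python
# Illustrative .env loader: reads KEY=value lines into os.environ
# without overwriting variables that are already set.
import os

def load_env(path: str = ".env") -> None:
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```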

Run

Web UI

python -m brain_system.app

Open http://localhost:5001 in your browser.

Command Line

brain-cli

๐Ÿ–ฅ๏ธ Web Interface

The web UI features:

  • Provider selection: choose Gemini, OpenAI, or Ollama at startup
  • Pre-curated personas: pick from 8 famous personalities in a card grid
  • Custom persona upload: drag & drop a .txt or .pdf biography
  • Live chat: dark-mode interface with agent activity indicators
  • Agent transparency: expand each agent's internal reasoning with "Show agent signals"
  • Mid-conversation persona switching: change or clear the persona without restarting
  • New Chat: full reset button to start fresh
  • Clear Memory: wipe stored memories without restarting

🤖 Supported LLM Providers

Provider | Requirements | Best For
Ollama | Ollama installed locally | Privacy, offline use, no cost
Gemini | GOOGLE_API_KEY in .env | High-quality responses
OpenAI | OPENAI_API_KEY in .env | GPT-4 class models

Using Ollama (Local)

# Install Ollama, then pull a model:
ollama pull mistral

# For uncensored output, try:
ollama pull dolphin-mistral

๐Ÿ“ Project Structure

brain-system/
├── pyproject.toml                  # Package config & dependencies
├── run.sh                          # Single-command launcher
├── examples/
│   ├── basic_usage.py              # Minimal library usage
│   ├── persona_mode.py             # Persona loading example
│   └── custom_provider.py          # Provider switching example
└── brain_system/
    ├── __init__.py                 # Public API exports
    ├── wrapper.py                  # BrainWrapper, the developer entry point
    ├── app.py                      # Flask web server (optional)
    ├── main.py                     # CLI entry point
    ├── agents/
    │   ├── base_agent.py           # Abstract base with persona injection
    │   ├── sensory_agent.py        # Input parsing (Thalamus)
    │   ├── memory_agent.py         # Context retrieval (Hippocampus)
    │   ├── emotional_agent.py      # Sentiment analysis (Amygdala)
    │   ├── logic_agent.py          # Reasoning (Frontal Lobe)
    │   └── executive_agent.py      # Decision synthesis (PFC)
    ├── core/
    │   ├── orchestrator.py         # LangGraph workflow engine
    │   ├── llm_interface.py        # Multi-provider LLM factory
    │   ├── vector_memory.py        # ZVec persona biography search
    │   ├── working_memory.py       # Conversation context buffer
    │   ├── memory_store.py         # Legacy memory (JSON)
    │   ├── document_loader.py      # TXT/PDF document ingestion
    │   └── persona.py              # Persona extraction & injection
    ├── personas/
    │   ├── __init__.py             # Package exports
    │   └── persona_registry.py     # 8 pre-curated famous persona profiles
    └── web/
        ├── templates/index.html    # Chat interface
        └── static/
            ├── css/style.css       # Dark-mode theme
            └── js/app.js           # Frontend logic

🔧 Architecture Highlights

  • LangGraph Orchestration: agents run as nodes in a compiled state graph, with parallel execution for Memory, Logic, and Emotional processing
  • Modular LLM Factory: swap providers with a single parameter; no code changes needed
  • Dual Memory Architecture: Working Memory (conversation buffer) plus a ZVec-powered Hippocampus (semantic persona biography search with 384-dim sentence-transformer embeddings)
  • Persona Injection: role-specific context; each agent gets the aspects of the persona profile tailored to its function
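The dual memory idea can be illustrated with a toy sketch: a bounded working-memory buffer plus a miniature vector store searched by cosine similarity. The 3-dim vectors and sample passages below are fake stand-ins for ZVec's 384-dim sentence-transformer embeddings:

```python
# Toy dual-memory sketch: short-term buffer + tiny semantic store.
from collections import deque
import math

working_memory = deque(maxlen=10)   # recent conversation turns

store = [                           # (embedding, passage) pairs, invented
    ([1.0, 0.0, 0.0], "Mandela spent 27 years in prison."),
    ([0.0, 1.0, 0.0], "Ubuntu means 'I am because we are'."),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall(query_vec, k=1):
    # Semantic search: rank stored passages by cosine similarity
    ranked = sorted(store, key=lambda item: cosine(item[0], query_vec), reverse=True)
    return [text for _, text in ranked[:k]]

working_memory.append("user: How should we deal with injustice?")
print(recall([0.9, 0.1, 0.0]))   # passage closest to the query vector
```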

๐Ÿค Contributing

Contributions are welcome! Some ideas:

  • Additional agents: add a Creativity Agent, Social Agent, or Moral Reasoning Agent
  • Streaming responses: real-time token streaming in the web UI
  • Multi-turn persona: let the persona evolve over the conversation
  • Voice interface: add speech-to-text input and text-to-speech output
  • RAG over full books: index entire autobiographies (not just profiles) for deeper persona embodiment

๐Ÿ“ License

MIT License; see LICENSE for details.


Built with 🧠 by mapping neuroscience to multi-agent AI



Download files

Download the file for your platform.

Source Distribution

brain_system-0.4.0.tar.gz (425.5 kB)

Uploaded Source

Built Distribution


brain_system-0.4.0-py3-none-any.whl (57.6 kB)

Uploaded Python 3

File details

Details for the file brain_system-0.4.0.tar.gz.

File metadata

  • Download URL: brain_system-0.4.0.tar.gz
  • Upload date:
  • Size: 425.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for brain_system-0.4.0.tar.gz
Algorithm | Hash digest
SHA256 | b32bce13a3167ce11df487569353d27df5509f69d38a21687318aa06e2c5ce8a
MD5 | 950401703b8a0d487c1f4fcc348dacd8
BLAKE2b-256 | 94b03ecfaed2d237460b821ddee7dfd449d9535c7575b443719ff748df738c9c


File details

Details for the file brain_system-0.4.0-py3-none-any.whl.

File metadata

  • Download URL: brain_system-0.4.0-py3-none-any.whl
  • Upload date:
  • Size: 57.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.14

File hashes

Hashes for brain_system-0.4.0-py3-none-any.whl
Algorithm | Hash digest
SHA256 | 93be64c1b07eafb78e47f6fb7b1dc2832bd89ec3826d0087536aeed497bc4196
MD5 | a9b5bb9ea9f8bfb2c91f812fd6df33b6
BLAKE2b-256 | 803bd0b136df7d7fb5aec966ea933d2c001567da64d23bb2e5bdd0ea269f9855

