# Brain System

**A Multi-Agent Cognitive Architecture Powered by LangGraph**

Five specialized AI agents, modeled after the human brain, collaborate to process your input and generate thoughtful, nuanced responses.
## How It Works

Brain System maps biological brain functions to specialized AI agents that process every input in parallel, just like the human brain:
```mermaid
graph LR
    A[User Input] --> B[Sensory Agent<br>Thalamus]
    B --> C[Memory Agent<br>Hippocampus]
    B --> D[Logic Agent<br>Frontal Lobe]
    B --> E[Emotional Agent<br>Amygdala]
    C --> F[Executive Agent<br>Prefrontal Cortex]
    D --> F
    E --> F
    F --> G[Final Response]
```
| Agent | Brain Analog | What It Does |
|---|---|---|
| Sensory | Thalamus & Sensory Cortex | Multi-layer signal classification, pattern recognition, salience detection |
| Memory | Hippocampus | Persona biography retrieval via ZVec semantic search |
| Logic | Left Frontal Lobe & DLPFC | Deductive/inductive reasoning, fallacy detection, counter-arguments |
| Emotional | Amygdala, Insula & Cingulate | Emotional profiling, empathy reading, ethical safety checks |
| Executive | Full Prefrontal Cortex | Conflict resolution between agents, response calibration, integrated output |
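The fan-out in the diagram and table above can be sketched in plain Python. This is purely illustrative: the real package compiles the pipeline as a LangGraph state graph, and the agent functions below are hypothetical stand-ins for the LLM-backed agents, not the package's API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the real LLM-backed agents.
def sensory(text: str) -> str:
    return f"classified:{text}"

def memory(signal: str) -> str:
    return f"memory-context for {signal}"

def logic(signal: str) -> str:
    return f"logical analysis of {signal}"

def emotional(signal: str) -> str:
    return f"emotional read of {signal}"

def executive(mem: str, log: str, emo: str) -> str:
    # Resolve the three signals into one calibrated answer.
    return " | ".join([mem, log, emo])

def think(text: str) -> str:
    signal = sensory(text)                  # Thalamus runs first
    with ThreadPoolExecutor() as pool:      # Hippocampus, Frontal Lobe,
        futures = [pool.submit(f, signal)   # and Amygdala run in parallel
                   for f in (memory, logic, emotional)]
        mem, log, emo = [f.result() for f in futures]
    return executive(mem, log, emo)         # Prefrontal Cortex synthesizes
```

The key structural point is that the three middle agents all consume the same sensory signal concurrently, and only the executive step waits on all of them.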
## Persona Mode

The Brain can embody famous personalities, or anyone you provide a biography for.

### Pre-curated Personas

8 personalities sourced from their autobiographies are available out of the box: instant loading, no LLM call required.
| Persona | ID | Source |
|---|---|---|
| Mahatma Gandhi | `gandhi` | The Story of My Experiments with Truth |
| Albert Einstein | `einstein` | The World As I See It |
| Nelson Mandela | `mandela` | Long Walk to Freedom |
| Marie Curie | `curie` | Madame Curie by Ève Curie |
| Leonardo da Vinci | `davinci` | Personal Notebooks |
| Martin Luther King Jr. | `mlk` | Stride Toward Freedom |
| Nikola Tesla | `tesla` | My Inventions |
| Ada Lovelace | `lovelace` | Notes on the Analytical Engine |
### Custom Personas

Upload any biography or autobiography (`.txt` / `.pdf`), and the system extracts personality traits, speech patterns, reasoning style, and emotional tendencies, then injects tailored context into each agent. The Logic Agent thinks in their reasoning style, the Emotional Agent mirrors their emotional tendencies, and the Executive Agent speaks in their voice.

Example: Select Nelson Mandela → ask about dealing with conflict → get a response reflecting his values of reconciliation, strategic patience, and ubuntu philosophy.
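The role-specific injection described above can be pictured as slicing one extracted profile into per-agent context strings. The profile shape and helper below are hypothetical, a sketch of the idea rather than the package's internal schema:

```python
# Hypothetical persona profile; the package's real extraction schema may differ.
profile = {
    "name": "Nelson Mandela",
    "reasoning_style": "strategic patience, long-term framing",
    "emotional_tendencies": "reconciliation, measured calm, ubuntu",
    "voice": "formal, warm, first-person",
}

def context_for(agent: str, profile: dict) -> str:
    """Give each agent only the slice of the profile relevant to its role."""
    slices = {
        "logic": profile["reasoning_style"],
        "emotional": profile["emotional_tendencies"],
        "executive": profile["voice"],
    }
    return f"As {profile['name']}, adopt: {slices[agent]}"
```

So the Logic Agent would see only the reasoning style, while the Executive Agent sees the voice description it should speak in.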
## Install

```bash
pip install brain-system
```

For the web UI, install the optional extra:

```bash
pip install brain-system[web]
```
## Quick Start: Library Usage

```python
from brain_system import BrainWrapper

# Create a Brain (choose provider: "gemini", "openai", or "ollama")
brain = BrainWrapper(provider="ollama", model_name="mistral")

# Process input through all 5 agents
result = brain.think("What is the meaning of justice?")

# Get the final synthesized response
print(result.response)

# Inspect individual agent signals
print(result.sensory)    # Thalamus → input classification
print(result.memory)     # Hippocampus → memory context
print(result.logic)      # Frontal Lobe → logical analysis
print(result.emotional)  # Amygdala → emotional analysis
```
### Persona Mode

Use a pre-curated persona or upload a biography/autobiography (`.txt` or `.pdf`):

```python
# Discover available personas
for p in brain.list_personas():
    print(f"{p['emoji']} {p['name']} → ID: {p['id']}")

# Pre-curated persona: loads instantly, no LLM call
brain.load_persona("gandhi")    # by ID
brain.load_persona("einstein")

# Custom persona: pass a file path
brain.load_persona("gandhi_autobiography.pdf")

result = brain.think("How should we deal with injustice?")
print(result.response)  # Responds in persona's voice

brain.clear_persona()   # Revert to default
```
### Memory Management

```python
# Custom memory file location
brain = BrainWrapper(provider="gemini", memory_path="./my_memory.json")

# Clear all stored memories
brain.clear_memory()
```
## Wrap Your Own Agent

Already have an agent? Wrap it with Brain's cognitive pipeline using `AgentWrapper`. Your function receives a `BrainContext` with all four preprocessing agent signals:

```python
from brain_system import AgentWrapper, BrainContext

def my_agent(query: str, ctx: BrainContext) -> str:
    """Your agent logic: use brain signals however you want."""
    return f"Logic: {ctx.logic[:200]}\nEmotion: {ctx.emotional[:200]}"

agent = AgentWrapper(my_agent, provider="openai")
result = agent.run("Should AI be regulated?")

print(result.response)  # Your agent's response
print(result.sensory)   # Brain's sensory signal (also available)
```

Also works as a decorator:

```python
@AgentWrapper(provider="ollama", model_name="mistral")
def my_agent(query: str, ctx: BrainContext) -> str:
    return f"Based on logic: {ctx.logic[:200]}"

result = my_agent("What is justice?")
```
## API Reference

| Class / Method | Description |
|---|---|
| `BrainWrapper(provider, model_name, memory_path)` | Create a standalone Brain instance |
| `.think(input)` → `BrainResult` | Process input through the 5-agent pipeline |
| `.load_persona(id_or_path)` | Load a pre-curated persona by ID or a custom `.txt`/`.pdf` |
| `.list_personas()` | Returns a list of available pre-curated persona dicts |
| `.clear_persona()` | Remove the active persona |
| `.clear_memory()` | Erase all long-term memories |
| `.persona_active` | `bool`: is a persona loaded? |
| `.persona_name` | Name of the active persona |
| `AgentWrapper(agent_fn, provider, ...)` | Wrap your agent with brain processing |
| `.run(input)` → `BrainResult` | Run brain + your agent |
| `BrainContext` | Dataclass with `.query`, `.sensory`, `.memory`, `.logic`, `.emotional` |
| `BrainResult.response` | Final synthesized response |
| `BrainResult.agent_signals` | dict of each agent's raw output |
| `BrainResult.sensory` / `.memory` / `.logic` / `.emotional` | Shortcut accessors |

See `examples/` for complete usage scripts.
## Development Setup

### Clone & Install

```bash
git clone https://github.com/shivamtyagi18/BRAIN.git
cd BRAIN
pip install -e ".[web,dev]"
```

### Configure (Optional)

Create a `.env` file in the project root for cloud providers:

```bash
# Only needed if using Gemini or OpenAI
GOOGLE_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
```

No API key is needed for Ollama; it runs entirely on your local machine.

### Run

Web UI:

```bash
python -m brain_system.app
```

Open http://localhost:5001 in your browser.

Command line:

```bash
brain-cli
```
## Web Interface

The web UI features:

- Provider selection: choose Gemini, OpenAI, or Ollama at startup
- Pre-curated personas: pick from 8 famous personalities in a card grid
- Custom persona upload: drag & drop a `.txt` or `.pdf` biography
- Live chat: dark-mode interface with agent activity indicators
- Agent transparency: expand each agent's internal reasoning with "Show agent signals"
- Mid-conversation persona switching: change or clear the persona without restarting
- New Chat: full reset button to start fresh
- Clear Memory: wipe stored memories without restarting
## Supported LLM Providers

| Provider | Requirements | Best For |
|---|---|---|
| Ollama | Ollama installed locally | Privacy, offline use, no cost |
| Gemini | `GOOGLE_API_KEY` in `.env` | High-quality responses |
| OpenAI | `OPENAI_API_KEY` in `.env` | GPT-4 class models |
### Using Ollama (Local)

```bash
# Install Ollama, then pull a model:
ollama pull mistral

# For uncensored output, try:
ollama pull dolphin-mistral
```
## Project Structure

```
brain-system/
├── pyproject.toml             # Package config & dependencies
├── run.sh                     # Single-command launcher
├── examples/
│   ├── basic_usage.py         # Minimal library usage
│   ├── persona_mode.py        # Persona loading example
│   └── custom_provider.py     # Provider switching example
└── brain_system/
    ├── __init__.py            # Public API exports
    ├── wrapper.py             # BrainWrapper: developer entry point
    ├── app.py                 # Flask web server (optional)
    ├── main.py                # CLI entry point
    ├── agents/
    │   ├── base_agent.py      # Abstract base with persona injection
    │   ├── sensory_agent.py   # Input parsing (Thalamus)
    │   ├── memory_agent.py    # Context retrieval (Hippocampus)
    │   ├── emotional_agent.py # Sentiment analysis (Amygdala)
    │   ├── logic_agent.py     # Reasoning (Frontal Lobe)
    │   └── executive_agent.py # Decision synthesis (PFC)
    ├── core/
    │   ├── orchestrator.py    # LangGraph workflow engine
    │   ├── llm_interface.py   # Multi-provider LLM factory
    │   ├── vector_memory.py   # ZVec persona biography search
    │   ├── working_memory.py  # Conversation context buffer
    │   ├── memory_store.py    # Legacy memory (JSON)
    │   ├── document_loader.py # TXT/PDF document ingestion
    │   └── persona.py         # Persona extraction & injection
    ├── personas/
    │   ├── __init__.py        # Package exports
    │   └── persona_registry.py # 8 pre-curated famous persona profiles
    └── web/
        ├── templates/index.html # Chat interface
        └── static/
            ├── css/style.css  # Dark-mode theme
            └── js/app.js      # Frontend logic
```
## Architecture Highlights

- LangGraph Orchestration: agents run as nodes in a compiled state graph, with parallel execution for Memory, Logic, and Emotional processing
- Modular LLM Factory: swap providers with a single parameter; no code changes needed
- Dual Memory Architecture: Working Memory (conversation buffer) plus a ZVec-powered Hippocampus (semantic persona biography search with 384-dim sentence-transformer embeddings)
- Persona Injection: role-specific context; each agent gets different aspects of the persona profile tailored to its function
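The Hippocampus bullet above reduces to nearest-neighbour search over embeddings. A toy cosine-similarity version of that retrieval step, using fabricated 3-dim vectors in place of the real 384-dim sentence-transformer embeddings (the `store` contents and `retrieve` helper are illustrative, not the ZVec API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-dim "embeddings" of biography snippets; the real store
# holds 384-dim sentence-transformer vectors.
store = {
    "spent 27 years in prison before negotiating peace": [0.9, 0.1, 0.0],
    "believed in reconciliation over revenge":           [0.7, 0.6, 0.1],
    "trained as a boxer in his youth":                   [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k snippets whose embeddings are closest to the query."""
    ranked = sorted(store, key=lambda s: cosine(query_vec, store[s]),
                    reverse=True)
    return ranked[:k]
```

A query embedded near the first vector would pull back the imprisonment snippet as persona context for the Memory Agent.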
## Contributing

Contributions are welcome! Some ideas:

- Additional agents: a Creativity Agent, Social Agent, or Moral Reasoning Agent
- Streaming responses: real-time token streaming in the web UI
- Multi-turn persona: let the persona evolve over the conversation
- Voice interface: speech-to-text input and text-to-speech output
- RAG over full books: index entire autobiographies (not just profiles) for deeper persona embodiment
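For anyone picking up the last idea, indexing a full book starts with a chunker. A minimal sketch; the window size and overlap are arbitrary choices for illustration, not anything the package ships:

```python
def chunk_text(text: str, size: int = 500, overlap: int = 50):
    """Split a book into overlapping character windows for embedding/indexing.

    Overlap keeps sentences that straddle a boundary retrievable
    from at least one chunk.
    """
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk would then be embedded and stored alongside the existing persona profiles for retrieval at query time.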
## License

MIT License; see LICENSE for details.