Dasein
Universal memory for agentic AI. Attach a brain to any LangChain/LangGraph agent in a single line.
Dasein learns from your agent's execution history and automatically injects learned rules to improve performance, reduce costs, and increase reliability across runs.
Features
✨ Zero-friction integration - Wrap any LangChain or LangGraph agent in one line
🧠 Automatic learning - Agents learn from successes and failures
📊 Performance tracking - Built-in token usage, timing, and success metrics
🔁 Retry logic - Intelligent retry with learned optimizations
🔍 Execution traces - Detailed step-by-step visibility into agent behavior
☁️ Cloud-powered - Distributed rule synthesis and storage
Installation
pip install dasein-core
Or install from source:
git clone https://github.com/nickswami/dasein-core.git
cd dasein-core
pip install -e .
🚀 Try It Now in Colab
🎉 Zero setup required! Try all three examples in your browser:
Three complete examples with automatic learning:
🗄️ SQL Agent • 🌐 Browser Agent • 🔬 Deep Research
30-50% token reduction • Optimized navigation • 20-40% multi-agent savings
Quick Start
Basic Usage
from dasein import cognate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_sql_agent
from langchain_community.agent_toolkits import SQLDatabaseToolkit
# Create your agent as usual
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
toolkit = SQLDatabaseToolkit(db=your_database, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, agent_type="tool-calling")
# Wrap with Dasein - that's it!
agent = cognate(agent)
# Use exactly like the original
result = agent.run("Show me the top 5 customers by revenue")
With Performance Tracking
from dasein import cognate
# Enable automatic retry and performance comparison
agent = cognate(
    your_agent,
    retry=2,                    # Run twice to learn and improve
    performance_tracking=True,  # Show before/after metrics
)
result = agent.run("your query")
# 🎯 Dasein automatically shows improvement metrics
Advanced: Custom Optimization Weights
from dasein import cognate
# Customize what Dasein optimizes for
agent = cognate(
    your_agent,
    weights={
        "w1": 2.0,  # Heavily favor successful rules
        "w2": 0.5,  # Less emphasis on turn count
        "w3": 1.0,  # Standard uncertainty penalty
        "w4": 3.0,  # Heavily optimize for token efficiency
        "w5": 0.1,  # Minimal time emphasis
    },
)
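Dasein's actual scoring internals are not documented here, so the following is a hypothetical sketch of how such a weighted rule score might combine the five terms implied by the comments above: success rate rewarded, with turn count, uncertainty, token usage, and time penalized. All field names are illustrative assumptions, not Dasein's real rule schema.

```python
def score_rule(rule: dict, weights: dict) -> float:
    """Hypothetical rule score: higher is better.

    Assumes each metric is pre-normalized to a comparable 0-1 scale.
    """
    return (
        weights["w1"] * rule["success_rate"]   # favor rules that worked
        - weights["w2"] * rule["avg_turns"]    # penalize extra agent turns
        - weights["w3"] * rule["uncertainty"]  # penalize poorly-tested rules
        - weights["w4"] * rule["avg_tokens"]   # penalize token-hungry rules
        - weights["w5"] * rule["avg_seconds"]  # penalize slow rules
    )

weights = {"w1": 2.0, "w2": 0.5, "w3": 1.0, "w4": 3.0, "w5": 0.1}
frugal = {"success_rate": 0.9, "avg_turns": 0.3, "uncertainty": 0.2,
          "avg_tokens": 0.1, "avg_seconds": 0.2}
greedy = {"success_rate": 0.9, "avg_turns": 0.3, "uncertainty": 0.2,
          "avg_tokens": 0.8, "avg_seconds": 0.2}
```

With w4 set to 3.0 as in the example above, the token-frugal rule outranks an otherwise identical token-hungry one.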
Architecture
Dasein uses a cloud-first architecture for rule learning and synthesis:
┌─────────────────┐
│   Your Agent    │
│  (LangChain/    │
│   LangGraph)    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Dasein Wrapper  │ ◄── cognate()
│ - Trace Capture │
│ - Rule Injection│
└────────┬────────┘
         │
    ┌────┴────┐
    ▼         ▼
┌────────┐ ┌────────┐
│Pre-Run │ │Post-Run│
│Service │ │Service │
│        │ │        │
│Recalls │ │Learns  │
│Rules   │ │Rules   │
└────────┘ └────────┘
How It Works
1. Pre-Run: Dasein queries cloud services for relevant learned rules based on the task
2. Execution: Rules are injected into the agent's prompts/tools at optimal injection points
3. Trace Capture: Every LLM call, tool invocation, and decision is captured
4. Post-Run: Traces are sent to cloud services for rule synthesis and learning
5. Next Run: Improved rules are automatically available
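The lifecycle above can be sketched as a plain-Python mock. The service calls, rule format, and trace schema below are stand-ins for illustration, not Dasein's real internals:

```python
def run_with_learning(agent_fn, task, recall_rules, learn_rules):
    """Mock of the wrapper lifecycle; agent_fn(prompt) -> (answer, trace)."""
    # 1. Pre-run: recall rules relevant to this task
    rules = recall_rules(task)
    # 2. Execution: inject recalled rules at the prompt's injection point
    prompt = "\n".join(rules) + "\n" + task if rules else task
    # 3. Trace capture: the agent returns its answer plus a step trace
    answer, trace = agent_fn(prompt)
    # 4. Post-run: ship the trace off for rule synthesis
    learn_rules(task, trace)
    # 5. Next run: recall_rules(task) now returns the improved rules
    return answer

store = {}
def recall(task): return store.get(task, [])
def learn(task, trace): store[task] = [f"Rule learned from {len(trace)} steps"]
def fake_agent(prompt): return ("42", [{"step": 1, "tokens": 10}])

run_with_learning(fake_agent, "q", recall, learn)  # first run: no rules yet
# On the second run, the learned rule is injected into the prompt.
```

The key design point the mock captures is that learning happens out of band: the answer is returned before (or independently of) rule synthesis, so post-run learning never blocks the caller.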
API Reference
Core Functions
cognate(agent, weights=None, verbose=False, retry=0, performance_tracking=False, rule_trace=False)
Wrap any LangChain/LangGraph agent with Dasein's learning capabilities.
Parameters:
- agent - LangChain or LangGraph agent instance
- weights (dict) - Custom optimization weights for rule selection (w1-w5)
- verbose (bool) - Enable detailed debug logging
- retry (int) - Number of retries with learning (0 = single run, 2 = run twice with improvement)
- performance_tracking (bool) - Show before/after performance metrics
- rule_trace (bool) - Show detailed rule application trace
Returns: Wrapped agent with identical interface to the original
print_trace()
Display the execution trace of the last agent run.
get_trace()
Retrieve the execution trace as a list of dictionaries.
Returns: List[Dict] - Trace steps with timestamps, tokens, and decisions
clear_trace()
Clear the current execution trace.
inject_hint(hint: str)
Manually inject a hint/rule for the next agent run.
Parameters:
- hint (str) - The hint text to inject
reset_brain()
Clear all local state and event storage.
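Since get_trace() returns a plain list of dicts, ordinary Python is enough to analyze a run. The field names below are assumptions based on the description above (timestamps, tokens, decisions), not a guaranteed schema:

```python
# A sample of the kind of structure get_trace() is described as returning;
# field names here are illustrative assumptions.
trace = [
    {"timestamp": 0.0, "step": "llm_call", "tokens": 512, "decision": "plan"},
    {"timestamp": 1.2, "step": "tool_call", "tokens": 0, "decision": "sql_db_query"},
    {"timestamp": 2.4, "step": "llm_call", "tokens": 430, "decision": "answer"},
]

total_tokens = sum(s["tokens"] for s in trace)
llm_calls = [s for s in trace if s["step"] == "llm_call"]
print(f"{total_tokens} tokens across {len(llm_calls)} LLM calls")
```

The same pattern works for timing analysis: diff consecutive timestamp values to find the slowest step.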
Supported Frameworks
- ✅ LangChain Agents (all agent types)
- ✅ LangGraph Agents (CompiledStateGraph)
- ✅ Custom agents implementing standard interfaces
Examples
See the examples/ directory for complete examples:
- SQL Agent - Learn query patterns for a Chinook database
- Browser Agent - Learn web scraping strategies
- Research Agent - Multi-agent research coordination
Verbose Mode
For debugging, enable verbose logging:
agent = cognate(your_agent, verbose=True)
This shows detailed information about:
- Rule retrieval from cloud services
- Rule injection points and content
- Trace capture steps
- Post-run learning triggers
Requirements
- Python 3.8+
- LangChain 0.1.0+
- LangChain Community 0.1.0+
- LangChain Google GenAI 0.0.6+
See pyproject.toml for complete dependency list.
Configuration
Dasein uses cloud services for rule synthesis and storage. Configure service endpoints via environment variables:
export DASEIN_PRE_RUN_URL="https://your-pre-run-service.com"
export DASEIN_POST_RUN_URL="https://your-post-run-service.com"
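If you prefer configuring from Python (e.g. in a notebook) rather than the shell, setting the same variables via os.environ before importing Dasein should be equivalent, assuming the endpoints are read from the environment as shown above. The URLs are placeholders; substitute the ones the Dasein team provides.

```python
import os

# Placeholder endpoints - use the URLs provided by the Dasein team.
os.environ.setdefault("DASEIN_PRE_RUN_URL", "https://your-pre-run-service.com")
os.environ.setdefault("DASEIN_POST_RUN_URL", "https://your-post-run-service.com")
```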
Contact the Dasein team for cloud service access.
Performance
Dasein is designed for minimal overhead:
- Pre-run: ~100-200ms for rule retrieval
- Runtime: <1% overhead for trace capture
- Post-run: Async - doesn't block your code
The benefits far outweigh the costs:
- 🎯 30-50% token reduction on repeated tasks
- 🎯 Fewer failed runs through learned error handling
- 🎯 Faster execution with optimized tool usage
Contributing
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
License
MIT License - see LICENSE file for details.
Troubleshooting
Common Issues in Colab/Jupyter
Q: I see timeout warnings for dasein-pre-run and dasein-post-run services
A: These warnings can appear on first connection while the cloud services wake up (cold start). The services are fully public and will work after a brief initialization period. Your agent will continue running and learning will activate automatically once the services respond.
Q: I see dependency conflict warnings
A: These are safe to ignore in Colab. The package will work correctly despite version mismatches with Colab's pre-installed packages.
Support
- 🐛 Issues: GitHub Issues
Citation
If you use Dasein in your research, please cite:
@software{dasein2025,
  title={Dasein: Universal Memory for Agentic AI},
  author={Dasein Team},
  year={2025},
  url={https://github.com/nickswami/dasein-core}
}
Built with ❤️ for the agentic AI community