

Dasein

Universal memory for agentic AI. Attach a brain to any LangChain/LangGraph agent in a single line.

Dasein learns from your agent's execution history and automatically injects learned rules to improve performance, reduce costs, and increase reliability across runs.

Features

✨ Zero-friction integration - Wrap any LangChain or LangGraph agent in one line
🧠 Automatic learning - Agents learn from successes and failures
📊 Performance tracking - Built-in token usage, timing, and success metrics
🔄 Retry logic - Intelligent retry with learned optimizations
🔍 Execution traces - Detailed step-by-step visibility into agent behavior
☁️ Cloud-powered - Distributed rule synthesis and storage

Installation

pip install dasein-core

Or install from source:

git clone https://github.com/nickswami/dasein-core.git
cd dasein-core
pip install -e .

📓 Try It Now in Colab

🚀 Zero setup required! Try all three examples in your browser:

Open In Colab

Three complete examples with automatic learning:

🗄️ SQL Agent • 🌐 Browser Agent • 🔍 Deep Research

30-50% token reduction • Optimized navigation • 20-40% multi-agent savings


Quick Start

Basic Usage

from dasein import cognate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_sql_agent
from langchain_community.agent_toolkits import SQLDatabaseToolkit

# Create your agent as usual
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
toolkit = SQLDatabaseToolkit(db=your_database, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, agent_type="tool-calling")

# Wrap with Dasein - that's it!
agent = cognate(agent)

# Use exactly like the original
result = agent.run("Show me the top 5 customers by revenue")

With Performance Tracking

from dasein import cognate

# Enable automatic retry and performance comparison
agent = cognate(
    your_agent,
    retry=2,  # Run twice to learn and improve
    performance_tracking=True  # Show before/after metrics
)

result = agent.run("your query")
# 🎯 Dasein automatically shows improvement metrics

Advanced: Custom Optimization Weights

from dasein import cognate

# Customize what Dasein optimizes for
agent = cognate(
    your_agent,
    weights={
        "w1": 2.0,  # Heavily favor successful rules
        "w2": 0.5,  # Less emphasis on turn count
        "w3": 1.0,  # Standard uncertainty penalty
        "w4": 3.0,  # Heavily optimize for token efficiency
        "w5": 0.1   # Minimal time emphasis
    }
)
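Reading the weight names together with their comments, the selection score is presumably a weighted combination of per-rule statistics. A minimal sketch of such a scoring function (hypothetical names and sign conventions, not Dasein's actual internals):

```python
def score_rule(stats, weights):
    """Combine normalized per-rule statistics into a selection score.

    Hypothetical illustration: higher success and token savings raise
    the score; turns, uncertainty, and wall-clock time lower it.
    """
    return (
        weights["w1"] * stats["success_rate"]    # favor successful rules
        - weights["w2"] * stats["turn_count"]    # fewer agent turns
        - weights["w3"] * stats["uncertainty"]   # penalize unproven rules
        + weights["w4"] * stats["token_savings"] # reward token efficiency
        - weights["w5"] * stats["time_cost"]     # de-emphasize latency
    )

weights = {"w1": 2.0, "w2": 0.5, "w3": 1.0, "w4": 3.0, "w5": 0.1}
stats = {"success_rate": 0.9, "turn_count": 0.2, "uncertainty": 0.1,
         "token_savings": 0.4, "time_cost": 0.3}
print(round(score_rule(stats, weights), 2))  # 2.77
```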

Architecture

Dasein uses a cloud-first architecture for rule learning and synthesis:

┌─────────────────┐
│  Your Agent     │
│  (LangChain/    │
│   LangGraph)    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Dasein Wrapper  │  ◄── cognate()
│ - Trace Capture │
│ - Rule Injection│
└────────┬────────┘
         │
    ┌────┴────┐
    ▼         ▼
┌────────┐ ┌────────┐
│Pre-Run │ │Post-Run│
│Service │ │Service │
│        │ │        │
│Recalls │ │Learns  │
│Rules   │ │Rules   │
└────────┘ └────────┘

How It Works

  1. Pre-Run: Dasein queries cloud services for relevant learned rules based on the task
  2. Execution: Rules are injected into the agent's prompts/tools at optimal injection points
  3. Trace Capture: Every LLM call, tool invocation, and decision is captured
  4. Post-Run: Traces are sent to cloud services for rule synthesis and learning
  5. Next Run: Improved rules are automatically available
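The lifecycle above can be sketched as a plain wrapper (hypothetical names and an in-memory store standing in for the cloud services; Dasein's real implementation hooks into LangChain callbacks):

```python
class LearningWrapper:
    """Toy sketch of the pre-run / execute / post-run loop."""

    def __init__(self, agent, rule_store):
        self.agent = agent            # any callable taking a prompt string
        self.rule_store = rule_store  # stand-in for the cloud rule services
        self.trace = []

    def run(self, task):
        rules = self.rule_store.get(task, [])           # 1. pre-run: recall rules
        prompt = "\n".join(rules + [task])              # 2. inject into the prompt
        result = self.agent(prompt)                     # 3. execute, and...
        self.trace.append({"task": task, "result": result})  # ...capture the trace
        self.rule_store.setdefault(task, []).append(    # 4. post-run: synthesize a rule
            f"Previously answered: {result}")
        return result                                   # 5. next run sees the new rule

echo_agent = lambda prompt: prompt.upper()
wrapped = LearningWrapper(echo_agent, rule_store={})
wrapped.run("hello")
```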

API Reference

Core Functions

cognate(agent, weights=None, verbose=False, retry=0, performance_tracking=False, rule_trace=False)

Wrap any LangChain/LangGraph agent with Dasein's learning capabilities.

Parameters:

  • agent - LangChain or LangGraph agent instance
  • weights (dict) - Custom optimization weights for rule selection (w1-w5)
  • verbose (bool) - Enable detailed debug logging
  • retry (int) - Number of retries with learning (0 = single run, 2 = run twice with improvement)
  • performance_tracking (bool) - Show before/after performance metrics
  • rule_trace (bool) - Show detailed rule application trace

Returns: Wrapped agent with identical interface to the original

print_trace()

Display the execution trace of the last agent run.

get_trace()

Retrieve the execution trace as a list of dictionaries.

Returns: List[Dict] - Trace steps with timestamps, tokens, and decisions
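Since the trace is a plain list of dictionaries, it can be post-processed with ordinary Python. A sketch that totals token usage across steps (the exact key names here are assumptions; check your actual trace output):

```python
def total_tokens(trace):
    """Sum token counts across trace steps, skipping steps without them."""
    return sum(step.get("tokens", 0) for step in trace)

# trace = get_trace()  # in a real Dasein session
trace = [
    {"step": "llm_call", "tokens": 512, "decision": "use sql_db_query"},
    {"step": "tool_call", "tokens": 0},
    {"step": "llm_call", "tokens": 301, "decision": "final answer"},
]
print(total_tokens(trace))  # 813
```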

clear_trace()

Clear the current execution trace.

inject_hint(hint: str)

Manually inject a hint/rule for the next agent run.

Parameters:

  • hint (str) - The hint text to inject

reset_brain()

Clear all local state and event storage.

Supported Frameworks

  • ✅ LangChain Agents (all agent types)
  • ✅ LangGraph Agents (CompiledStateGraph)
  • ✅ Custom agents implementing standard interfaces
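"Standard interfaces" presumably means the callable surface LangChain agents expose, such as an `invoke` method taking and returning a dict. A minimal custom agent shaped that way (hypothetical; verify the exact requirements against your Dasein version):

```python
class MyAgent:
    """Bare-bones agent exposing a LangChain-style invoke interface."""

    def invoke(self, inputs: dict) -> dict:
        query = inputs["input"]
        return {"output": f"echo: {query}"}

agent = MyAgent()
# agent = cognate(agent)  # would wrap it, if the interface matches
print(agent.invoke({"input": "ping"})["output"])  # echo: ping
```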

Examples

See the examples/ directory for complete examples:

  • SQL Agent - Learn query patterns for a Chinook database
  • Browser Agent - Learn web scraping strategies
  • Research Agent - Multi-agent research coordination

Verbose Mode

For debugging, enable verbose logging:

agent = cognate(your_agent, verbose=True)

This shows detailed information about:

  • Rule retrieval from cloud services
  • Rule injection points and content
  • Trace capture steps
  • Post-run learning triggers

Requirements

  • Python 3.8+
  • LangChain 0.1.0+
  • LangChain Community 0.1.0+
  • LangChain Google GenAI 0.0.6+

See pyproject.toml for complete dependency list.

Configuration

Dasein uses cloud services for rule synthesis and storage. Configure service endpoints via environment variables:

export DASEIN_PRE_RUN_URL="https://your-pre-run-service.com"
export DASEIN_POST_RUN_URL="https://your-post-run-service.com"

Contact the Dasein team for cloud service access.
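The same endpoints can also be set from Python before importing Dasein, assuming it reads them from the environment (an assumption; the exact lookup timing isn't documented here):

```python
import os

# Hypothetical endpoint URLs; substitute the ones you were given.
os.environ["DASEIN_PRE_RUN_URL"] = "https://your-pre-run-service.com"
os.environ["DASEIN_POST_RUN_URL"] = "https://your-post-run-service.com"

# import dasein  # import after the environment is configured
```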

Performance

Dasein is designed for minimal overhead:

  • Pre-run: ~100-200ms for rule retrieval
  • Runtime: <1% overhead for trace capture
  • Post-run: Async - doesn't block your code

The benefits far outweigh the costs:

  🎯 30-50% token reduction on repeated tasks
  🎯 Fewer failed runs through learned error handling
  🎯 Faster execution with optimized tool usage

Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

License

MIT License - see LICENSE file for details.

Troubleshooting

Common Issues in Colab/Jupyter

Q: I see timeout warnings for dasein-pre-run and dasein-post-run services

A: These warnings can appear on first connection while the cloud services wake up (cold start). The services are fully public and will work after a brief initialization period. Your agent will continue running and learning will activate automatically once the services respond.
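If the cold-start warnings are disruptive, one option is to poll until the services respond before kicking off a run. A generic helper sketch (hypothetical; `ping` is any zero-argument callable you supply, e.g. an HTTP health check against your configured `DASEIN_PRE_RUN_URL`):

```python
import time

def wait_for_service(ping, attempts=5, delay=2.0):
    """Retry `ping` with linear backoff until it returns truthy."""
    for i in range(attempts):
        try:
            if ping():
                return True
        except Exception:
            pass  # treat network errors as "still waking up"
        time.sleep(delay * (i + 1))
    return False
```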

Q: I see dependency conflict warnings

A: These are safe to ignore in Colab. The package will work correctly despite version mismatches with Colab's pre-installed packages.



Citation

If you use Dasein in your research, please cite:

@software{dasein2025,
  title={Dasein: Universal Memory for Agentic AI},
  author={Dasein Team},
  year={2025},
  url={https://github.com/nickswami/dasein-core}
}

Built with ❤️ for the agentic AI community



Download files


Source Distribution

dasein_core-0.2.13.tar.gz (12.9 MB)


Built Distribution


dasein_core-0.2.13-py3-none-any.whl (12.9 MB)


File details

Details for the file dasein_core-0.2.13.tar.gz.

File metadata

  • File name: dasein_core-0.2.13.tar.gz
  • Size: 12.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for dasein_core-0.2.13.tar.gz:

  • SHA256: 1b014a9e40ef10f0eeab34e988edd6b3716f7e97a81f58908ba58b4be9359878
  • MD5: a5dc705d42fea84a0488fca110a4df1b
  • BLAKE2b-256: c3ed16d131a7a040c94613f413cc3867ae2a634137ff43ce2653bfcb578b3158


File details

Details for the file dasein_core-0.2.13-py3-none-any.whl.

File metadata

  • File name: dasein_core-0.2.13-py3-none-any.whl
  • Size: 12.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for dasein_core-0.2.13-py3-none-any.whl:

  • SHA256: 2221ffe113f6985fb42bfa659e75f6c3fab526bc73d63a1137e6c762dcf81779
  • MD5: 346a1f16f3a0becc29a7c265e7f5bb4c
  • BLAKE2b-256: 7874001e7239c931b460d9d0b0a3b152f3a86d7a45839ed0c6f57adeedcb99eb

