FlowScript integrations for AI agent frameworks. Decision intelligence memory for LangGraph, CrewAI, Google ADK, and OpenAI Agents SDK.

flowscript-agents

Drop-in reasoning memory for AI agent frameworks.



The Problem

Agent memory today is vector search over blobs. Your agent made a decision — why? What's blocking it? What tradeoffs did it weigh? Embeddings can't answer that.

flowscript-agents replaces flat memory with queryable reasoning for LangGraph, CrewAI, Google ADK, and OpenAI Agents SDK. Same interfaces your framework expects, but now memory.query.tensions() actually works.

Built on flowscript-core (TypeScript SDK) and flowscript-ldp (Python IR + query engine).


Install

# Core (framework-agnostic Memory class)
pip install flowscript-agents

# With your framework
pip install flowscript-agents[langgraph]
pip install flowscript-agents[crewai]
pip install flowscript-agents[google-adk]
pip install flowscript-agents[openai-agents]

# Everything
pip install flowscript-agents[all]

Quick Start (Framework-Agnostic)

The Memory class works standalone — no framework required.

from flowscript_agents import Memory

mem = Memory()

q = mem.question("Which database for agent sessions?")
mem.alternative(q, "Redis").decide(rationale="speed critical")
mem.alternative(q, "SQLite").block(reason="no concurrent writes")
mem.tension(
    mem.thought("Redis gives sub-ms reads"),
    mem.thought("cluster costs $200/mo"),
    axis="performance vs cost"
)

# Semantic queries — the thing no other memory gives you
print(mem.query.tensions())       # tradeoffs with named axes
print(mem.query.blocked())        # blockers + downstream impact
print(mem.query.alternatives(q.id))  # options + their states

# Persist
mem.save("./agent-memory.json")

# Next session
mem2 = Memory.load_or_create("./agent-memory.json")

LangGraph

Drop-in BaseStore implementation. Use as your LangGraph store — every item becomes a queryable FlowScript node.

from flowscript_agents.langgraph import FlowScriptStore

store = FlowScriptStore("./agent-memory.json")

# Standard LangGraph store operations
store.put(("agents", "planner"), "db_decision", {"value": "chose Redis for speed"})
items = store.search(("agents", "planner"), query="Redis")

# FlowScript queries on the same data
blockers = store.memory.query.blocked()
tensions = store.memory.query.tensions()

# Async support included
items = await store.aget(("agents",), "key")
await store.aput(("agents",), "key", {"value": "data"})

Install: pip install flowscript-agents[langgraph]


CrewAI

Duck-typed StorageBackend — plug into CrewAI's memory system.

from flowscript_agents.crewai import FlowScriptStorage

storage = FlowScriptStorage("./crew-memory.json")

# Standard CrewAI storage operations
storage.save({"content": "User prefers concise answers", "score": 0.9})
results = storage.search("user preferences", limit=5)

# Scoped storage
storage.save({"content": "API rate limit hit"}, metadata={"scope": "errors"})
scoped = storage.search("rate limit", scope="errors")

# FlowScript queries
tensions = storage.memory.query.tensions()
blockers = storage.memory.query.blocked()

Install: pip install flowscript-agents[crewai]


Google ADK

BaseMemoryService implementation for ADK agents.

from flowscript_agents.google_adk import FlowScriptMemoryService

memory_service = FlowScriptMemoryService("./adk-memory.json")

# Use with ADK Runner
# runner = Runner(agent=agent, memory_service=memory_service, ...)

# Session events are automatically extracted as FlowScript nodes
await memory_service.add_session_to_memory(session)

# Search enriched with FlowScript query results
results = await memory_service.search_memory("my-app", "user-1", "database decision")
# Results include tensions, blockers when search matches reasoning patterns

# Direct query access
tensions = memory_service.memory.query.tensions()

Install: pip install flowscript-agents[google-adk]


OpenAI Agents SDK

Session protocol implementation for the OpenAI Agents SDK.

from flowscript_agents.openai_agents import FlowScriptSession

session = FlowScriptSession("conversation_123", "./openai-memory.json")

# Standard session operations
session.add_items([
    {"role": "user", "content": "Which database should we use?"},
    {"role": "assistant", "content": "I recommend Redis for the speed requirement."}
])
history = session.get_items(limit=10)

# FlowScript queries on conversation reasoning
tensions = session.memory.query.tensions()
blockers = session.memory.query.blocked()

Install: pip install flowscript-agents[openai-agents]


What You Get That Vector Memory Doesn't

| Capability | Vector stores | flowscript-agents |
|---|---|---|
| "Why did we decide X?" | Dig through logs | memory.query.why(node_id) |
| "What's blocking progress?" | Hope you logged it | memory.query.blocked() |
| "What tradeoffs exist?" | Good luck | memory.query.tensions() |
| "What alternatives were considered?" | Not tracked | memory.query.alternatives(q_id) |
| "What if we remove this?" | Rebuild from scratch | memory.query.what_if(node_id) |
| Human-readable export | JSON blobs | .fs files your PM can read |

These queries don't compete with embeddings — they're orthogonal. Use both: vector search for "find similar," FlowScript for "understand reasoning."
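To make the split concrete, here is a toy sketch in plain Python (not the flowscript-agents API — the Node class, token-overlap "search," and edge walk below are all illustrative stand-ins). Similarity search surfaces *related* text; only explicit causal edges can answer *why*:

```python
# Toy illustration (hypothetical, not the flowscript_agents API):
# similarity search finds related text; explicit edges answer "why".
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    text: str
    caused_by: list = field(default_factory=list)  # ids of upstream causes

nodes = {
    "n1": Node("n1", "speed is critical for session reads"),
    "n2": Node("n2", "chose Redis", caused_by=["n1"]),
    "n3": Node("n3", "Redis cluster costs $200/mo"),
}

def similar(query, k=2):
    # stand-in for vector search: crude token overlap
    qt = set(query.lower().split())
    scored = sorted(nodes.values(),
                    key=lambda n: -len(qt & set(n.text.lower().split())))
    return [n.id for n in scored[:k]]

def why(node_id):
    # walk explicit causal edges: the structured query embeddings can't do
    chain, frontier = [], list(nodes[node_id].caused_by)
    while frontier:
        nid = frontier.pop()
        chain.append(nid)
        frontier.extend(nodes[nid].caused_by)
    return chain

print(similar("why Redis"))  # ['n2', 'n3']: related text, no causality
print(why("n2"))             # ['n1']: the actual reason
```

Both mechanisms answer different questions over the same nodes, which is the sense in which they are orthogonal rather than interchangeable.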


API Reference

Memory (core)

from flowscript_agents import Memory

mem = Memory()                           # new empty
mem = Memory.load("./memory.json")       # from file
mem = Memory.load_or_create("./mem.json") # zero-friction entry

# Build reasoning
node = mem.thought("content")            # also: statement, question, action, insight, completion
alt = mem.alternative(question, "option") # linked to question
node.causes(other)                       # causal relationship
node.tension_with(other, axis="speed vs cost")
node.decide(rationale="reason")          # state: decided
node.block(reason="why")                 # state: blocked
node.unblock()                           # remove blocked state

# Query
mem.query.why(node_id)                   # causal chain
mem.query.tensions()                     # all tensions with axes
mem.query.blocked()                      # all blockers + impact
mem.query.alternatives(question_id)      # options + states
mem.query.what_if(node_id)               # downstream impact

# Persist
mem.save("./memory.json")               # atomic write
mem.save()                               # re-save to loaded path

Adapters

| Framework | Class | Interface |
|---|---|---|
| LangGraph | FlowScriptStore | BaseStore (get/put/search/delete + async) |
| CrewAI | FlowScriptStorage | StorageBackend (save/search/update/delete + scopes) |
| Google ADK | FlowScriptMemoryService | BaseMemoryService (add_session/search_memory) |
| OpenAI Agents | FlowScriptSession | Session (get_items/add_items/pop_item/clear) |

All adapters expose .memory for direct FlowScript query access.


Session Lifecycle

Memory gets smarter when you use it. Every query touches returned nodes — incrementing frequency and updating timestamps. Nodes that keep getting queried graduate through tiers (current → developing → proven → foundation). Dormant nodes get pruned to an audit trail.
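The mechanics can be pictured with a minimal sketch. Everything here — the touch counts, the 3-touch graduation threshold, the 30-day dormancy window — is an assumption for illustration, not the library's actual internals:

```python
# Illustrative sketch of frequency-based tier graduation and pruning.
# Thresholds and timings are assumptions, not flowscript's internals.
import time

TIERS = ["current", "developing", "proven", "foundation"]

class Node:
    def __init__(self, content):
        self.content = content
        self.frequency = 0
        self.last_touched = time.time()
        self.tier = "current"

    def touch(self):
        # every query result "touches" its nodes
        self.frequency += 1
        self.last_touched = time.time()
        # graduate one tier per threshold crossed (every 3 touches here)
        idx = min(self.frequency // 3, len(TIERS) - 1)
        self.tier = TIERS[max(idx, TIERS.index(self.tier))]  # never demote

def prune(nodes, dormant_after=30 * 86400):
    """Move long-untouched nodes to an audit trail instead of deleting them."""
    now = time.time()
    live, audit = [], []
    for n in nodes:
        (audit if now - n.last_touched > dormant_after else live).append(n)
    return live, audit

n = Node("chose Redis")
for _ in range(7):
    n.touch()
print(n.tier)  # "proven": 7 touches crossed two thresholds
```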

Standalone Memory

mem = Memory.load_or_create("./memory.json")

# Start of session: query to orient
blockers = mem.query.blocked()    # touches returned nodes
tensions = mem.query.tensions()   # touches returned nodes

# ... agent does work, adds decisions, queries reasoning ...

# End of session: prune dormant + save
mem.prune()   # dormant nodes → audit trail
mem.save()    # persist to disk

Per-Framework Guidance

| Framework | When to prune/save | How |
|---|---|---|
| LangGraph | After your graph run completes | store.memory.prune(); store.memory.save() |
| CrewAI | After crew kickoff finishes | storage.memory.prune(); storage.memory.save() |
| Google ADK | After runner session ends | service.memory.prune(); service.memory.save() |
| OpenAI Agents | After conversation turn/session | session.memory.prune(); session.memory.save() |

All adapters auto-save on put/save/add_items operations. Explicit prune() + save() at session boundaries keeps the memory garden healthy.
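One way to guarantee the boundary calls happen even when an agent run raises is a small context manager. This is a sketch, not part of the package: it works with any object exposing prune() and save() (such as an adapter's .memory), and the StubMemory class below stands in for the real Memory so the example is self-contained:

```python
# Sketch: guarantee prune() + save() at session boundaries, even on errors.
# memory_session() and StubMemory are hypothetical helpers, not library API.
from contextlib import contextmanager

class StubMemory:
    """Stand-in for a Memory object; records that lifecycle calls ran."""
    def __init__(self):
        self.pruned = False
        self.saved = False
    def prune(self):
        self.pruned = True
    def save(self):
        self.saved = True

@contextmanager
def memory_session(mem):
    try:
        yield mem
    finally:
        # session boundary: prune dormant nodes, then persist
        mem.prune()
        mem.save()

mem = StubMemory()
with memory_session(mem) as m:
    pass  # ... agent run: queries, decisions, new nodes ...
print(mem.pruned, mem.saved)  # True True
```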


Ecosystem

  • flowscript-core — TypeScript SDK with Memory class, asTools() (12 OpenAI-format tools), token budgeting, audit trail
  • flowscript-ldp — Python IR types + query engine (the foundation this package builds on)
  • flowscript.org — Web editor, D3 visualization, live query panel

License

MIT
