liel
Single-file graph memory for local AI, agents, and Python applications. Standalone. Zero core dependencies. No server.
liel is a lightweight local graph memory store for LLM tools, AI agents, and Python applications.
It stores facts, decisions, tasks, files, sources, tool results, and their relationships in one portable .liel file.
The core package has no runtime dependencies. No external database server, cloud service, or background daemon is required. On supported platforms, pip install liel is enough to get started.
MCP integration is optional. Install liel[mcp] only when you want to expose a .liel memory file to an MCP-capable AI tool.
Under the hood, liel is a Rust-core embedded Property Graph database with a Python-first API and optional MCP integration.
If SQLite is the one-file relational database, liel is the one-file graph memory layer for relationship-centric AI workflows.
Etymology: a portmanteau of French lier (to connect) and Latin ligare (to bind).
Table of contents
- What liel gives AI tools
- Problems this helps solve
- Install
- Quickstart: LLM memory with MCP
- Quickstart: Python property graph
- What to store
- Vector stores and liel
- When to use liel
- When not to use liel
- Features
- Reliability and failure model
- API reference
- File format
- Limitations
- Documentation
- Contributing
- License
What liel gives AI tools
liel gives local AI tools a memory file they can update, traverse, inspect, and carry between sessions.
With one .liel file, an AI tool can:
- Store entities such as projects, files, tasks, people, sources, and notes.
- Store explicit facts, decisions, observations, and tool results.
- Connect those records with typed relationships.
- Retrieve nearby context by traversing the graph.
- Keep memory local, portable, and easy to back up.
- Run without a database server or background daemon.
- Use the core library with no required runtime dependencies.
This turns scattered AI memory into a durable graph file that both humans and tools can inspect.
Problems this helps solve
Because memory is stored as an explicit local graph, liel helps with problems common in local AI workflows:
- Decisions and assumptions get lost across sessions in chat history.
- Facts, files, sources, tasks, and tool outputs become hard to connect later.
- Keyword search and vector similarity alone do not model explicit relationships.
- AI memory is hard for humans to inspect, clean up, copy, or back up.
- Small local agents often do not need a database server or cloud service.
- Memory needs to move between machines, archives, and experiments as one file.
Install
Install the dependency-free core package:
pip install liel
This installs prebuilt wheels for supported platforms — Rust is not required at install time.
Install the optional MCP integration only when you want an MCP-capable AI tool to use a .liel file as external memory:
pip install "liel[mcp]"
Platform support
- OS: Linux, macOS, Windows
- Architecture: x86_64 first, arm64 where practical
- Python: 3.9 or newer
Source build (for contributors)
You only need this if you are hacking on liel itself, or your platform/Python combination has no prebuilt wheel.
Prerequisites
# Linux / WSL
sudo apt-get update && sudo apt-get install -y build-essential
# macOS
xcode-select --install
# Rust (any OS)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
Build and install in editable mode
git clone https://github.com/hy-token/liel.git
cd liel
python3 -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -r requirements-dev.txt
maturin develop
Verify
python -c "import liel; print(liel.__version__)"
See CONTRIBUTING.md for the full developer workflow.
Quickstart: LLM memory with MCP
Install the MCP-enabled package:
pip install "liel[mcp]"
Start the MCP server with a local memory file:
liel-mcp --path agent-memory.liel
The .liel file becomes local external memory for your AI tool. Through MCP, the tool can read and write nodes, edges, and properties, then retrieve related context by traversing the graph.
An agent can store things like:
- Project goals
- User decisions
- Important files and their roles
- Tool results
- Task dependencies
- Sources behind a decision
For real MCP client configuration, prefer an absolute path because clients may start servers from a different working directory:
liel-mcp --path /absolute/path/to/agent-memory.liel
agent-memory.liel is just a file. Copy it to back it up, move it to another machine, or delete it when you no longer need it.
Quickstart: Python property graph
You can also use liel directly as an embedded property graph database from Python.
Basic graph
import liel
with liel.open(":memory:") as db:
    alice = db.add_node(["Person"], name="Alice", age=30)
    bob = db.add_node(["Person"], name="Bob", age=25)
    db.add_edge(alice, "KNOWS", bob, since=2020)
    db.commit()

    friends = db.neighbors(alice, edge_label="KNOWS")
    print(friends[0]["name"])  # Bob
Heterogeneous knowledge graph with QueryBuilder
import liel
with liel.open(":memory:") as db:
    alice = db.add_node(["Person"], name="Alice", role="Engineer")
    bob = db.add_node(["Person"], name="Bob", role="Designer")
    carol = db.add_node(["Person"], name="Carol", role="Engineer")
    dave = db.add_node(["Person", "Manager"], name="Dave", role="Manager")
    acme = db.add_node(["Company"], name="Acme", industry="SaaS")
    py = db.add_node(["Technology"], name="Python", category="Language")

    db.add_edge(alice, "WORKS_AT", acme, since=2021)
    db.add_edge(alice, "USES", py, proficiency="expert")
    db.add_edge(alice, "KNOWS", carol)
    db.commit()

    engineers = (
        db.nodes()
        .label("Person")
        .where_(lambda n: n.get("role") == "Engineer")
        .fetch()
    )
    print([n["name"] for n in engineers])  # ['Alice', 'Carol']

    managers = db.nodes().label("Manager").fetch()  # multi-label filter
    print([n["name"] for n in managers])  # ['Dave']
→ examples/02_knowledge_graph.py
Bulk import public graph data in a single transaction
import json, urllib.request, liel
url = "https://raw.githubusercontent.com/vega/vega-datasets/main/data/miserables.json"
data = json.loads(urllib.request.urlopen(url).read().decode("utf-8"))
with liel.open(":memory:") as db:
    node_ids = []
    with db.transaction():  # 1 fsync for the whole batch
        for n in data["nodes"]:
            node = db.add_node(["Character"], name=n["name"], group=n["group"])
            node_ids.append(node.id)
        for e in data["links"]:
            db.add_edge(
                node_ids[e["source"]],
                "APPEARS_WITH",
                node_ids[e["target"]],
                weight=e["value"],
            )
The WAL is flushed only on commit, so wrapping a bulk import in db.transaction() keeps I/O cost flat regardless of row count.
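To see why batching matters, here is a toy counter — not liel's implementation — that models "one WAL flush per commit". The class and numbers are illustrative only:

```python
# Toy model (not liel's code): count flushes to show why batching
# N writes into one transaction keeps fsync cost flat.
class ToyWAL:
    def __init__(self):
        self.fsyncs = 0
        self.pending = 0

    def write(self):
        self.pending += 1

    def commit(self):
        if self.pending:
            self.fsyncs += 1  # WAL flushed once per commit
            self.pending = 0

# Per-row commits: one fsync per write
per_row = ToyWAL()
for _ in range(1000):
    per_row.write()
    per_row.commit()

# One transaction: one fsync for the whole batch
batched = ToyWAL()
for _ in range(1000):
    batched.write()
batched.commit()

print(per_row.fsyncs, batched.fsyncs)  # 1000 1
```

The same shape applies to the import above: wrapping both loops in one `db.transaction()` turns thousands of potential flushes into one.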
What to store
A .liel file can hold structured AI memory such as:
- Project: repositories, products, research topics
- Task: work items, TODOs, blockers
- Decision: choices made by the user or agent
- Observation: facts learned during tool use
- Source: files, URLs, documents, command outputs
- Person/Team: people and ownership
- File/Module: codebase structure
Relationships can express:
- DEPENDS_ON
- MENTIONS
- DERIVED_FROM
- DECIDED_BY
- BLOCKED_BY
- RELATED_TO
- UPDATED_BY
See examples/07_agent_memory.py for a small project-memory graph using tasks, files, decisions, sources, and observations.
Vector stores and liel
liel is not a vector database replacement.
Vector stores are useful for semantic similarity search over text. liel is for explicit memory: facts, entities, decisions, dependencies, provenance, and relationships you can traverse.
Many AI workflows can use both:
- Use vector search to find similar text.
- Use graph memory to answer "what is this decision based on?", "which files are related to this task?", or "what changed this assumption?"
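The division of labor can be sketched with plain dicts — this is a conceptual toy, not liel's API, and the node names and edge labels are invented for illustration:

```python
# Toy sketch: a vector store surfaces a candidate node by similarity;
# explicit graph edges then answer "what is this decision based on?".
edges = {  # node -> list of (edge_label, target)
    "decision:use-sqlite": [
        ("DERIVED_FROM", "source:benchmark.md"),
        ("DECIDED_BY", "person:alice"),
    ],
    "source:benchmark.md": [],
    "person:alice": [],
}

def provenance(node, depth=2):
    """Collect (label, target) pairs reachable within `depth` hops."""
    out = []
    for label, target in edges.get(node, []):
        out.append((label, target))
        if depth > 1:
            out.extend(provenance(target, depth - 1))
    return out

# Pretend a vector search returned this decision as the top hit:
top_hit = "decision:use-sqlite"
print(provenance(top_hit))
```

Vector similarity finds the entry point; the typed edges supply the provenance that similarity alone cannot.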
When to use liel
Use liel when:
- You want local AI memory as a file, not a server.
- Relationships between entities matter.
- You want to persist decisions, facts, sources, tasks, and tool outputs across sessions.
- You need graph traversal and relationship modeling without running a separate database server.
- You want all memory in one portable .liel file that is easy to copy, back up, and archive.
- You want a practical Rust-core graph engine with a Python-first developer experience.
Example use cases:
- Local agent memory
- Project memory for coding assistants
- Personal or project knowledge graphs
- MCP-backed memory for AI tools
- Tool result caches with provenance
- Research assistant memory
- Lightweight relationship stores for RAG pipelines
When not to use liel
| If you want… | Use instead |
|---|---|
| Semantic similarity search over text | A vector database or embedding index |
| Graph queries on top of existing tabular data | DuckDB recursive CTEs / DuckPGQ |
| Tens of millions of nodes/edges, or concurrent writes | Neo4j, Amazon Neptune |
| Graph-style queries from a SQL-familiar team on existing relational data | PostgreSQL WITH RECURSIVE |
| High-throughput, low-latency writes | A dedicated server-backed graph database |
| Documents or full-text search as the primary access pattern | MongoDB, Elasticsearch |
liel uses page-level WAL, has no full-text or aggregation queries, and is single-process by design — see Limitations.
Features
- Single file — memory lives in one .liel file.
- Zero core runtime dependencies — the core liel package has no required runtime dependencies.
- Optional MCP integration — install liel[mcp] only when you want to expose graph memory to MCP clients.
- No database server — no external service, daemon, or cloud database is required.
- Property Graph — nodes and edges support multiple labels and arbitrary properties.
- Crash-safe — transactional guarantees via a Write-Ahead Log (WAL).
- :memory: mode — in-memory operation for tests and experiments.
- Python-first API — type stubs are included for editor support.
Status
- The Rust core is implemented and tested.
- The Python API is usable today for local development, scripts, research, prototypes, and local AI memory experiments.
- CI runs Rust + Python tests on Linux, Windows, and macOS for every pull request and for version-tag pushes such as v0.1.0 — see the Actions tab.
- Practical scale (guidance, not a warranty): a few gigabytes in a single .liel file is a reasonable comfort zone on typical desktop hardware. Beyond that depends on RAM, disk, and access patterns — measure your workload.
- This project does not promise fitness for a particular purpose, SLA-style support, or legal indemnity. See product trade-offs for the explicit list of trade-offs.
Tests
cargo test # Rust unit tests
pytest tests/python/ # Python integration tests
Latest CI results: GitHub Actions.
Reliability and failure model
liel is designed around a narrow reliability contract: one writer process, one local file, explicit commits.
What is covered:
- Committed data survives process crashes.
commit()writes modified pages to the page-level WAL, fsyncs the WAL, applies the pages to their canonical locations, and fsyncs the data file. - Interrupted commits are recovered on open. If a file is opened with a non-empty WAL, recovery replays complete WAL entries back into the data file.
- Double-open is rejected. Opening the same
.lielpath twice for writing raisesAlreadyOpenError; same-process conflicts use an in-process registry, and cross-process conflicts use a<file>.lock/directory. - Corrupt or incompatible files fail closed. Header, checksum, layout, and WAL validation errors surface as explicit
GraphDBErrorsubclasses rather than silent best-effort reads.
What is not covered:
- Multi-process concurrent mutation is not supported. The lock directory rejects a second writer to protect the file, but it does not make concurrent writes safe. If several tools need to write, put one service or worker in charge of the .liel file.
- Uncommitted changes are disposable. If a process exits before commit(), the next open returns to the last committed state.
- Filesystem guarantees matter. liel relies on the local filesystem honoring write and fsync ordering. Network filesystems, sync folders, and unusual virtual filesystems may not provide the same durability semantics.
See the full reliability and failure model and product trade-offs before using liel as durable application state.
API reference
liel.open(path) → GraphDB
db = liel.open("path/to/graph.liel") # file (created if it does not exist)
db = liel.open(":memory:") # in-memory (for testing)
with liel.open("graph.liel") as db:  # context manager
    ...
Use one writer process per .liel file. Concurrent multi-process writes are not supported; if several applications need to modify the same graph, centralize writes through one service or worker.
Opening the same .liel path twice is detected and rejected with liel.AlreadyOpenError. Within one process this uses an in-process registry; across processes it uses a <file>.lock/ directory. Close the previous handle (or let its with block exit) before re-opening:
with liel.open("graph.liel") as db:
    ...

# the with block releases the writer slot; re-opening here is fine
with liel.open("graph.liel") as db:
    ...
If a writer crashes and leaves .lock/ behind, the next open() reclaims it when the recorded owner PID is clearly dead. See product trade-offs for the write-safety trade-off and recommended deployment pattern.
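The "recorded owner PID is clearly dead" check can be sketched in pure Python. This is a POSIX-only illustration of the liveness idea, not liel's actual lock code (on Windows, `os.kill` with an arbitrary signal number terminates the target, so a real implementation needs a platform-specific path):

```python
import os

# POSIX-only sketch: signal 0 performs an existence check without
# delivering a signal, so it can test whether a lock owner is alive.
def pid_alive(pid: int) -> bool:
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False      # no such process: the lock is stale
    except PermissionError:
        return True       # process exists but belongs to another user
    return True

print(pid_alive(os.getpid()))  # True: this process is alive
```

A stale lock directory would be reclaimed only when this check returns False for the recorded owner.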
Node operations
node = db.add_node(["Person", "Employee"], name="Alice", age=30)
node.id # int: node ID (1-based)
node.labels # list[str]
node["name"] # "Alice"
node.properties # dict (a copy)
"name" in node # True
node.get("x") # None (missing key)
db.get_node(1)
db.update_node(1, age=31) # replace the node's property map
db.delete_node(node) # also deletes incident edges
db.all_nodes()
db.node_count()
Edge operations
edge = db.add_edge(alice, "KNOWS", bob, since=2020)
edge.id # int
edge.label # "KNOWS"
edge.from_node # source node ID
edge.to_node # target node ID
edge["since"] # 2020
db.get_edge(1)
db.update_edge(1, since=2021)
db.delete_edge(edge)
db.all_edges()
db.edge_count()
# Returns an existing edge matching label + properties, or creates one
e = db.merge_edge(alice, "KNOWS", bob, since=2020)
db.out_edges(alice)
db.out_edges(alice, label="KNOWS")
db.in_edges(bob)
Adjacency queries
# direction: "out" (default) | "in" | "both"
db.neighbors(alice)
db.neighbors(alice, edge_label="KNOWS")
db.neighbors(alice, direction="in")
db.neighbors(alice, direction="both")
Traversal
# BFS / DFS → [(Node, depth), ...]
for node, depth in db.bfs(alice, max_depth=3):
    print(f"{' ' * depth}{node['name']} (depth={depth})")

for node, depth in db.dfs(alice, max_depth=3):
    ...
# Minimum-hop directed path → [Node, ...] | None
# (unweighted BFS on out-edges; not Dijkstra)
path = db.shortest_path(alice, carol)
path = db.shortest_path(alice, carol, edge_label="KNOWS")
shortest_path follows out-edges only and minimizes the number of hops; edge properties are not weights. Performance notes for traversal and scan-heavy APIs live in the Python guide.
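The minimum-hop behavior can be illustrated with a self-contained BFS over a plain adjacency dict — a sketch of the algorithm class, not liel's Rust implementation:

```python
from collections import deque

# Unweighted BFS over out-edges: returns the path with the fewest
# hops, ignoring any per-edge properties (they are not weights).
def shortest_path(adj, start, goal):
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:  # walk parents back to start
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None  # goal unreachable via out-edges

adj = {"alice": ["bob"], "bob": ["carol"], "carol": []}
print(shortest_path(adj, "alice", "carol"))  # ['alice', 'bob', 'carol']
print(shortest_path(adj, "carol", "alice"))  # None (directed)
```

Note the directionality: because only out-edges are followed, the reverse query returns None even though the nodes are connected.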
QueryBuilder (chained methods)
results = db.nodes().label("Person").where_(lambda n: n["age"] > 20).fetch()
count = db.nodes().label("Person").count()
exists = db.nodes().label("Person").where_(lambda n: n["name"] == "Alice").exists()
page2 = db.nodes().label("Person").skip(10).limit(10).fetch()
edges = db.edges().label("KNOWS").where_(lambda e: e["since"] >= 2020).fetch()
Transactions
db.add_node(["Person"], name="Alice")
db.commit()
db.rollback()
with db.transaction():  # recommended
    db.add_node(["Person"], name="Alice")
    db.add_edge(alice, "KNOWS", bob)
# normal exit -> commit; exception -> rollback
db.begin() # compatibility shim — no state change today
Utilities
db.vacuum() # compact the prop region
db.clear() # fully reset the DB, discard dirty state, and reset IDs to 1
db.repair_adjacency() # rebuild adjacency heads / degrees from live edges
db.info() # {"version": "1.0", "node_count": N, "edge_count": E, "file_size": bytes}
rows = db.all_nodes_as_records() # bulk dict records (fewer PyO3 objects)
rows = db.all_edges_as_records()
stats = db.degree_stats() # { node_id: (out_deg, in_deg) }
sub = db.edges_between({alice.id, bob.id, carol.id}) # edges fully inside the set
JSON import/export is not built into GraphDB. See examples/06_export.py and examples/03_bulk_import.py for reference scripts.
If liel.CorruptedFileError reports damaged adjacency metadata, stop writing to the file, take a backup, and run db.repair_adjacency() before retrying. If repair fails because a live edge points at a missing node, treat the file as more deeply damaged and restore from backup or salvage readable records into a new database.
Node / Edge objects
| Attribute / method | Type | Description |
|---|---|---|
| .id | int | Auto-assigned ID (1-based) |
| .labels | list[str] | Node labels |
| .label | str | Edge label |
| .from_node | int | Edge source node ID |
| .to_node | int | Edge target node ID |
| .properties | dict | Property dict (a copy) |
| obj["key"] | Any | Property access (raises KeyError) |
| obj.get("key") | Any \| None | Property access (default None) |
| "key" in obj | bool | Check property existence |
Supported property types
| Python | Stored as |
|---|---|
| None | Null |
| bool | Bool |
| int | Int64 |
| float | Float64 |
| str | String (UTF-8) |
| list | List (recursive) |
| dict | Map (recursive) |
Exception classes
liel.GraphDBError # base class for all liel exceptions
liel.NodeNotFoundError # node does not exist
liel.EdgeNotFoundError # edge does not exist
liel.CorruptedFileError # file is corrupted
liel.TransactionError # transaction violation
try:
    db.delete_node(9999)
except liel.GraphDBError as e:
    print(e)
Type stubs
Type definitions are provided in python/liel/liel.pyi, compatible with mypy and pyright.
File format
The on-disk unit is a 4096-byte page. Page 0 (offsets 0..4096) starts with the 128-byte file header; the remaining 3968 bytes of page 0 are unused. The WAL has a fixed 4 MiB reservation starting at byte offset 4096 (PAGE_SIZE). After the WAL reservation, node / edge / property extents (1 MiB each) and extent-index pages are appended as needed. Extent locations are tracked via header fields and index-page chains — there is no single contiguous "data region".
Offset 0 - 127 : File header (128 bytes); magic, counts, IDs, extent-index heads, WAL fields
Offset 128 - 4095 : Unused (padding to complete page 0)
Offset 4096 - (4096 + 4 MiB - 1) : WAL reservation (1024 pages; live length in header `wal_length`)
Offset 4198400 - end : Extents and index pages (4 KiB pages), allocated toward EOF
- NodeSlot: fixed 64 bytes
- EdgeSlot: fixed 80 bytes
- Adjacency list: singly linked, prepend on insert
- Properties: custom binary format (no external crate dependencies)
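The offsets in the layout table above follow from a few constants. This is a sketch of the arithmetic only, not the real file parser:

```python
# Layout arithmetic for the .liel on-disk format described above.
PAGE_SIZE = 4096
HEADER_BYTES = 128
WAL_PAGES = 1024                       # 4 MiB WAL reservation
WAL_START = PAGE_SIZE                  # WAL begins right after page 0
WAL_BYTES = WAL_PAGES * PAGE_SIZE
EXTENTS_START = WAL_START + WAL_BYTES  # first extent / index page

print(PAGE_SIZE - HEADER_BYTES)  # 3968 unused bytes in page 0
print(WAL_BYTES)                 # 4194304 (4 MiB)
print(EXTENTS_START)             # 4198400, matching the offset table
```

Checking the numbers this way also makes the earlier offset table easy to audit: 4096 + 4 MiB lands exactly at byte 4198400, where extents begin.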
The byte-level format specification lives in the GitHub repository: docs/reference/format-spec.md.
Limitations
- No concurrent writes to the same file. A second writer is rejected with AlreadyOpenError using an in-process registry plus a cross-process lock directory. This protects the file, but it does not make peer-to-peer multi-writer mutation supported.
- The Python GraphDB uses a process-wide lock (Arc<Mutex<...>>). Concurrent calls from multiple threads serialize on the same handle.
- No query language. Python API and QueryBuilder only. Cypher and similar DSLs are deliberate non-goals for the current product shape.
- No property index. Filtered queries use full scans plus optional Python predicates — see the Python guide for API-level performance notes.
- No WASM support. Browser and WASM support are backlog ideas, not part of the current compatibility promise.
If your deployment needs several producers, the recommended pattern today is one writer + many readers rather than peer-to-peer multi-process mutation of the same file.
Documentation
The PyPI source distribution is intentionally small and does not include the full documentation tree or example scripts; use the GitHub repository for those.
Contributing
Pull requests and issues are welcome. Please:
- Read CONTRIBUTING.md before opening a PR.
- Run the local checks (cargo fmt, cargo clippy, cargo test, pytest tests/python/) — they mirror CI.
- Keep changes focused. For larger changes, open an issue first to discuss the approach.
License
Project details
Release history
Download files
Source Distribution
Built Distribution
File details
Details for the file liel-0.1.0.tar.gz.
File metadata
- Download URL: liel-0.1.0.tar.gz
- Upload date:
- Size: 130.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: maturin/1.13.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6e8c88c36afa9fbfe032de0d267417dd3a1539e397cad4c74ab5e7095d07ea58 |
| MD5 | fd699800b2de10f38b4c828f69bab094 |
| BLAKE2b-256 | 5f08b1a32a09aa13778cd4c078ad337904df715ad049848663900c0ac77559b6 |
File details
Details for the file liel-0.1.0-cp39-abi3-win_amd64.whl.
File metadata
- Download URL: liel-0.1.0-cp39-abi3-win_amd64.whl
- Upload date:
- Size: 351.2 kB
- Tags: CPython 3.9+, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: maturin/1.13.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9bef7fd400ab3a98d4d9be7032438f0a3676e4f6d6b3e0cffd91ae41a0172270 |
| MD5 | ee4e1da8e511a9c66e7f619bf9a8e5ae |
| BLAKE2b-256 | 9ffc88d937ff254d12c13a054b94e063937e28e11390bae450ae2d5ee4a248d9 |