# anura-graph

Python client for the Anura Memory API — GraphRag + FilesRag.

Anura Memory provides two memory products for AI agents:
- GraphRag — Knowledge graph with automatic triple extraction, deduplication, and hybrid retrieval
- FilesRag — Markdown file storage with heading-based chunking and semantic search
## Installation

```shell
pip install anura-graph
```

## Quick Start

```python
from graphmem import GraphMem

mem = GraphMem(api_key="gm_your_key_here")

# --- GraphRag ---
mem.remember("Alice is VP of Engineering at Acme Corp")
ctx = mem.get_context("What does Alice do?")

# --- FilesRag ---
mem.write_file("/notes/standup.md", "# Standup\n## 2026-02-21\n- Shipped auth module")
results = mem.search_files("auth module")
```

## Configuration

```python
from graphmem import GraphMem, RetryConfig

mem = GraphMem(
    api_key="gm_your_key_here",
    base_url="https://anuramemory.com",       # default
    retry=RetryConfig(
        max_retries=3,                        # default
        base_delay=0.5,                       # seconds, default
        max_delay=10.0,                       # seconds, default
        retry_on=[429, 500, 502, 503, 504],   # default
    ),
    timeout=30.0,                             # seconds, default
)
```
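
The exact backoff formula the client uses is not documented here; a common scheme, and a reasonable assumption given `base_delay` and `max_delay`, is exponential backoff: double the delay on each retry attempt, capped at `max_delay`. The sketch below shows the delay schedule those defaults would produce (the doubling rule and absence of jitter are assumptions, not taken from the library source):

```python
# Sketch of an exponential-backoff schedule matching the RetryConfig
# defaults above (base_delay=0.5, max_delay=10.0, max_retries=3).
# ASSUMPTION: delay doubles per attempt and is capped at max_delay;
# the client's real formula (and any jitter) may differ.

def backoff_delays(max_retries: int = 3, base_delay: float = 0.5,
                   max_delay: float = 10.0) -> list[float]:
    """Delay in seconds before each retry attempt: doubling, capped."""
    return [min(base_delay * (2 ** attempt), max_delay)
            for attempt in range(max_retries)]

print(backoff_delays())   # [0.5, 1.0, 2.0]
print(backoff_delays(6))  # [0.5, 1.0, 2.0, 4.0, 8.0, 10.0]
```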
The client can be used as a context manager:

```python
with GraphMem(api_key="gm_your_key_here") as mem:
    mem.remember("Alice works at Acme")
```

## API Reference

### GraphRag
#### `remember(text) -> RememberResult`

Extract knowledge from text and store it as triples in the graph.

```python
result = mem.remember("Einstein was born in Ulm, Germany")
print(result.extracted_count)  # 1
print(result.merged_count)     # 1
```

#### `get_context(query, options?) -> ContextResult`

Retrieve context from the knowledge graph.

```python
from graphmem import ContextOptions

# JSON format (default)
ctx = mem.get_context("Einstein")
print(ctx.entities)  # [{ "name": "Albert Einstein" }, ...]

# Markdown format (ideal for LLM system prompts)
ctx = mem.get_context("Einstein", ContextOptions(format="markdown"))
print(ctx.content)  # "- Albert Einstein -> BORN_IN -> Ulm..."

# Hybrid mode (graph + vector + communities)
ctx = mem.get_context("Einstein", ContextOptions(mode="hybrid"))
```

#### `search(entity) -> SearchResult`

Find an entity and its direct (1-hop) connections.

```python
result = mem.search("Alice")
print(result.edges)
```

#### `ingest_triples(triples) -> IngestResult`

Ingest pre-formatted triples directly (no LLM extraction).

```python
from graphmem import Triple

result = mem.ingest_triples([
    Triple(subject="TypeScript", predicate="CREATED_BY", object="Microsoft"),
])
```
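
Because `ingest_triples` skips LLM extraction, it suits bulk loads from already-structured data. A small sketch of building a triple list from rows (note: `Triple` below is a local stand-in dataclass so the snippet runs without the package installed; in real code import it with `from graphmem import Triple`):

```python
# Bulk-building triples from structured rows before a single
# mem.ingest_triples(...) call. The Triple dataclass here is a
# stand-in for graphmem's Triple, defined locally so this runs
# without the graphmem package.
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    predicate: str
    object: str

rows = [
    ("TypeScript", "CREATED_BY", "Microsoft"),
    ("Rust", "CREATED_BY", "Mozilla"),
]
triples = [Triple(s, p, o) for s, p, o in rows]
print(len(triples))  # 2
# mem.ingest_triples(triples)  # would send them in one request
```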
#### `get_graph() -> GraphData`

Get the full graph (nodes, edges, communities).

#### `delete_edge(id, blacklist=False)`

Delete an edge. Optionally blacklist it to prevent re-creation.

#### `update_edge_weight(id, weight=None, increment=None)`

Set or increment an edge's weight.

#### `delete_node(id)`

Delete a node and all its connected edges.

#### `export_graph() -> ExportData`

Export the graph as portable JSON.

#### `import_graph(data) -> ImportResult`

Import a graph export (merges; does not delete existing data).

#### `list_communities() -> list[Community]`

List all detected communities.

#### `detect_communities() -> DetectCommunitiesResult`

Run Louvain community detection plus LLM summarization.
### FilesRag
#### `write_file(path, content, name=None) -> WriteFileResult`

Create or update a markdown memory file. Files are automatically chunked by `##` headings and indexed for semantic search.

```python
result = mem.write_file(
    "/docs/architecture.md",
    "# Architecture\n\n## Backend\nNode.js with Prisma...\n\n## Frontend\nNext.js...",
)
print(result.file.id)      # "clxx..."
print(result.chunk_count)  # 3
print(result.created)      # True
```

If a file already exists at the given path, its content is replaced and re-indexed.
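
To predict `chunk_count` before writing, it helps to see what "chunked by `##` headings" means mechanically. Here is a rough local approximation: split the document at each line starting with `## `, keeping any preamble (including the `#` title) as its own chunk. The exact server-side rules are an assumption; this sketch only mirrors the documented behavior:

```python
# Local approximation of heading-based chunking: split markdown at
# each "## " heading. ASSUMPTION: the preamble before the first "##"
# (e.g. the "# Title" line) counts as one chunk, which matches the
# chunk_count of 3 in the write_file example above.
import re

def chunk_by_h2(markdown: str) -> list[str]:
    """Return one chunk per ## section, plus the preamble if non-empty."""
    parts = re.split(r"(?m)^(?=## )", markdown)
    return [p for p in parts if p.strip()]

doc = "# Architecture\n\n## Backend\nNode.js...\n\n## Frontend\nNext.js..."
chunks = chunk_by_h2(doc)
print(len(chunks))  # 3: preamble, Backend, Frontend
```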
#### `list_files() -> list[MemoryFile]`

List all files in the current project.

```python
files = mem.list_files()
for f in files:
    print(f"{f.path} ({f.size} bytes)")
```

#### `read_file(id) -> FileWithContent`

Get a file with its full content.

```python
file = mem.read_file("file_id")
print(file.content)  # full markdown
```

#### `update_file(id, content, name=None) -> WriteFileResult`

Update a file's content (re-chunks and re-indexes).

```python
result = mem.update_file("file_id", "# Updated content\n...")
```

#### `delete_file(id)`

Delete a file and all its indexed chunks.
#### `search_files(query, limit=None, file_id=None) -> list[FileSearchResult]`

Semantic search across file chunks. Pass `file_id` to scope the search to a single file.

```python
results = mem.search_files("authentication flow", limit=5)
# Search within a single file:
# results = mem.search_files("auth", file_id="file_abc123")

for r in results:
    print(r.file["path"])
    for chunk in r.chunks:
        print(f"  {chunk.heading_title} ({chunk.score:.2f})")
        print(f"  {chunk.excerpt}")
```

### Projects

| Method | Description |
|---|---|
| `list_projects()` | List all projects |
| `create_project(name)` | Create a new project |
| `delete_project(id)` | Delete a project |
| `select_project(id)` | Switch the active project |
### Traces

| Method | Description |
|---|---|
| `list_traces(limit?, cursor?)` | List query traces with pagination |
| `get_trace(id)` | Get details for a specific trace |
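
`list_traces` (like the other `limit?`/`cursor?` endpoints below) paginates with an opaque cursor. A common pattern for draining such an endpoint is to loop until the cursor comes back empty. The page shape (items plus a next cursor) is an assumption here, and the fetcher is a stub so the sketch runs standalone; substitute `mem.list_traces` in real code:

```python
# Cursor-pagination pattern for endpoints like list_traces(limit, cursor).
# ASSUMPTIONS: each page returns (items, next_cursor) and next_cursor is
# None on the last page. fetch_page is a stub standing in for the client.

def fetch_page(limit, cursor):
    """Stub standing in for mem.list_traces(limit=..., cursor=...)."""
    data = ["t1", "t2", "t3", "t4", "t5"]
    start = int(cursor or 0)
    items = data[start:start + limit]
    next_cursor = str(start + limit) if start + limit < len(data) else None
    return items, next_cursor

def all_items(limit=2):
    """Drain every page by following the cursor until it is exhausted."""
    cursor, out = None, []
    while True:
        items, cursor = fetch_page(limit, cursor)
        out.extend(items)
        if cursor is None:
            break
    return out

print(all_items())  # ['t1', 't2', 't3', 't4', 't5']
```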
### Blacklist

| Method | Description |
|---|---|
| `list_blacklist(limit?, cursor?)` | List blacklisted triples |
| `add_to_blacklist(subject, predicate, object)` | Add a triple to the blacklist |
| `remove_from_blacklist(id)` | Remove a triple from the blacklist |
### Pending Facts

| Method | Description |
|---|---|
| `list_pending(limit?, cursor?)` | List pending facts |
| `approve_fact(id)` | Approve a pending fact |
| `reject_fact(id, blacklist?)` | Reject a pending fact |
| `approve_all()` | Approve all pending facts |
| `reject_all()` | Reject all pending facts |
### Usage

#### `get_usage() -> UsageInfo`

```python
usage = mem.get_usage()
print(usage.tier)                  # "FREE"
print(usage.current_facts)         # 42
print(usage.current_file_count)    # 3
print(usage.current_file_storage)  # 1024
```

### Health

#### `health() -> HealthResult`
## Error Handling

```python
from graphmem import GraphMem, GraphMemError

try:
    mem.read_file("nonexistent")
except GraphMemError as e:
    print(f"API error {e.status}: {e}")
    print(e.body)  # raw response body
```

## Rate Limiting

After each request, rate limit info is available on the client:

```python
mem.remember("some fact")
print(mem.rate_limit.remaining)  # requests remaining
print(mem.rate_limit.limit)      # total allowed per window
print(mem.rate_limit.reset)      # unix timestamp when the window resets
```
## Types

All types are exported from the top-level package:

```python
from graphmem import (
    # GraphRag
    RememberResult, ContextResult, SearchResult, Triple,
    GraphNode, GraphEdge, GraphData, Community,
    # FilesRag
    MemoryFile, FileWithContent, FileSearchResult, WriteFileResult,
    # Config
    RetryConfig, RateLimitInfo, UsageInfo,
)
```

## License

MIT
## File details

Details for the file `anura_graph-0.3.0.tar.gz`.

### File metadata

- Download URL: anura_graph-0.3.0.tar.gz
- Size: 11.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.6

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `28a4202131839070849d3506dbae72f904c6b0ebf682a30a3a0850438b1c98bb` |
| MD5 | `81f21375bea344b329f01156bfabb0a2` |
| BLAKE2b-256 | `102c529c50061eafcb0449c94f04b34d41686c8dc3f3416bcd7b10621db81251` |
## File details

Details for the file `anura_graph-0.3.0-py3-none-any.whl`.

### File metadata

- Download URL: anura_graph-0.3.0-py3-none-any.whl
- Size: 12.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.6

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `bacdafd7af717aa7676663c5551f8bc76713da726ceccbdd5e62b90a924520d7` |
| MD5 | `f918acc081bf46ec224855c509cf42b9` |
| BLAKE2b-256 | `880577f923b10d9fff70bc553f09d1b430c956544d59da2c3e61746e331503f0` |