anura-graph

Python client for the Anura Memory API.

Anura Memory provides two memory products for AI agents:

  • GraphRag — Knowledge graph with automatic triple extraction, deduplication, and hybrid retrieval
  • FilesRag — Markdown file storage with heading-based chunking and semantic search

Installation

pip install anura-graph

Quick Start

from graphmem import GraphMem

mem = GraphMem(api_key="gm_your_key_here")

# --- GraphRag ---
mem.remember("Alice is VP of Engineering at Acme Corp")
ctx = mem.get_context("What does Alice do?")

# --- FilesRag ---
mem.write_file("/notes/standup.md", "# Standup\n## 2026-02-21\n- Shipped auth module")
results = mem.search_files("auth module")

Configuration

from graphmem import GraphMem, RetryConfig

mem = GraphMem(
    api_key="gm_your_key_here",
    base_url="https://anuramemory.com",  # default
    retry=RetryConfig(
        max_retries=3,       # default
        base_delay=0.5,      # seconds, default
        max_delay=10.0,      # seconds, default
        retry_on=[429, 500, 502, 503, 504],  # default
    ),
    timeout=30.0,  # seconds, default
)
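Assuming the client backs off exponentially between retries, doubling from base_delay and capping at max_delay (an assumption about the retry policy, not stated above), the delay schedule for the defaults can be sketched as:

```python
def backoff_schedule(max_retries=3, base_delay=0.5, max_delay=10.0):
    """Exponential backoff delays: base_delay * 2**attempt, capped at max_delay."""
    return [min(base_delay * (2 ** attempt), max_delay) for attempt in range(max_retries)]

print(backoff_schedule())  # [0.5, 1.0, 2.0]
```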

The client can be used as a context manager:

with GraphMem(api_key="gm_your_key_here") as mem:
    mem.remember("Alice works at Acme")

API Reference

GraphRag

remember(text) -> RememberResult

Extract knowledge from text and store as triples in the graph.

result = mem.remember("Einstein was born in Ulm, Germany")
print(result.extracted_count)  # 1
print(result.merged_count)     # 1

If a new fact contradicts an existing one on a single-valued predicate (e.g., LIVES_IN, WORKS_AT), the old value is replaced and the conflict is returned:

r = mem.remember("Alice now works at Microsoft")
# r.conflicts → [ConflictResolution(subject_name="alice", predicate="WORKS_AT",
#                 old_object="google", new_object="microsoft", resolution="auto_recency")]

get_context(query, options?) -> ContextResult

Retrieve context from the knowledge graph.

from graphmem import ContextOptions

# JSON format (default)
ctx = mem.get_context("Einstein")
print(ctx.entities)  # [{ "name": "Albert Einstein" }, ...]

# Markdown format (ideal for LLM system prompts)
ctx = mem.get_context("Einstein", ContextOptions(format="markdown"))
print(ctx.content)  # "- Albert Einstein -> BORN_IN -> Ulm..."

# Hybrid mode (graph + vector + communities)
ctx = mem.get_context("Einstein", ContextOptions(mode="hybrid"))
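Since the markdown format is aimed at LLM system prompts, a small formatting helper can glue the two together. This is an illustrative sketch, not part of the SDK; the function name and default instructions are made up:

```python
def build_system_prompt(context_md: str,
                        instructions: str = "Answer using only the facts below.") -> str:
    """Compose an LLM system prompt from markdown graph context."""
    return f"{instructions}\n\n## Known facts\n{context_md}"

# prompt = build_system_prompt(
#     mem.get_context("Einstein", ContextOptions(format="markdown")).content
# )
```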

search(entity) -> SearchResult

Find an entity and its direct (1-hop) connections.

result = mem.search("Alice")
print(result.edges)

ingest_triples(triples) -> IngestResult

Ingest pre-formatted triples directly (no LLM extraction).

from graphmem import Triple

result = mem.ingest_triples([
    Triple(subject="TypeScript", predicate="CREATED_BY", object="Microsoft"),
])

get_graph() -> GraphData

Get the full graph (nodes, edges, communities).
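With the full graph in hand, simple analyses become one-liners. A sketch that counts the degree of each node, assuming each edge exposes source and target node ids (the field names are assumptions):

```python
from collections import Counter

def node_degrees(graph) -> Counter:
    """Count how many edges touch each node; assumes edges have .source/.target ids."""
    deg = Counter()
    for e in graph.edges:
        deg[e.source] += 1
        deg[e.target] += 1
    return deg

# graph = mem.get_graph()
# print(node_degrees(graph).most_common(5))  # five best-connected nodes
```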

delete_edge(id, blacklist=False)

Delete an edge. Optionally blacklist to prevent re-creation.

update_edge_weight(id, weight=None, increment=None)

Set or increment an edge's weight.

delete_node(id)

Delete a node and all its connected edges.

export_graph() -> ExportData

Export the graph as portable JSON.

import_graph(data) -> ImportResult

Import a graph export (merges, does not delete existing data).
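Together, export_graph and import_graph support a simple backup/restore round trip. A minimal sketch, assuming the export is JSON-serializable (object attributes are flattened with `default=vars` as a fallback; the exact shape of ExportData is an assumption):

```python
import json
from pathlib import Path

def backup_graph(mem, path: str) -> None:
    """Write a graph export to disk as JSON."""
    Path(path).write_text(json.dumps(mem.export_graph(), default=vars))

def restore_graph(mem, path: str):
    """Re-import a saved export (merges into the existing graph)."""
    return mem.import_graph(json.loads(Path(path).read_text()))
```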

list_communities() -> list[Community]

List all detected communities.

detect_communities() -> DetectCommunitiesResult

Run Louvain community detection + LLM summarization.

FilesRag

write_file(path, content, name=None) -> WriteFileResult

Create or update a markdown memory file. Files are automatically chunked by ## headings and indexed for semantic search.

result = mem.write_file(
    "/docs/architecture.md",
    "# Architecture\n\n## Backend\nNode.js with Prisma...\n\n## Frontend\nNext.js...",
)
print(result.file.id)       # "clxx..."
print(result.chunk_count)   # 3
print(result.created)       # True

If a file already exists at the given path, its content is replaced and re-indexed.

list_files() -> list[MemoryFile]

List all files in the current project.

files = mem.list_files()
for f in files:
    print(f"{f.path} ({f.size} bytes)")

read_file(id) -> FileWithContent

Get a file with its full content.

file = mem.read_file("file_id")
print(file.content)  # full markdown

update_file(id, content, name=None) -> WriteFileResult

Update a file's content (re-chunks and re-indexes).

result = mem.update_file("file_id", "# Updated content\n...")
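A common pattern is appending a new section rather than rewriting the whole file. A minimal sketch built on read_file and update_file (the helper itself is not part of the SDK):

```python
def append_section(mem, file_id: str, heading: str, body: str):
    """Append a new '## heading' section to an existing file and re-index it."""
    current = mem.read_file(file_id).content
    return mem.update_file(file_id, f"{current.rstrip()}\n\n## {heading}\n{body}\n")

# append_section(mem, "file_id", "2026-02-22", "- Fixed retry bug")
```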

delete_file(id)

Delete a file and all its indexed chunks.

search_files(query, limit=None, file_id=None) -> list[FileSearchResult]

Semantic search across file chunks. Pass file_id to scope the search to a single file.

results = mem.search_files("authentication flow", limit=5)
# Search within a single file:
# results = mem.search_files("auth", file_id="file_abc123")
for r in results:
    print(r.file["path"])
    for chunk in r.chunks:
        print(f"  {chunk.heading_title} ({chunk.score:.2f})")
        print(f"  {chunk.excerpt}")

Projects

Method                Description
list_projects()       List all projects
create_project(name)  Create a new project
delete_project(id)    Delete a project
select_project(id)    Switch the active project
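These compose naturally into a get-or-create helper. A sketch, assuming project objects expose .id and .name (the field names are assumptions):

```python
def get_or_create_project(mem, name: str):
    """Select the project with the given name, creating it first if needed."""
    for p in mem.list_projects():
        if p.name == name:
            mem.select_project(p.id)
            return p
    project = mem.create_project(name)
    mem.select_project(project.id)
    return project
```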

Traces

Method                        Description
list_traces(limit?, cursor?)  List query traces with pagination
get_trace(id)                 Get details for a specific trace
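Cursor pagination can be wrapped in a generator that drains all pages. A sketch, assuming each page object exposes .items and .next_cursor (the response shape is an assumption, not documented above):

```python
def iter_pages(fetch, limit=50):
    """Yield items across every page of a cursor-paginated list endpoint."""
    cursor = None
    while True:
        page = fetch(limit=limit, cursor=cursor)
        yield from page.items
        cursor = page.next_cursor
        if cursor is None:
            break

# for trace in iter_pages(mem.list_traces):
#     print(trace.id)
```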

Blacklist

Method                                        Description
list_blacklist(limit?, cursor?)               List blacklisted triples
add_to_blacklist(subject, predicate, object)  Add a triple to the blacklist
remove_from_blacklist(id)                     Remove a triple from the blacklist

Conflict Log

Method                           Description
list_conflicts(limit?, cursor?)  List conflict resolution log entries (newest first)

conflicts = mem.list_conflicts(limit=10)
for c in conflicts:
    print(f"{c.subject_name} {c.predicate}: {c.old_object} → {c.new_object}")

Pending Facts

Method                         Description
list_pending(limit?, cursor?)  List pending facts
approve_fact(id)               Approve a pending fact
reject_fact(id, blacklist?)    Reject a pending fact
approve_all()                  Approve all pending facts
reject_all()                   Reject all pending facts
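A typical moderation loop approves facts whose predicate is on an allowlist and rejects the rest. A sketch, assuming pending facts expose .id and .predicate and that list_pending() returns a plain list (both assumptions):

```python
def triage_pending(mem, allowed_predicates):
    """Approve allowlisted pending facts, reject everything else; returns counts."""
    approved = rejected = 0
    for fact in mem.list_pending():
        if fact.predicate in allowed_predicates:
            mem.approve_fact(fact.id)
            approved += 1
        else:
            mem.reject_fact(fact.id)
            rejected += 1
    return approved, rejected

# triage_pending(mem, {"WORKS_AT", "LIVES_IN"})
```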

Usage

get_usage() -> UsageInfo

usage = mem.get_usage()
print(usage.tier)                  # "FREE"
print(usage.current_facts)         # 42
print(usage.current_file_count)    # 3
print(usage.current_file_storage)  # 1024

Health

health() -> HealthResult

Error Handling

from graphmem import GraphMem, GraphMemError

try:
    mem.read_file("nonexistent")
except GraphMemError as e:
    print(f"API error {e.status}: {e}")
    print(e.body)  # raw response body

Rate Limiting

After each request, rate limit info is available:

mem.remember("some fact")
print(mem.rate_limit.remaining)  # requests remaining
print(mem.rate_limit.limit)      # total allowed per window
print(mem.rate_limit.reset)      # unix timestamp when window resets
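These fields are enough to pause proactively before hitting the limit. A sketch (the helper is not part of the SDK):

```python
import time

def wait_if_throttled(rate_limit, min_remaining=1, now=None):
    """Seconds to sleep before the next request: 0 while budget remains,
    otherwise the time until the window resets (reset is a unix timestamp)."""
    now = time.time() if now is None else now
    if rate_limit is None or rate_limit.remaining >= min_remaining:
        return 0.0
    return max(0.0, rate_limit.reset - now)

# time.sleep(wait_if_throttled(mem.rate_limit))
```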

Types

All types are exported from the top-level package:

from graphmem import (
    # GraphRag
    RememberResult, ConflictResolution, ConflictLogEntry,
    ContextResult, SearchResult, Triple,
    GraphNode, GraphEdge, GraphData, Community,
    # FilesRag
    MemoryFile, FileWithContent, FileSearchResult, WriteFileResult,
    # Config
    RetryConfig, RateLimitInfo, UsageInfo,
)

License

MIT
