
HGCacheMem

A robust architectural approach to improve contextual coherence and retrieval speed in agentic systems.

LLM agents often encounter a significant trade-off between retrieval speed and contextual coherence. Semantic caching systems offer sub-millisecond retrieval speeds but are plagued by "context ignorance," resulting in inaccurate cache hits. Graph-based and other complex memory systems yield high contextual accuracy but come with latency drawbacks that impact real-time user experience.

HGCacheMem is a hybrid architecture that pairs vector similarity search with a topological Graph Validator: before a cached answer is served, the validator checks it against the user's active context in a knowledge graph.

On the RGB Noise Robustness benchmark, HGCacheMem achieves a contextual coherence accuracy of 97.7%, substantially outperforming both plain caching alternatives and complex memory systems, with a mean retrieval latency of 23.66 ms, demonstrating that structural validation is achievable within real-time interaction thresholds.

How It Works

User Query
    │
    ▼
┌───────────────┐     ┌───────────────────┐     ┌──────────────┐
│ Vector Cache  │────▶│ Graph Validator   │────▶│   Response   │
│ (Chroma)      │     │ (Neo4j path check │     │              │
│ similarity    │     │  weight > 0.6)    │     │ Cache hit OR │
│ search        │     │                   │     │ LLM fallback │
└───────────────┘     └───────────────────┘     └──────────────┘
  1. A user query hits the vector cache (Chroma) via similarity search.
  2. The Graph Validator runs a Cypher shortest-path query on a Neo4j knowledge graph to verify that the candidate answer is structurally connected to the user's current context anchor, with every edge weight exceeding a configurable threshold (default 0.6).
  3. If the graph validation passes → return the cached answer (fast path).
  4. If it fails → fall back to LLM generation grounded in graph context (safe path).
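The validation rule in step 2 can be sketched in plain Python over an adjacency map. This is a conceptual stand-in for the library's Cypher query, not its actual implementation; the graph shape and node names below are illustrative:

```python
def path_valid(graph, active, candidate, weight_threshold=0.6, max_hops=2):
    """Return True if `candidate` is reachable from `active` within
    `max_hops` edges, with every traversed edge weight above the threshold."""
    if active == candidate:
        return True
    frontier = {active}
    for _ in range(max_hops):
        next_frontier = set()
        for node in frontier:
            for neighbour, weight in graph.get(node, []):
                if weight > weight_threshold:
                    if neighbour == candidate:
                        return True
                    next_frontier.add(neighbour)
        frontier = next_frontier
    return False

# Illustrative knowledge graph: edges stored as (neighbour, weight) pairs.
kg = {
    "Project Alpha": [("budget_doc", 0.8), ("Project Beta", 0.3)],
    "budget_doc": [("Q3 figures", 0.7)],
}
path_valid(kg, "Project Alpha", "budget_doc")    # one strong hop -> True
path_valid(kg, "Project Alpha", "Project Beta")  # weight 0.3 below 0.6 -> False
```

Note that the weak 0.3 edge blocks "Project Beta" even though it is directly connected, which is exactly the behaviour that prevents cross-context cache hits.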

Benchmark Results

Evaluated on the RGB Noise Robustness benchmark (300 cases):

Metric                                   Value
Contextual Coherence Accuracy            97.7%
Trapped (cross-context hallucination)    0.0%
Mean Retrieval Latency                   23.66 ms
p50 Latency                              17.02 ms
p90 Latency                              34.27 ms
p95 Latency                              50.16 ms

Installation

pip install hgcachemem

Quick Start

from hgcachemem import GraphValidator, connect_graph, connect_cache

# Connect to your stores
graph = connect_graph(url="bolt://localhost:7687", username="neo4j", password="secret")
cache = connect_cache(persist_directory="./chroma_db")

# Create the Graph Validator
graph_validator = GraphValidator(graph=graph, weight_threshold=0.6)

# Validate a cache hit before returning it
results = cache.similarity_search("What is the budget?", k=1)

if not results:
    print("No cached candidate for this query.")
elif graph_validator.validate(
    active_entity="Project Alpha",
    candidate_entity=results[0].metadata["linked_entity"],
):
    print("Cache hit is valid for this context!")
    print(results[0].metadata["answer"])
else:
    print("Blocked: answer belongs to a different context.")

Full Pipeline Example

See examples/langchain_example.py for a complete LangChain integration with anchor resolution, vector search, Graph Validator validation, and LLM fallback.
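As a rough sketch of that pipeline's control flow, the hit/fallback decision can be written with the cache search, validator, and LLM injected as callables; the stand-ins below let the flow run without Neo4j, Chroma, or an API key. `CachedDoc`, the `answer` helper, and the callable signatures are illustrative, not the library's API:

```python
from dataclasses import dataclass

@dataclass
class CachedDoc:
    metadata: dict

def answer(query, active_entity, search, validate, llm_fallback):
    """Return the cached answer when a candidate survives graph validation
    (fast path); otherwise fall back to grounded generation (safe path)."""
    results = search(query)
    if results:
        candidate = results[0].metadata["linked_entity"]
        if validate(active_entity, candidate):
            return results[0].metadata["answer"]  # fast path: validated hit
    return llm_fallback(query, active_entity)     # safe path: LLM fallback

doc = CachedDoc(metadata={"linked_entity": "budget_doc",
                          "answer": "The budget is $1.2M."})
hit = answer("What is the budget?", "Project Alpha",
             search=lambda q: [doc],
             validate=lambda a, c: True,
             llm_fallback=lambda q, e: "(generated answer)")
miss = answer("What is the budget?", "Project Alpha",
              search=lambda q: [doc],
              validate=lambda a, c: False,
              llm_fallback=lambda q, e: "(generated answer)")
```

Here `hit` carries the cached answer and `miss` the fallback text; in the real pipeline the validator and fallback would be `graph_validator.validate` and `generate_response`.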

API Reference

GraphValidator(graph, weight_threshold=0.6, max_hops=2)

The core validation class.

  • graph — A LangChain Neo4jGraph instance.
  • weight_threshold — Minimum edge weight required on every relationship in the path (default 0.6).
  • max_hops — Maximum path length for the shortest path check (default 2).

graph_validator.validate(active_entity, candidate_entity) -> bool

Returns True if the candidate is contextually valid for the current anchor.

connect_graph(url, username, password) -> Neo4jGraph

Factory for a connected Neo4j graph instance.

connect_cache(collection_name, persist_directory, embedding_model) -> Chroma

Factory for a Chroma vector store with HuggingFace embeddings.

SessionState(user_id)

Tracks the user's current context anchor across a conversation.

  • update_anchor(entity_id) — Move focus to a new entity.
  • active_entity_id — The current anchor (or None).
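For code that consumes this interface, a minimal stand-in mirroring the documented surface can be useful in tests when the library is not installed. `StubSessionState` is hypothetical, not the shipped class:

```python
class StubSessionState:
    """Minimal stand-in for the documented SessionState surface."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.active_entity_id = None  # no anchor until the first update

    def update_anchor(self, entity_id):
        self.active_entity_id = entity_id

session = StubSessionState("user-42")
session.update_anchor("Project Alpha")  # focus moves to Project Alpha
```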

resolve_anchor(user_query, known_entities, llm=None) -> str | None

Uses an LLM to extract the entity from a query and match it against known graph nodes.

generate_response(user_query, context, llm=None) -> str

Fallback LLM generation grounded in graph-retrieved context.

get_context_neighbors(graph, active_entity, limit=10) -> str

Retrieves neighbours of the active entity from the graph as formatted context.

get_known_entities(graph) -> list[str]

Returns all entity names present in the graph.

Requirements

  • Python >= 3.10
  • Neo4j >= 5.0 (running instance)
  • An OpenAI API key (for anchor resolution and fallback generation)

License

MIT

Download files

Source Distribution

hgcachemem-0.1.0.tar.gz (7.1 kB)

Built Distribution

hgcachemem-0.1.0-py3-none-any.whl (8.6 kB)

File details

Details for the file hgcachemem-0.1.0.tar.gz.

File metadata

  • Download URL: hgcachemem-0.1.0.tar.gz
  • Size: 7.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Algorithm     Hash digest
SHA256        75da6bace014918b6272e4d1c67d3df538df4a7351e02f0ccb7b8d1b7b9d8f8a
MD5           0291a074dfe85ccca92e75278d096844
BLAKE2b-256   b0468c5ea050b81250b6295947beb5c9bdb58bead50038a11eefb1ee481d6e62

File details

Details for the file hgcachemem-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: hgcachemem-0.1.0-py3-none-any.whl
  • Size: 8.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Algorithm     Hash digest
SHA256        f57b4e083576911f7b904c2eb21b9e9af461b8f7cf125c294f76f1f8e46c1876
MD5           8877d0f4d62b8fecd7c0a5cd4ab9e2c0
BLAKE2b-256   50ce94ec618305ed3a30178c9b87c4c5befd74fb144d3bc90a3895a13287b903
