SmartMemo

Semantic memory for LLM agent calls with an equivalence-first cache architecture.

SmartMemo is a semantic memory and caching layer for LLM agent calls. Its core thesis is simple: cosine similarity is a useful candidate selector, but it is not semantic equivalence. SmartMemo uses embedding search to find likely cache candidates, and can then apply a learned equivalence classifier to decide whether a cached response is safe to reuse.

The current implementation ships the baseline and the first classifier-gated cache path:

  • async SmartMemo.get_or_call(...)
  • SQLite persistence
  • embedding provider protocol
  • FAISS-backed vector search when smartmemo[ml] is installed
  • dependency-light in-memory search for tests and smoke demos
  • measured cosine-baseline benchmark fixtures for customer-support prompts
  • classifier training, evaluation, checkpoint inference, and optional classifier-gated hits
  • durable feedback export for classifier retraining data

By default, SmartMemo keeps the lightweight cosine baseline. When you provide a classifier checkpoint, cosine search becomes the candidate selector and the learned classifier makes the final cache-hit decision. SmartMemo does not ship a production pretrained classifier yet.
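The two-stage decision described above can be sketched in a few lines of plain Python. Everything here is a stand-in: the brute-force cosine search, the 0.85 candidate threshold, and the classifier callable are illustrative assumptions, not SmartMemo's actual internals.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def lookup(query_vec, cache, candidate_threshold=0.85, classifier=None):
    """Two-stage lookup sketch: cosine search proposes, the classifier disposes.

    `cache` maps cached prompt vectors (tuples) to responses. With no
    classifier, the cosine threshold alone decides -- the baseline path.
    """
    best_vec, best_sim = None, -1.0
    for vec in cache:
        sim = cosine(query_vec, vec)
        if sim > best_sim:
            best_vec, best_sim = vec, sim
    if best_vec is None or best_sim < candidate_threshold:
        return None  # no plausible candidate: cache miss
    if classifier is None:
        return cache[best_vec]  # baseline: cosine similarity alone decides
    # Classifier-gated path: the learned model makes the final reuse call.
    if classifier(query_vec, best_vec) >= 0.5:
        return cache[best_vec]
    return None
```

The point of the split is that the expensive, accurate decision runs only on the single best candidate, while the cheap vector search prunes everything else.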

Install

pip install smartmemo            # baseline install, dependency-light
pip install "smartmemo[ml]"      # adds the ML extras (FAISS, SentenceTransformers)

For local development:

uv sync --all-extras
uv run pytest
uv run ruff check
uv run pyright

Minimal Example

import asyncio

from smartmemo import SmartMemo

cache = SmartMemo(domain="customer-support")

# Stand-in for a real LLM call; any async callable works here.
async def call_llm(prompt: str) -> str:
    return "fresh LLM response"

async def main() -> None:
    result = await cache.get_or_call(
        prompt="Summarize this customer's latest billing ticket",
        llm_function=call_llm,
    )
    print(result.response)
    print(result.was_cache_hit)

asyncio.run(main())

Baseline Benchmark

The customer-support benchmark is designed to expose the baseline failure mode: prompts about the same object can require opposite actions.

uv run python benchmarks/cosine_baseline_customer_support.py

The numbers from that benchmark are the only performance claims this implementation makes.
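To make that failure mode concrete, here is a toy bag-of-words cosine similarity, a deliberately crude stand-in for real embedding models: two prompts about the same refund request that demand opposite actions still score as highly similar.

```python
import math
from collections import Counter


def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over lowercase bag-of-words token counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)


# Same object (a refund request), opposite required actions:
approve = "approve the customer's refund request"
reject = "reject the customer's refund request"
sim = bow_cosine(approve, reject)  # 4 of 5 tokens shared -> similarity 0.8
```

Any cosine threshold at or below 0.8 would reuse the "approve" answer for the "reject" prompt; the benchmark measures exactly this class of error.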

Classifier Pipeline

SmartMemo includes a trainable pair classifier over prompt embeddings:

uv run smartmemo train-classifier \
  --data data/fixtures/customer_support_pairs.jsonl \
  --out models/classifier-smoke.pt \
  --embedding-provider hash \
  --embedding-dim 64 \
  --epochs 2

Use the hash provider only for smoke checks. Real experiments should install smartmemo[ml] and use the SentenceTransformers embedding provider.
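As background on what such a pair classifier computes, here is one common design sketched purely for illustration: a logistic score over symmetric features of the two prompt embeddings. SmartMemo's actual architecture, features, and training loop may differ.

```python
import math


def pair_features(e1, e2):
    """Symmetric pair features: elementwise |difference| and product.

    Symmetry matters: score(a, b) should equal score(b, a), because
    equivalence does not depend on which prompt was cached first.
    """
    diffs = [abs(x - y) for x, y in zip(e1, e2)]
    prods = [x * y for x, y in zip(e1, e2)]
    return diffs + prods


def score_pair(e1, e2, weights, bias=0.0):
    """Logistic score in [0, 1]; >= 0.5 means 'equivalent enough to reuse'."""
    z = bias + sum(w * f for w, f in zip(weights, pair_features(e1, e2)))
    return 1.0 / (1.0 + math.exp(-z))
```

The |difference| terms push the score down as embeddings diverge, and the product terms reward aligned dimensions; a trained model learns the weights from labeled prompt pairs.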

Use a trained checkpoint for classifier-gated cache decisions:

from smartmemo import ClassifierConfig, SmartMemo

cache = SmartMemo(
    domain="customer-support",
    classifier=ClassifierConfig(model_path="models/classifier-smoke.pt"),
)

When the classifier is active, CacheResult.classifier_score is populated for classifier hits and classifier-gated misses that had candidates.

Feedback Export

SmartMemo records cache-hit lookups so explicit feedback can become training data:

# Inside an async function, after a lookup:
result = await cache.get_or_call(
    prompt="Approve the customer's refund request",
    llm_function=call_llm,
)

# `user_rejected_answer` stands for your own signal that the cached
# answer was wrong for this prompt.
if result.was_cache_hit and user_rejected_answer:
    await cache.report_bad_hit(result.query_id, reason="wrong refund decision")

written = cache.export_feedback_pairs("data/feedback_pairs.jsonl")
print(written)

The exported JSONL uses the same prompt-pair shape accepted by smartmemo train-classifier. Feedback export is manual training data preparation; SmartMemo does not automatically retrain or deploy classifiers yet.
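As a sketch of how an exported JSONL file could be consumed downstream, assuming one JSON object per line. The field names prompt_a, prompt_b, and label below are hypothetical placeholders, not SmartMemo's documented schema; inspect an actual export for the real keys.

```python
import json


def load_pairs(path):
    """Read a JSONL file of prompt pairs, one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# Hypothetical record shape: two prompts plus an equivalence label,
# where a reported bad hit becomes a negative (label 0) example.
example = {
    "prompt_a": "Approve the customer's refund request",
    "prompt_b": "Reject the customer's refund request",
    "label": 0,
}
```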

Release

Version 0.0.3 is configured for PyPI as smartmemo. The repository publishes through GitHub Actions trusted publishing from .github/workflows/publish-pypi.yml with the pypi environment.

git tag v0.0.3
git push origin v0.0.3

Pushing that tag builds the source distribution and wheel, then uploads them to PyPI.
