
🚀 llm-cachex

Drop-in caching + retrieval layer for LLM applications (RAG, agents, chatbots).

Stop paying for repeated LLM calls. Automatically reuse responses using exact + semantic caching with zero changes to your business logic.


✨ Why llm-cachex?

Most LLM apps repeatedly call the model for:

  • Slightly rephrased questions
  • Agent/tool loops
  • Chat history variations

👉 Each repeated call adds latency and costs money.

llm-cachex fixes that automatically.


🔥 Features

  • ⚡ Exact cache (Redis-backed)
  • 🧠 Semantic cache (FAISS + embeddings)
  • 🔍 Hybrid retrieval (BM25 + vector search)
  • 🧬 Cross-encoder reranking (high-quality matches)
  • 🤖 Agent + tool support
  • 🧵 Memory-aware context support
  • 💰 Token + cost tracking
  • 🧩 Plug-and-play decorator API

๐Ÿ—๏ธ Architecture

User Query
   โ†“
llm_cache decorator
   โ”œโ”€โ”€ Exact Cache (Redis)
   โ”œโ”€โ”€ Semantic Engine
   โ”‚     โ”œโ”€โ”€ FAISS (vector)
   โ”‚     โ”œโ”€โ”€ BM25 (lexical)
   โ”‚     โ””โ”€โ”€ CrossEncoder (rerank)
   โ””โ”€โ”€ LLM / Agent
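
To make the flow concrete, here is a minimal, self-contained sketch of that layering: exact lookup first, semantic fallback second, the model last. This is an illustration, not llm-cachex's internals; ToySemanticIndex uses plain string similarity as a stand-in for FAISS + embeddings, and all names are hypothetical.

import difflib
import functools
import hashlib

class ToySemanticIndex:
    """Stand-in for the FAISS + embedding index; string similarity only."""
    def __init__(self):
        self.entries = []  # (prompt, answer) pairs

    def add(self, prompt, answer):
        self.entries.append((prompt, answer))

    def nearest(self, prompt):
        # Return (score, answer) for the closest stored prompt, or None.
        best = None
        for stored, answer in self.entries:
            score = difflib.SequenceMatcher(None, prompt.lower(), stored.lower()).ratio()
            if best is None or score > best[0]:
                best = (score, answer)
        return best

def toy_cache(exact_cache, semantic_index, threshold=0.7):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt):
            key = hashlib.sha256(prompt.encode()).hexdigest()
            if key in exact_cache:                # tier 1: exact hit
                return exact_cache[key]
            hit = semantic_index.nearest(prompt)  # tier 2: semantic hit
            if hit and hit[0] >= threshold:
                return hit[1]
            answer = fn(prompt)                   # miss: call the LLM/agent
            exact_cache[key] = answer
            semantic_index.add(prompt, answer)
            return answer
        return wrapper
    return decorator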

📦 Installation

pip install -e .

(Installs from a local checkout. The package is also published on PyPI as llmcachex-ai.)


🚀 Quick Start

from llm_cachex import llm_cache, CacheConfig

# `llm` stands in for your actual model call (OpenAI client, local model, etc.)
@llm_cache(CacheConfig())
def ask_llm(prompt):
    return llm(prompt)

print(ask_llm("What is AI?"))        # LLM call
print(ask_llm("Explain AI"))         # Semantic cache hit

🤖 Agent Example

Works seamlessly with tools:

@llm_cache(CacheConfig())
def agent(raw_query, full_prompt):
    # Toy calculator tool (demo only: eval on untrusted input is unsafe)
    if "calculate" in raw_query:
        return str(eval(raw_query.replace("calculate", "").strip()))

    # Toy search tool
    if "search" in raw_query:
        return f"[TOOL SEARCH RESULT] {raw_query}"

    # Everything else goes to the LLM
    return llm(full_prompt)
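
A quick usage sketch (hypothetical queries; the tool branches return before the model is ever called, and repeated or rephrased queries are served from cache):

print(agent("calculate 2 + 2", ""))      # tool path: "4"
print(agent("search llm caching", ""))   # tool path: "[TOOL SEARCH RESULT] ..."
print(agent("hello", "You are a helpful assistant. User says: hello"))  # LLM call, then cached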

🧠 Semantic Cache (what makes this powerful)

Unlike basic exact-match caching, this system treats semantically equivalent prompts as the same query:

"What is AI?"
"Explain artificial intelligence"

👉 the second prompt returns the cached answer (no LLM call).
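
Under the hood this works by embedding prompts and comparing them. llm-cachex's embedding model isn't specified here; as an illustration, here is how such a match can be scored with sentence-transformers (the model name is an assumption, and the 0.7 threshold mirrors the default config below):

from sentence_transformers import SentenceTransformer, util

# Illustrative embedder; llm-cachex's actual model may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

a, b = "What is AI?", "Explain artificial intelligence"
emb_a, emb_b = model.encode([a, b])

score = util.cos_sim(emb_a, emb_b).item()
if score >= 0.7:  # similarity_threshold from CacheConfig
    print(f"semantic hit ({score:.2f}): reuse the cached answer")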


⚙️ Configuration

CacheConfig(
    enable_exact=True,           # Redis-backed exact cache
    enable_semantic=True,        # FAISS + embedding cache
    similarity_threshold=0.7,    # minimum similarity for a semantic hit
    top_k=3,                     # candidates retrieved for reranking
    model_name="gpt-4o-mini",
    enable_metrics=True,
    enable_token_cost=True
)
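
For example, to make semantic matching stricter (fewer false hits, more LLM calls), raise the threshold. A sketch, assuming the keyword arguments behave as listed above:

strict = CacheConfig(
    enable_exact=True,
    enable_semantic=True,
    similarity_threshold=0.9,  # stricter than the 0.7 shown above
)

@llm_cache(strict)
def ask_llm(prompt):
    return llm(prompt)  # `llm` is your model call, as in Quick Start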

📊 Metrics

from llm_cachex import metrics

print(metrics.summary())

Example output; hit_rate is hits / (hits + misses) as a percentage (2 / 3 ≈ 66.67 here):

{
  'hits': 2,
  'misses': 1,
  'hit_rate': 66.67,
  'avg_llm_latency_ms': 2000,
  'avg_cache_latency_ms': 30,
  'total_cost_rupees': 0.01
}

🧪 Examples

Run demos:

python examples/basic.py
python examples/rag_demo.py
python examples/agent_demo.py
python examples/strict_test.py

📁 Project Structure

llm_cachex/
├── api/            # decorator layer
├── core/           # cache, metrics, memory
├── semantic/       # hybrid search + reranker
├── embedding/      # embeddings
├── index/          # FAISS index
├── similarity/     # similarity utils
└── utils/          # helpers

🧭 Roadmap

  • Async support
  • Streaming support
  • Batch inference
  • Multi-model caching
  • Pluggable vector DBs (Chroma / Pinecone)
  • Observability dashboard

🤝 Contributing

PRs welcome. Open an issue for discussion.


📜 License

MIT License


👤 Author

Himanshu Singh


⭐ If this helps you

Give a star. It helps the project grow.
