🚀 llmcachex-ai


Drop-in semantic + exact caching layer for LLM applications (RAG, agents, chatbots)

Cut LLM costs by up to 80% and reduce latency by avoiding repeated model calls through intelligent caching.


⚡ Installation

pip install llmcachex-ai

✨ Why llmcachex-ai?

Most LLM applications repeatedly call the model for:

  • Slightly rephrased queries
  • Agent/tool loops
  • Chat history variations

This leads to higher latency and unnecessary cost.

llmcachex-ai solves this by automatically caching responses using exact and semantic matching.


🔥 Features

  • ⚡ Exact cache (Redis-backed)
  • 🧠 Semantic cache (FAISS + embeddings)
  • 🔍 Hybrid retrieval (BM25 + vector search)
  • 🧬 Cross-encoder reranking (high-quality matches)
  • 🤖 Works with agents and tools
  • 🧵 Memory-aware context support
  • 💰 Token usage and cost tracking
  • 🧩 Plug-and-play decorator API

๐Ÿ—๏ธ How It Works

User Query
   โ†“
llm_cache decorator
   โ”œโ”€โ”€ Exact Cache (Redis)
   โ”œโ”€โ”€ Semantic Engine
   โ”‚     โ”œโ”€โ”€ FAISS (vector)
   โ”‚     โ”œโ”€โ”€ BM25 (lexical)
   โ”‚     โ””โ”€โ”€ CrossEncoder (rerank)
   โ””โ”€โ”€ LLM / Agent
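
The same flow as a small, self-contained sketch. This mirrors the idea, not the package's internals; the rank_bm25 / sentence-transformers usage and model names are illustrative stand-ins:

import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, CrossEncoder

cached_queries = [
    "What is AI?",
    "How do I bake bread?",
    "Explain transformers in NLP",
]

bm25 = BM25Okapi([q.lower().split() for q in cached_queries])    # lexical index
encoder = SentenceTransformer("all-MiniLM-L6-v2")                # dense vectors
vectors = encoder.encode(cached_queries, normalize_embeddings=True)
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # final rerank

def best_cached_match(query, threshold=0.7):
    lexical = bm25.get_scores(query.lower().split())
    qvec = encoder.encode([query], normalize_embeddings=True)[0]
    dense = vectors @ qvec                       # cosine similarity (unit vectors)
    # Blend lexical and semantic signals, then rerank a shortlist.
    blended = 0.5 * (lexical / (lexical.max() + 1e-9)) + 0.5 * dense
    top = np.argsort(blended)[::-1][:3]
    scores = reranker.predict([(query, cached_queries[i]) for i in top])
    best = top[int(np.argmax(scores))]
    return cached_queries[best] if dense[best] >= threshold else None

print(best_cached_match("Explain artificial intelligence"))  # likely "What is AI?"

Only if no candidate clears the threshold does the request fall through to the LLM or agent.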

🚀 Quick Start

from llm_cachex import llm_cache, CacheConfig

@llm_cache(CacheConfig())
def ask_llm(prompt):
    return llm(prompt)  # `llm` is your own model-calling function (example below)

print(ask_llm("What is AI?"))      # LLM call
print(ask_llm("Explain AI"))       # Semantic cache hit

🤖 Agent Example

Works seamlessly with tools:

@llm_cache(CacheConfig())
def agent(raw_query, full_prompt):
    if "calculate" in raw_query:
        # Demo only: eval() is unsafe on untrusted input
        return str(eval(raw_query.replace("calculate", "").strip()))

    if "search" in raw_query:
        return f"[TOOL SEARCH RESULT] {raw_query}"

    return llm(full_prompt)

🧠 Semantic Cache (Why it's powerful)

"What is AI?"
"Explain artificial intelligence"

Both return the same cached response, with no additional LLM call required.
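
A rough illustration of why this works (the model name is an assumption for the sketch, not necessarily what the package ships with): the two queries embed to nearby vectors, so their cosine similarity clears the threshold.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
a, b = model.encode(
    ["What is AI?", "Explain artificial intelligence"],
    normalize_embeddings=True,
)
print(float(a @ b))  # typically clears a 0.7 threshold -> cache hit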


โš™๏ธ Configuration

CacheConfig(
    enable_exact=True,          # exact-match cache (Redis)
    enable_semantic=True,       # semantic cache (FAISS + embeddings)
    similarity_threshold=0.7,   # minimum similarity for a semantic hit
    top_k=3,                    # candidates retrieved per lookup
    model_name="gpt-4o-mini",   # model used for token/cost accounting
    enable_metrics=True,        # collect hit/miss and latency stats
    enable_token_cost=True      # track token usage and spend
)
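
A higher similarity_threshold trades hit rate for precision. For example, a stricter per-function config (a sketch using only the options shown above):

strict = CacheConfig(similarity_threshold=0.9, top_k=1)

@llm_cache(strict)
def ask_llm_strict(prompt):
    return llm(prompt)  # only near-identical queries will hit the cache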

📊 Metrics

from llm_cachex import metrics

print(metrics.summary())

Example output:

{
  "hits": 2,
  "misses": 1,
  "hit_rate": 66.67,
  "avg_llm_latency_ms": 2000,
  "avg_cache_latency_ms": 30,
  "total_cost_rupees": 0.01
}
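
Since summary() returns a plain dict, you can poll it to watch cache effectiveness. The key names below come from the example output above; the threshold check itself is just an illustration:

stats = metrics.summary()
if stats["hit_rate"] < 50:
    # A low hit rate can mean similarity_threshold is too strict
    # for your traffic, or that queries are genuinely diverse.
    print(f"Hit rate only {stats['hit_rate']}% - consider tuning the threshold")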

🎯 Use Cases

  • RAG pipelines
  • AI agents & tool execution
  • Chatbots with memory
  • Cost optimization for LLM APIs
  • High-frequency query systems

⚡ Performance Impact

Typical improvements (workload-dependent):

  • 2–10x latency reduction
  • 50–80% cost savings

๐Ÿ“ Project Structure

llm_cachex/
โ”œโ”€โ”€ api/            # decorator layer
โ”œโ”€โ”€ core/           # cache, metrics, memory
โ”œโ”€โ”€ semantic/       # hybrid search + reranker
โ”œโ”€โ”€ embedding/      # embeddings
โ”œโ”€โ”€ index/          # FAISS index
โ”œโ”€โ”€ similarity/     # similarity utils
โ”œโ”€โ”€ utils/          # helpers

🧭 Roadmap

  • Async support
  • Streaming support
  • Batch inference
  • Multi-model caching
  • Pluggable vector DBs (Chroma / Pinecone)
  • Observability dashboard

๐Ÿค Contributing

Contributions are welcome. Open an issue to discuss ideas or submit a PR.


📜 License

MIT License


👤 Author

Himanshu Singh


โญ Support

If this project helps you, consider giving it a star โญ on GitHub.
