🚀 llmcachex-ai

pip install llmcachex-ai

Drop-in caching + retrieval layer for LLM applications (RAG, agents, chatbots)

Stop paying for repeated LLM calls. Automatically reuse responses using exact + semantic caching with zero changes to your business logic.


⚡ Installation

pip install llmcachex-ai

✨ Why llmcachex-ai?

Most LLM applications repeatedly call the model for:

  • Slightly rephrased questions
  • Agent/tool loops
  • Chat history variations

👉 This leads to higher latency and unnecessary cost.

llmcachex-ai solves this automatically by caching intelligently.
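The core idea behind exact caching can be shown in a few lines. This is a hand-rolled sketch to illustrate the principle, not the library's implementation (which is Redis-backed):

```python
import hashlib

_cache = {}  # in-memory stand-in for the Redis backend

def cached_call(prompt, llm_fn):
    """Return the cached response for an identical prompt, else call the LLM once."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = llm_fn(prompt)  # cache miss: pay for exactly one call
    return _cache[key]                # cache hit: free and fast
```

Every repeat of a byte-identical prompt is then served from the cache instead of the model.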


🔥 Features

  • ⚡ Exact cache (Redis-backed)
  • 🧠 Semantic cache (FAISS + embeddings)
  • 🔍 Hybrid retrieval (BM25 + vector search)
  • 🧬 Cross-encoder reranking
  • 🤖 Agent + tool compatible
  • 🧵 Memory-aware context support
  • 💰 Token + cost tracking
  • 🧩 Plug-and-play decorator API
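Hybrid retrieval typically blends a lexical (BM25) score with a vector-similarity score. A common fusion looks like the following; the weighting and the 0.5 default are an assumption for illustration, not the library's actual formula:

```python
def hybrid_score(bm25_score, vector_score, alpha=0.5):
    """Blend a normalized lexical (BM25) score with a vector similarity score.

    alpha weights the vector side; 0.5 treats both signals equally.
    """
    return alpha * vector_score + (1 - alpha) * bm25_score

# Rank two hypothetical candidates by blended score
candidates = {
    "doc about AI basics": hybrid_score(bm25_score=0.8, vector_score=0.9),
    "doc about cooking":   hybrid_score(bm25_score=0.1, vector_score=0.2),
}
best = max(candidates, key=candidates.get)
```

A cross-encoder reranker would then rescore only the top few blended candidates, since it is accurate but expensive.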

๐Ÿ—๏ธ How It Works

User Query
   ↓
llm_cache decorator
   ├── Exact Cache (Redis)
   ├── Semantic Engine
   │     ├── FAISS (vector)
   │     ├── BM25 (lexical)
   │     └── CrossEncoder (rerank)
   └── LLM / Agent
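The decision flow in the diagram can be sketched as below. This is a toy model of the lookup order (exact → semantic → LLM) with a string-similarity stand-in for the real FAISS/BM25 engine; the actual internals may differ:

```python
from difflib import SequenceMatcher

class TinySemanticIndex:
    """Toy stand-in for the FAISS + BM25 engine, using string similarity."""
    def __init__(self):
        self.entries = []  # list of (prompt, response) pairs

    def add(self, prompt, response):
        self.entries.append((prompt, response))

    def best_match(self, query):
        scored = [(SequenceMatcher(None, query.lower(), p.lower()).ratio(), r)
                  for p, r in self.entries]
        return max(scored, default=(0.0, None))

def lookup(query, exact_cache, index, llm, threshold=0.7):
    """Lookup order mirrored from the diagram: exact -> semantic -> LLM."""
    if query in exact_cache:                   # 1. exact hit: byte-identical prompt
        return exact_cache[query]
    score, response = index.best_match(query)  # 2. semantic hit: close-enough prompt
    if response is not None and score >= threshold:
        return response
    response = llm(query)                      # 3. miss: call the model, fill both caches
    exact_cache[query] = response
    index.add(query, response)
    return response
```

The important property is that the model is only reached after both cache layers miss.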

🚀 Quick Start

from llmcachex_ai import llm_cache, CacheConfig

@llm_cache(CacheConfig())
def ask_llm(prompt):
    return llm(prompt)  # `llm` is your own model-calling function

print(ask_llm("What is AI?"))      # LLM call
print(ask_llm("Explain AI"))       # Semantic cache hit

🤖 Agent Example

Works seamlessly with tools:

@llm_cache(CacheConfig())
def agent(raw_query, full_prompt):
    if "calculate" in raw_query:
        # eval is fine for a demo, but never use it on untrusted input
        return str(eval(raw_query.replace("calculate", "").strip()))

    if "search" in raw_query:
        return f"[TOOL SEARCH RESULT] {raw_query}"

    return llm(full_prompt)

🧠 Semantic Cache (Why it's powerful)

Unlike basic caching:

"What is AI?"
"Explain artificial intelligence"

👉 Both return the same cached response; no LLM call is needed.
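Under the hood, semantic matching typically compares embedding vectors by cosine similarity. The toy 3-dimensional vectors below are invented for illustration (real embedding models produce hundreds of dimensions), but they show why paraphrases land close together while unrelated prompts do not:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Imaginary embeddings, hand-picked so related prompts point the same way
emb = {
    "What is AI?":                     [0.90, 0.10, 0.20],
    "Explain artificial intelligence": [0.85, 0.15, 0.25],
    "Best pizza in Naples":            [0.10, 0.90, 0.30],
}

q = emb["Explain artificial intelligence"]
for prompt, vec in emb.items():
    print(f"{prompt!r}: {cosine(q, vec):.2f}")
```

Any cached prompt whose similarity to the query clears the configured threshold can serve its response without a new model call.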


โš™๏ธ Configuration

CacheConfig(
    enable_exact=True,
    enable_semantic=True,
    similarity_threshold=0.7,
    top_k=3,
    model_name="gpt-4o-mini",
    enable_metrics=True,
    enable_token_cost=True
)

📊 Metrics

from llmcachex_ai import metrics

print(metrics.summary())

Example output:

{
  "hits": 2,
  "misses": 1,
  "hit_rate": 66.67,
  "avg_llm_latency_ms": 2000,
  "avg_cache_latency_ms": 30,
  "total_cost_rupees": 0.01
}
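The hit_rate field is just hits / (hits + misses) as a percentage; checking it against the numbers above:

```python
hits, misses = 2, 1
hit_rate = round(hits / (hits + misses) * 100, 2)
print(hit_rate)  # 66.67
```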

๐Ÿ“ Project Structure

llm_cachex/
├── api/            # decorator layer
├── core/           # cache, metrics, memory
├── semantic/       # hybrid search + reranker
├── embedding/      # embeddings
├── index/          # FAISS index
├── similarity/     # similarity utils
└── utils/          # helpers

🧭 Roadmap

  • Async support
  • Streaming support
  • Batch inference
  • Multi-model caching
  • Pluggable vector DBs (Chroma / Pinecone)
  • Observability dashboard

๐Ÿค Contributing

PRs welcome. Open an issue to discuss ideas.


📜 License

MIT License


👤 Author

Himanshu Singh


โญ If this helps you

Give it a star ⭐ to help the project grow.
