
A caching CLI for quickly asking questions of LLMs


llm-questioncache

A plugin for llm that sends questions to LLMs and returns succinct answers. It saves each answer in a SQLite database along with an embedding of the corresponding question, and answers future, similar questions from the cache rather than by calling the LLM.

Installation

llm install llm-questioncache

Usage

The plugin adds a new questioncache command group to llm. See llm questioncache --help for the full list of subcommands.

Ask a Question

llm questioncache ask "What is the capital of France?"

This will:

  1. Check if similar questions exist in the cache
  2. If found, show the cached answers
  3. If not found, ask the LLM and cache the response
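
Under the hood, the flow is roughly the following. This is a minimal sketch rather than the plugin's actual code: the in-memory cache, the "3-small" embedding model name, and the helper names are illustrative assumptions (the plugin uses your configured default models and its own SQLite schema).

import math
import llm

RELEVANCE_CUTOFF = 0.8  # minimum similarity for a cache hit
MAX_ANSWERS = 3         # number of similar cached answers to show
SYSTEM_PROMPT = (
    "Answer in as few words as possible. "
    "Use a brief style with short replies."
)

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def ask(question, cache):
    # cache: illustrative list of (question, answer, embedding) tuples;
    # the plugin stores the equivalent rows in a SQLite database
    embedding = llm.get_embedding_model("3-small").embed(question)
    hits = sorted(
        ((cosine_similarity(embedding, emb), q, a) for q, a, emb in cache),
        reverse=True,
    )
    hits = [hit for hit in hits if hit[0] >= RELEVANCE_CUTOFF]
    if hits:
        return hits[:MAX_ANSWERS]  # answered from the cache
    # Cache miss: ask the default LLM, then cache the new answer
    answer = llm.get_model().prompt(question, system=SYSTEM_PROMPT).text()
    cache.append((question, answer, embedding))
    return [(1.0, question, answer)]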

You can also pipe questions through stdin:

echo "What is the capital of France?" | llm questioncache ask -

Send Last Question Directly to LLM

To bypass the cache and send the last asked question directly to the LLM:

llm questioncache send

You might need to do this if you've previously asked a similar but distinct question and the cached answer isn't what you want.

Import Previous Answers

You can import a collection of previous questions and answers from a JSON file:

llm questioncache importanswers answers.json

The JSON file should contain an array of objects with question and answer fields.
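
For example, a minimal answers.json (the entries here are illustrative) might look like:

[
  {"question": "What is the capital of France?", "answer": "Paris"},
  {"question": "How do you exit vim?", "answer": "Press Escape, then type :q and press Enter."}
]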

If you've already been using llm in this way, you might have some useful answers in its logs. To retrieve and format all the LLM responses with a particular system prompt, use sqlite-utils:

uvx sqlite-utils "$(llm logs path)" "select prompt as question, response as answer from responses where system = 'Answer in as few words as possible. Use a brief style with short replies.'"
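
sqlite-utils emits a JSON array of objects by default, so you can redirect the output into a file and import it:

uvx sqlite-utils "$(llm logs path)" "select prompt as question, response as answer from responses where system = 'Answer in as few words as possible. Use a brief style with short replies.'" > answers.json
llm questioncache importanswers answers.json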

Clear the Cache

To delete all cached questions and answers:

llm questioncache clearcache

Configuration

The plugin uses your default LLM and embedding models as configured in llm. No additional configuration is required.
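
For example, to change those defaults in llm itself (the model names here are just examples):

llm models default gpt-4o-mini
llm embed-models default 3-small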

Key parameters (configured in the code):

  • Relevance cutoff for similar questions: 0.8
  • Number of similar answers to show: 3
  • System prompt for brief answers: "Answer in as few words as possible. Use a brief style with short replies."

Shell integration

You might find it useful to create a shell script that succinctly invokes llm questioncache.

For example, save this as ~/.local/bin/q:

#!/usr/bin/env sh
# Pass all the arguments to the ask subcommand as one question string
llm questioncache ask "$*"
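
Make it executable with chmod +x ~/.local/bin/q.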

You can now pose questions with:

q how do you exit vim
