
Provide LLMs with codebase context for improved coding assistance.


CodeCache

CodeCache is a Python package that allows developers to analyze their codebase and cache the context using the Gemini API's context caching feature. This tool helps in generating summaries and insights about your codebase, enabling efficient querying and interaction with large codebases.

Features

  • Codebase Analysis: Recursively analyzes your codebase, generating summaries for each file.
  • Context Caching: Utilizes the Gemini API to cache the context of your codebase, reducing the need to pass the same input tokens repeatedly.
  • Querying: Allows querying the cached context to get insights or answers related to your codebase.
  • Summarization: Supports both quick and detailed summaries of your codebase.
  • Customizable: Configurable options for ignoring files/directories, setting cache TTL, and choosing summary modes.
  • CLI Interface: Provides a user-friendly command-line interface for all functionality.

Installation

You can install CodeCache via pip:

pip install codecache

Alternatively, you can install it from source:

git clone https://github.com/DineshRai/codecache.git
cd codecache
pip install .

Prerequisites

  • Python 3.7 or higher
  • Gemini API key (you can get one from Google AI Studio)

Usage

Setting Up the Environment

Before using CodeCache, you need to set the GEMINI_API_KEY environment variable:

export GEMINI_API_KEY='your_api_key_here'

You can also create a .env file in your project root:

GEMINI_API_KEY=your_api_key_here
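
CodeCache relies on python-dotenv for this, as noted in the acknowledgments below. As a rough sketch (not CodeCache's exact code), loading the key from a .env file typically looks like this:

# Sketch of a .env-based key lookup with python-dotenv; illustrative only.
import os

from dotenv import load_dotenv

load_dotenv()  # reads a .env file from the current directory, if one exists
api_key = os.getenv("GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GEMINI_API_KEY is not set (export it or add it to .env)")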

Command-Line Interface

CodeCache provides a CLI with several commands. You can get help by running:

codecache --help

Analyzing and Caching Context

To analyze your codebase and cache the context:

codecache cache [DIRECTORY] [OPTIONS]
  • DIRECTORY: The root directory of your codebase (default is the current directory).
  • Options:
    • --ignore-file: Path to a custom ignore file. If not provided, .codecachignore in the root directory will be used if it exists.
    • --ttl: Cache time-to-live in seconds (default: 3600).
    • --summary-mode: Summary mode to use (quick or detailed, default: quick).
    • --model: Gemini model to use (e.g., gemini-1.5-flash-001).
    • --verbose or -v: Enable verbose output.

Example:

codecache cache . --ttl 7200 --summary-mode detailed --model gemini-1.5-pro-001 --verbose
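
The same command can also be driven from a script, for example in CI. Below is a minimal sketch using Python's subprocess module; the flags are the ones documented above, while the wrapper itself is illustrative and not part of CodeCache:

# Sketch: run the cache command from a script and capture its output.
import subprocess

result = subprocess.run(
    ["codecache", "cache", ".", "--ttl", "7200", "--summary-mode", "detailed"],
    capture_output=True,
    text=True,
    check=True,  # raise if the command exits with a non-zero status
)
print(result.stdout)  # the output includes the cache key to use for later queries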

Querying the Cached Context

To query the cached context:

codecache query CACHE_KEY QUERY
  • CACHE_KEY: The cache key returned when you cached the context.
  • QUERY: The question or query you want to ask about your codebase.

Example:

codecache query your_cache_key "What does the CacheManager class do?"
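
Under the hood, the cache key identifies cached content managed by the Gemini API. The sketch below is not CodeCache's implementation; it only illustrates the SDK feature that the cache/query workflow builds on, using the google-generativeai package (the model name, display name, and contents are placeholders, and the exact SDK surface may vary by version):

# Illustration of Gemini context caching; CodeCache handles these calls for you.
import datetime

import google.generativeai as genai
from google.generativeai import caching

genai.configure(api_key="your_api_key_here")

# Cache the (large) codebase context once. Real cached content must be large
# enough to meet the API's minimum token requirement.
cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",        # caching needs a pinned model version
    display_name="codecache-example",           # placeholder name
    contents=["<codebase summaries go here>"],  # placeholder content
    ttl=datetime.timedelta(seconds=3600),
)

# Answer queries against the cached context without resending the input tokens.
model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("What does the CacheManager class do?")
print(response.text)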

Other Commands

  • List cached contexts: codecache list-caches
  • Update cache TTL: codecache update-ttl CACHE_KEY NEW_TTL
  • Delete a cached context: codecache delete-cache CACHE_KEY

Configuration

Ignore Files

You can specify files or directories to ignore during analysis by creating a .codecachignore file in your project root:

# Ignore all .pyc files
*.pyc

# Ignore the build directory
build/

# Ignore specific files
secret_config.py

Alternatively, you can specify a custom ignore file using the --ignore-file option.
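
The patterns above are gitignore-style globs (wildcards, directory suffixes with /, and # comments). CodeCache's exact matching rules are not documented here, but as an approximation, applying such patterns while walking a tree can look like the following sketch (not the package's implementation):

# Rough sketch of applying .codecachignore-style patterns with fnmatch.
import fnmatch
from pathlib import Path

def load_patterns(ignore_file):
    lines = Path(ignore_file).read_text().splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith("#")]

def is_ignored(path, patterns):
    name = Path(path).name
    parts = Path(path).parts
    for pattern in patterns:
        if pattern.endswith("/") and pattern.rstrip("/") in parts:
            return True  # directory pattern such as build/
        if fnmatch.fnmatch(name, pattern):
            return True  # file glob such as *.pyc, or an exact filename
    return False

patterns = load_patterns(".codecachignore")
print(is_ignored("build/lib/module.pyc", patterns))  # True with the example file above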

Config File

You can configure default settings by creating a .codebase_context file in your project root (in TOML format):

[settings]
gemini_model = "gemini-1.5-flash-001"
default_ttl = 3600
ignore_file = ".codecachignore"
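
These keys mirror the corresponding CLI options (--model, --ttl, and --ignore-file). If you want to read the same file from your own tooling, a minimal sketch is below; tomllib ships with Python 3.11+, and older interpreters can use the tomli package instead. This is not CodeCache's own loader:

# Sketch: read .codebase_context as plain TOML (keys as shown above).
import tomllib  # on Python < 3.11, use: import tomli as tomllib

with open(".codebase_context", "rb") as f:
    config = tomllib.load(f)

settings = config.get("settings", {})
print(settings.get("gemini_model", "gemini-1.5-flash-001"))
print(settings.get("default_ttl", 3600))
print(settings.get("ignore_file", ".codecachignore"))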

Examples

Quick Start

Analyze and Cache Context:

codecache cache .

Query the Cached Context:

codecache query your_cache_key "Explain the purpose of the Summarizer class."

Logging and Verbose Output

To enable verbose logging, add the --verbose or -v flag to any command:

codecache cache . --verbose

Development and Contribution

Contributions are welcome! Please follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch for your feature or bug fix.
  3. Write your code and tests.
  4. Submit a pull request.

Setting Up for Development

git clone https://github.com/DineshRai/codecache.git
cd codecache
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pip install -e .

Running Tests

We use pytest for testing. To run tests:

pytest tests/
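
New tests can follow the existing ones in tests/. Purely as a hypothetical illustration, a test for the Click-based CLI might use Click's CliRunner; the codecache.cli import path and the cli entry point below are assumptions, so match them to the actual module layout:

# Hypothetical CLI test sketch; adjust the import to the real CLI module.
from click.testing import CliRunner

from codecache.cli import cli  # assumed entry point


def test_help_lists_cache_command():
    runner = CliRunner()
    result = runner.invoke(cli, ["--help"])
    assert result.exit_code == 0
    assert "cache" in result.output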

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • Google Gemini API for the context caching feature.
  • Click for the command-line interface.
  • Python dotenv for managing environment variables.

Contact

For questions, feedback, or issues related to CodeCache, please open an issue on the GitHub repository.

For security-related concerns, please email the maintainer directly rather than creating a public issue.

