
A tool to help with copying and pasting context into LLM chats (Deprecated: Renamed to llm-context)

Project description

Important: Project Name Change

This project is being renamed from llm-code-context to llm-context to better reflect its capability to handle various types of text files, not just code. This repository will be renamed, and future releases will be under the new name. Please update your references and dependencies accordingly.

For the latest version and updates, please visit: https://github.com/cyberchitta/llm-context.py

Thank you for your understanding and continued support!

LLM Code Context

LLM Code Context is a Python-based tool designed to streamline the process of sharing code context with Large Language Models (LLMs) using a standard Chat UI. It allows developers to easily select, format, and copy relevant code snippets and project structure information, enhancing the quality of interactions with AI assistants in coding tasks.

This project was developed with significant input from Claude 3 Opus and Claude 3.5 Sonnet. All of the code that makes it into the repo is human curated (by me 😇, @restlessronin).

Features

  • File Selection: Offers a command-line interface for selecting files from your project.
  • Intelligent Ignoring: Respects .gitignore rules and additional custom ignore patterns to exclude irrelevant files.
  • Clipboard Integration: Automatically copies the generated context to your clipboard for easy pasting.
  • Optional Technical Summary: Allows inclusion of a markdown file summarizing the project's technical aspects.
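The intelligent-ignoring feature can be illustrated with a minimal sketch. The tool itself appears to use the pathspec library (see pathspec_ignorer.py in the project structure); the stdlib fnmatch version below is a simplification, and the patterns are illustrative:

```python
# Illustrative gitignore-style filtering; a simplified stand-in for what
# the tool does with .gitignore rules plus custom ignore patterns.
import fnmatch

ignore_patterns = ["*.lock", ".venv/*", "__pycache__/*"]

def is_ignored(path: str) -> bool:
    """Return True if the path matches any ignore pattern."""
    return any(fnmatch.fnmatch(path, pat) for pat in ignore_patterns)

files = ["src/main.py", "poetry.lock", ".venv/bin/python"]
kept = [f for f in files if not is_ignored(f)]
print(kept)  # ['src/main.py']
```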

Installation

Using pipx (Recommended)

pipx is a tool to help you install and run end-user CLI applications written in Python.

  1. If you haven't installed pipx yet, follow the installation instructions in the pipx documentation.
  2. Once pipx is installed, you can install LLM Code Context:
    pipx install llm-code-context
    

This will install LLM Code Context in an isolated environment and make its commands available in your shell.

Usage

LLM Code Context offers several command-line tools, each designed for a specific task. All commands should be run from the root directory of your project, where the primary .gitignore file is located.

Here are the main commands:

# Select all files which are not gitignored
lcc-select
# Generate full context (including folder structure and summary), using selected files
lcc-gencontext
# Generate full text contents from a list of paths in the clipboard
lcc-genfiles

Typical workflow

Let's say that you are collaborating with an LLM on a code repo. Use a system or custom prompt similar to this custom-prompt.md.

Provide context for your chat.

  1. Navigate to your project's root directory in the terminal.
  2. Edit the project configuration file .llm-code-context/config.json to add any files to the "gitignores" key that should be in git but may not be useful for code context (e.g., "LICENSE" and "poetry.lock", maybe even "README.md").
  3. Run lcc-select to choose the files you want to include in your context. You can look at .llm-code-context/scratch.json to see what files are currently selected. If you prefer, you can edit the scratch file directly, before the next step.
  4. Run lcc-gencontext to generate and copy the full text of all selected files, the folder structure diagram, and the technical summary of the project (if available).
  5. Paste the context into the first message of your conversation with the LLM, or equivalently into a Claude project file.
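Step 2 above amounts to merging extra file names into the config's "gitignores" list. A minimal sketch, assuming that key holds a flat JSON array of patterns (the starting config here is hypothetical and the real schema may differ):

```python
import json

# Hypothetical starting config; the real file lives at .llm-code-context/config.json.
cfg = {"gitignores": ["LICENSE"]}

# Files tracked by git but rarely useful as LLM context.
extra = ["poetry.lock", "README.md"]
cfg["gitignores"] = sorted(set(cfg["gitignores"]) | set(extra))

print(json.dumps(cfg, indent=2))
```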

Respond to LLM requests for files

  1. The LLM will request a list of files in a markdown block quote.
  2. Select the block and copy it to the clipboard.
  3. Run lcc-genfiles to copy the text contents of the requested files into the clipboard (replacing its original contents, the file list).
  4. Paste the file content list into the next user message in the chat.
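The block quote in step 1 is just lines prefixed with `>`, so recovering the path list is a matter of stripping that prefix. A sketch of the parsing (not the tool's actual code, and the paths shown are only examples):

```python
# What an LLM's file request might look like on the clipboard.
requested = """\
> src/llm_code_context/file_selector.py
> src/llm_code_context/context_generator.py
"""

# Strip the "> " prefix and blank lines to recover plain relative paths.
paths = [line.lstrip("> ").strip() for line in requested.splitlines() if line.strip()]
print(paths)
```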

Technical Summary

LLM Code Context supports an optional technical summary feature, although its utility is currently unclear. This feature allows you to include a markdown file that provides project-specific information that may not be easily inferred from the code alone. To use this feature:

  1. Create a markdown file in your .llm-code-context folder (e.g., .llm-code-context/tech-summary.md).
  2. In your .llm-code-context/config.json file, set the summary_file key to the name of your summary file:
    {
      "summary_file": "tech-summary.md"
    }
    

If the key is missing or null, no summary will be included in the context.
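The missing-or-null behavior can be sketched as follows (the helper name and signature are illustrative, not the tool's actual API):

```python
import json
from pathlib import Path

def load_summary(config_text: str, base: Path = Path(".llm-code-context")):
    """Return the summary text, or None if the key is absent, null, or the file is missing."""
    name = json.loads(config_text).get("summary_file")
    if name is None:
        return None
    path = base / name
    return path.read_text() if path.exists() else None

assert load_summary("{}") is None                      # key missing
assert load_summary('{"summary_file": null}') is None  # explicit null
```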

The summary can include information like architectural decisions, non-obvious performance considerations, or future plans. For example:

  • "We chose a microservices architecture to allow for independent scaling of components."
  • "The process_data() function uses custom caching to optimize repeated calls with similar inputs."
  • "The authentication system is slated for an overhaul in Q3 to implement OAuth2."

When you run lcc-gencontext, this summary will be included after the folder structure diagram in the generated context.

For an example of a technical summary, you can refer to the tech-summary.md file for this repository.

Project Structure

└── llm-code-context.py
    ├── .gitignore
    ├── .llm-code-context
    │   ├── .gitignore
    │   ├── config.json
    │   ├── custom-prompt.md
    │   ├── tech-summary.md
    │   └── templates
    │       ├── full-context.j2
    │       └── sel-file-contents.j2
    ├── LICENSE
    ├── MANIFEST.in
    ├── README.md
    ├── poetry.lock
    ├── pyproject.toml
    ├── src
    │   └── llm_code_context
    │       ├── __init__.py
    │       ├── config_manager.py
    │       ├── context_generator.py
    │       ├── file_selector.py
    │       ├── folder_structure_diagram.py
    │       ├── git_ignorer.py
    │       ├── path_converter.py
    │       ├── pathspec_ignorer.py
    │       ├── template.py
    │       └── templates
    │           ├── full-context.j2
    │           └── sel-file-contents.j2
    └── tests
        ├── test_path_converter.py
        └── test_pathspec_ignorer.py

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.

