LLM Context

A command-line tool for copying code context to clipboard for use in LLM chats.
LLM Context is a command-line tool that helps developers efficiently copy and paste relevant context from code or text repositories into the web chat interface of Large Language Models (LLMs). It leverages `.gitignore` patterns for smart file selection and uses the clipboard for seamless integration with LLM interfaces.
Note: This project was developed in collaboration with Claude-3.5-Sonnet, using LLM Context itself to share code context during development. All code in the repository is human-curated (by me 😇, @restlessronin).
Current Usage Patterns
- LLM Integration: Primarily used with Claude (Project Knowledge) and GPT (Knowledge), but supports all LLM chat interfaces.
- Project Types: Suitable for code repositories and collections of text/markdown/html documents.
- Project Size: Optimized for projects that fit within an LLM's context window. Large project support is in development.
Installation
Use pipx to install LLM Context:

```
pipx install llm-context
```
Usage
LLM Context enables rapid project context updates for each AI chat.
Quick Start and Typical Workflow
1. Install LLM Context if you haven't already.
2. Navigate to your project's root directory.
3. Run `lc-init` to set up LLM Context for your project (only needed once per repository).
4. For chat interfaces with built-in context storage (e.g., Claude Pro Projects, ChatGPT Plus GPTs), set up your custom prompt manually in the chat interface. A default prompt is available in `.llm-context/templates/lc-prompt.md`.
5. (Optional) Edit `.llm-context/config.toml` to add custom ignore patterns.
6. Run `lc-sel-files` to select files for full content inclusion.
7. (Optional) Review the selected file list in `.llm-context/curr_ctx.toml`.
8. Generate and copy the context:
   - For chat interfaces with built-in storage: run `lc-context`
   - For other interfaces (including free plans): run `lc-context --with-prompt` to include the default prompt
9. Paste the generated context:
   - For interfaces with built-in storage: into the Project Knowledge (Claude Pro) or GPT Knowledge (ChatGPT Plus) section
   - For other interfaces: into the system message or the first chat message, as appropriate
10. Start your conversation with the LLM about your project.

To maintain up-to-date AI assistance:

- Repeat steps 6-9 at the start of each new chat session. This process takes only seconds.
- For interfaces with built-in storage, update your custom prompt separately if needed.
Handling LLM File Requests
When the LLM asks for a file that isn't in the current context:
1. Copy the LLM's file request (typically in a markdown block) to your clipboard.
2. Run `lc-read-cliplist` to generate the content of the requested files.
3. Paste the generated file contents back into your chat with the LLM.
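The idea behind this step can be pictured with a short sketch. This is not the tool's actual implementation: the `bundle_requested_files` helper and the plain newline-separated request format are hypothetical, and the real command reads the request from the system clipboard rather than from a string.

```python
import tempfile
from pathlib import Path

def bundle_requested_files(file_list: str, root: Path) -> str:
    """Concatenate requested files into one pasteable string.

    `file_list` stands in for a pasted LLM file request: a
    newline-separated list of paths relative to the project root.
    """
    sections = []
    for line in file_list.splitlines():
        rel = line.strip().lstrip("/")
        if not rel:
            continue
        path = root / rel
        body = path.read_text() if path.exists() else "(file not found)"
        sections.append(f"### {rel}\n{body}")
    return "\n\n".join(sections)

# Demo against a throwaway project tree
root = Path(tempfile.mkdtemp())
(root / "app.py").write_text("print('hello')\n")
print(bundle_requested_files("app.py\nmissing.txt", root))
```

The output would then be pasted back into the chat as the requested context.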
Configuration
Customizing Ignore Patterns
Add custom ignore patterns in `.llm-context/config.toml` to exclude specific files or directories not covered by your project's `.gitignore`. This is useful for versioned files that don't contribute to code context, such as media files, large generated files, detailed changelogs, or environment-specific configurations.
Example:

```toml
# /.llm-context/config.toml
[gitignores]
full_files = [
  "*.svg",
  "*.png",
  "CHANGELOG.md",
  ".env",
  # Add more patterns here
]
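To build intuition for how gitignore-style patterns exclude files from selection, here is a simplified sketch using Python's stdlib `fnmatch`. It is not the tool's own matching code; real `.gitignore` semantics (negation, directory anchoring, `**`) are richer than plain filename globbing.

```python
from fnmatch import fnmatch

# Simplified illustration of pattern-based exclusion. Real .gitignore
# semantics (negation, directory anchoring, **) are richer than fnmatch.
ignore_patterns = ["*.svg", "*.png", "CHANGELOG.md", ".env"]

def is_ignored(path: str) -> bool:
    # Match against the final path component only
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pat) for pat in ignore_patterns)

files = ["src/main.py", "logo.svg", "CHANGELOG.md", "README.md"]
selected = [f for f in files if not is_ignored(f)]
print(selected)  # → ['src/main.py', 'README.md']
```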
Reviewing Selected Files
Review the list of selected files in `.llm-context/curr_ctx.toml` to check what's included in the context. This is particularly useful when trying to minimize context size.
```toml
# /.llm-context/curr_ctx.toml
[context]
full = [
  "/llm-context.py/pyproject.toml",
  # more files ...
]
```
Command Reference
- `lc-init`: Initialize LLM Context for your project (only needed once per repository)
- `lc-sel-files`: Select files for full content inclusion
- `lc-sel-outlines`: Select files for outline inclusion (experimental)
- `lc-context`: Generate and copy context to clipboard
  - Use the `--with-prompt` flag to include the prompt for LLMs without built-in storage
- `lc-read-cliplist`: Read contents of LLM-requested files and copy them to clipboard
Experimental: Handling Larger Repositories
For larger projects, we're exploring a combined approach of full file content and file outlines. Run `lc-sel-outlines` after `lc-sel-files` to experiment with this feature.
Note: The outlining feature currently supports the following programming languages: C, C++, C#, Elisp, Elixir, Elm, Go, Java, JavaScript, OCaml, PHP, Python, QL, Ruby, Rust, and TypeScript. Files in unsupported languages will not be outlined and will be excluded from the outline selection.
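As a rough intuition for what an outline is: it keeps only a file's top-level structure, dropping bodies. The tool itself does this with tree-sitter queries across the languages listed above; the sketch below is not its implementation and covers Python only, via the stdlib `ast` module.

```python
import ast

def outline(source: str) -> list[str]:
    """Return the first line of each top-level definition."""
    lines = source.splitlines()
    result = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            result.append(lines[node.lineno - 1].strip())
    return result

sample = """
class Config:
    def load(self):
        pass

def select_files(patterns):
    return []
"""
print(outline(sample))  # → ['class Config:', 'def select_files(patterns):']
```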
Feedback and Contributions
We welcome feedback, issue reports, and pull requests on our GitHub repository.
Acknowledgments
LLM Context evolves from a lineage of AI-assisted development tools:
- This project succeeds LLM Code Highlighter, a TypeScript library I developed for IDE integration.
- The concept originated from my work on RubberDuck and continued with later contributions to Continue.
- LLM Code Highlighter was heavily inspired by Aider Chat. I worked with GPT-4 to translate several Aider Chat Python modules into TypeScript, maintaining functionality while restructuring the code.
- This project uses tree-sitter tag query files from Aider Chat.
- LLM Context exemplifies the power of AI-assisted development, transitioning from Python to TypeScript and back to Python with the help of GPT-4 and Claude-3.5-Sonnet.
I am grateful for the open-source community's innovations and for the AI assistance, particularly from Claude-3.5-Sonnet, that have shaped this project's evolution.
License
This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.