redshift
A debugger you can talk to.
Redshift is a Python debugger with an LLM inside. When a breakpoint is hit, you can ask questions like:
- "Why is this function returning null?"
- "How many items in `array` are strings?"
- "Which condition made the loop break?"
An agent will navigate the call stack, inspect variables, and look at your code to figure out an answer. Think of this as vibe debugging. You can diagnose issues just by talking.
Features
Redshift is an extension of Python's native debugger, pdb. It can do everything pdb does, plus a few new commands:
ask PROMPT
Ask a question about the state of your program. An agent will operate the debugger to investigate and figure out an answer. Save yourself the busywork of digging into the stack trace.
fix [PROMPT]
(Coming soon) When an exception is thrown, run this to find the root cause of the issue and get a fix. The output will be a patch that you can apply to your codebase. You can provide an optional prompt describing the issue.
run PROMPT
(Coming soon) Generates and executes code in the context of the current scope. It'll run in an interpreter whose namespace is a direct copy of the program state at the current line of code. Generated code will not be executed without your approval.
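Put together, a session with `ask` might look something like this (illustrative only; the script, prompt, line numbers, and output below are hypothetical):

```
$ python app.py
> app.py(42)process()
-> total = sum(prices)
(Pdb) ask why is total zero here?
The agent inspected `prices` in the current frame: it is an empty list,
because the filter on line 37 removed every item before the sum.
```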
Installation
> pip install redshift-cli
After installing, you need to connect to Anthropic. Get an API key here, then add it to your environment:
> export ANTHROPIC_API_KEY="..."
You can also use OpenAI or other providers, including local ones like ollama. Redshift wraps LiteLLM, which supports over 100 models.
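For example, to point the agent at an OpenAI model instead, set LiteLLM's standard `OPENAI_API_KEY` variable and override the agent model (the model name here is just an illustration; use any identifier LiteLLM accepts):

```shell
# Use an OpenAI model instead of the Anthropic default.
export OPENAI_API_KEY="..."
export REDSHIFT_AGENT_MODEL="openai/gpt-4o"
```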
Note: This is still experimental and likely to be buggy. Stable release coming very soon.
Usage
You can set a breakpoint the same way you would in pdb:
```python
import redshift

def foo():
    # ...
    redshift.set_trace()
    # ...
```
Alternatively, you can avoid the import by overriding the built-in breakpoint function:
```shell
export PYTHONBREAKPOINT=redshift.set_trace
```

```python
def foo():
    # ...
    breakpoint()
    # ...
```
Configuration
You can customize Redshift using some environment variables:
REDSHIFT_AGENT_MODEL
LLM that's used by the agent for tool-calling. Default is "anthropic/claude-sonnet-4-20250514". Use LiteLLM names to identify the model (e.g. "openai/gpt-4o").
REDSHIFT_RESPONSE_MODEL
LLM that's used to generate the final response. This is used after the agent has collected context. Default is "anthropic/claude-sonnet-4-20250514".
REDSHIFT_MAX_ITERS
Number of tool calls the agent is allowed to make before generating a response. Default is 25.
REDSHIFT_HIDE_EXTERNAL_FRAMES
Controls whether Redshift ignores stack frames from external libraries. Default is True, meaning the agent only inspects frames in your own codebase.
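As a rough illustration of how these variables combine, here is a hypothetical sketch of the lookups Redshift might perform at startup; the variable names and defaults match the documentation above, but the actual internals may differ:

```python
import os

# Documented default model for both the agent and the final response.
DEFAULT_MODEL = "anthropic/claude-sonnet-4-20250514"

# Resolve each setting from the environment, falling back to its default.
agent_model = os.environ.get("REDSHIFT_AGENT_MODEL", DEFAULT_MODEL)
response_model = os.environ.get("REDSHIFT_RESPONSE_MODEL", DEFAULT_MODEL)
max_iters = int(os.environ.get("REDSHIFT_MAX_ITERS", "25"))
hide_external = os.environ.get(
    "REDSHIFT_HIDE_EXTERNAL_FRAMES", "True"
).lower() in ("true", "1", "yes")
```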
File details
Details for the file redshift_cli-1.0.1.tar.gz.
File metadata
- Download URL: redshift_cli-1.0.1.tar.gz
- Upload date:
- Size: 25.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.5.18
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `0cc0d73fd807490c1c2fc3d84731a6f1f1f461729dae710f1e03b532d794a962` |
| MD5 | `50f3b1c7a3bfe6b863763bf3b9c9c4ef` |
| BLAKE2b-256 | `c2f3f32c00405a0ba36e98ed34eb26fffea79f419669866241b549f8d6e5e753` |
File details
Details for the file redshift_cli-1.0.1-py3-none-any.whl.
File metadata
- Download URL: redshift_cli-1.0.1-py3-none-any.whl
- Upload date:
- Size: 36.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.5.18
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b7e238353a1d3ad2c64baaf099834700d77ee9857d33505643fd02b2bd5e78b6` |
| MD5 | `bfaed23a0d1f56b48b66e63e426f132a` |
| BLAKE2b-256 | `611d4fc240e224cfc9e4eeb383357a9d8a4ce79cdbbc865c4480ab67031929e5` |