
llm-code



An OpenAI LLM based CLI coding assistant.

llm-code is inspired by Simon Willison's llm package. It takes a similar approach: build a simple tool that turns an LLM into a command-line assistant that helps you write code.

Installation

pipx install llm-code

Configuration

llm-code requires an OpenAI API key. You can get one from OpenAI.

You can set the key in a few different ways, depending on your preference:

  1. Set the OPENAI_API_KEY environment variable:
export OPENAI_API_KEY=sk-...
  2. Use an env file in ~/.llm_code/env (a sketch of the lookup order follows):
mkdir -p ~/.llm_code
echo "OPENAI_API_KEY=sk-..." > ~/.llm_code/env

Usage

llm-code is meant to be simple to use. The default prompts should be good enough. There are two broad modes:

  1. Generate some code from scratch:
llm-code "write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints."
  2. Pass in some input files and ask for changes:
llm-code -i my_file.py "add docstrings to all python functions."
llm-code --help
Usage: llm-code [OPTIONS] [INSTRUCTIONS]...

  Coding assistant using OpenAI's chat models.

  Requires OPENAI_API_KEY as an environment variable. Alternately, you can set
  it in ~/.llm_code/env.

Options:
  -i, --inputs TEXT  Glob of input files. Use repeatedly for multiple files.
  -cb, --clipboard   Copy code to clipboard.
  -nc, --no-cache    Don't use cache.
  -4, --gpt-4        Use GPT-4.
  --version          Show version.
  --help             Show this message and exit.

Changing OpenAI parameters

Any of the OpenAI parameters can be overridden with environment variables. GPT-4 is a special case: you can also select it with the -4 flag for convenience.

export MAX_TOKENS=2000
export TEMPERATURE=0.5
export MODEL=gpt-4

or

llm-code -4 ...
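Under the hood, these settings presumably end up as arguments to the chat completion call. A hedged sketch of that mapping, assuming the openai>=1.0 Python client; the fallback defaults below are illustrative, not llm-code's actual values:

import os

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative mapping from environment variables to API parameters.
response = client.chat.completions.create(
    model=os.environ.get("MODEL", "gpt-3.5-turbo"),
    temperature=float(os.environ.get("TEMPERATURE", "1.0")),
    max_tokens=int(os.environ.get("MAX_TOKENS", "1000")),
    messages=[{"role": "user", "content": "write hello world in rust"}],
)
print(response.choices[0].message.content)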

Caching

A common usage pattern is to examine the output of a model and either accept it or keep playing with the prompt. When accepting the output, it is common to append it to a file or copy it to the clipboard (using pbcopy on a Mac, for example). To facilitate this inspect-and-accept workflow, llm-code caches the output of the model in a local SQLite database. This lets you replay the same query without having to hit the OpenAI API.

llm-code 'write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints.'

Following this, assuming you like the output:

llm-code 'write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints.' > sum.py
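A minimal sketch of how such a prompt cache might work; the file name, table name, and key scheme below are illustrative, not llm-code's actual schema:

import hashlib
import sqlite3

conn = sqlite3.connect("cache.db")  # illustrative path
conn.execute(
    "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, response TEXT)"
)

def cached_completion(model: str, prompt: str, call_api) -> str:
    """Return the stored response for a (model, prompt) pair if we have
    seen it before; otherwise call the API and remember the result."""
    key = hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()
    row = conn.execute(
        "SELECT response FROM cache WHERE key = ?", (key,)
    ).fetchone()
    if row:
        return row[0]
    response = call_api(model, prompt)
    conn.execute("INSERT INTO cache VALUES (?, ?)", (key, response))
    conn.commit()
    return response

When you want a fresh completion instead of the cached one, pass -nc/--no-cache; to send accepted output straight to the clipboard rather than redirecting to a file, pass -cb/--clipboard.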

Database

Borrowing simonw's excellent idea of logging everything to a local SQLite database, as demonstrated in llm, llm-code also logs all queries locally. This is useful for a few reasons:

  1. It allows you to replay the same query without having to hit the OpenAI API.
  2. It allows you to see what queries you've made in the past, along with their responses and token counts.
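Assuming a log table along these lines (the database path and column names here are hypothetical; inspect the real schema with sqlite3's .schema command), recent queries and their token counts could be pulled out like so:

import sqlite3

conn = sqlite3.connect("llm_code.db")  # hypothetical path
rows = conn.execute(
    "SELECT prompt, response, total_tokens FROM logs "
    "ORDER BY rowid DESC LIMIT 5"
)
for prompt, response, tokens in rows:
    print(f"[{tokens} tokens] {prompt[:60]} -> {response[:40]}")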

Examples

Simple hello world.

llm-code write hello world in rust
fn main() {
    println!("Hello, world!");
}

Sum of two numbers with type hints.

llm-code "write a function that takes a list of numbers and returns the sum of the numbers in python. Add type hints."
from typing import List

def sum_numbers(numbers: List[int]) -> int:
    return sum(numbers)

Let's assume that we stuck the output of the previous call in out.py. We can now say:

llm-code -i out.py "add appropriate docstrings"
from typing import List

def sum_numbers(numbers: List[int]) -> int:
    """Return the sum of the given list of numbers."""
    return sum(numbers)

Or we could write some unit tests.

llm-code -i out.py "write a complete unit test file using pytest.
import pytest

from typing import List
from my_module import sum_numbers


def test_sum_numbers():
    assert sum_numbers([1, 2, 3]) == 6
    assert sum_numbers([-1, 0, 1]) == 0
    assert sum_numbers([]) == 0

TODO

  • Add a simple cache to replay the same query.
  • Add logging to a local SQLite db.
  • Add an --exec option to execute the generated code.
  • Add a --stats option to output token counts.
  • Add pyperclip integration to copy to clipboard.
