
Command Line Loom

Command Line Loom is a Python command-line tool for generating text using OpenAI's GPT-3 and other models. It includes a modular model system that allows for easy integration of new models and customization of existing ones.

Templates are included; look in the Turbo Text Transformer Prompts repository for more documentation and a list of the templates!

Installation

To install Command Line Loom, you can use pip:

pip install command-line-loom

or clone the repository and install it manually:

git clone https://github.com/fergusfettes/command-line-loom.git
cd command-line-loom
poetry install

Usage

The basic syntax for running cll is as follows:

cll <prompt> [options]

Here, <prompt> is the text that you want to transform. You can use the --prompt_file option to load the prompt from a file instead of typing it on the command line, or you can pipe some text in:

cat some_file.txt | cll

For example, to generate this README I ran:

cat pyproject.toml cll/__main__.py | cll -t readme > README.md

where I'm using the 'readme' template, which generates a README file from the input.
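The template mechanism can be pictured as placeholder substitution: the piped-in text is filled into a prompt skeleton before it is sent to the model. The sketch below is an illustration only; the placeholder syntax and the actual contents of the 'readme' template are assumptions, not cll's real implementation.

```python
from string import Template

# Hypothetical template text: the real 'readme' template lives in the
# Turbo Text Transformer Prompts repository and may look quite different.
readme_template = Template(
    "Write a README for the following project files:\n\n$prompt"
)

def apply_template(template: Template, prompt: str) -> str:
    """Fill the user's prompt into the template before calling the model."""
    return template.substitute(prompt=prompt)

print(apply_template(readme_template, "[tool.poetry]\nname = 'cll'"))
```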

Options

There are several options you can use with the cll command:

  • --format/-f: Output format (default: "clean"). Valid options are "clean", "json", or "logprobs".
  • --echo_prompt/-e: Echo the prompt in the output.
  • --list_models/-l: List available models.
  • --prompt_file/-P: File to load for the prompt.
  • --template_file/-t: Template file to apply to the prompt.
  • --template_args/-x: Extra values for the template.
  • --chunk_size/-c: Max size of chunks.
  • --summary_size/-s: Size of chunk summaries.
  • --model/-m: Name of the model to use (default: "gpt-3.5-turbo").
  • --number/-N: Number of completions.
  • --logprobs/-L: Show logprobs for completion.
  • --max_tokens/-M: Max number of tokens to return.
  • --temperature/-T: Temperature, in the range [0, 2]: 0 is deterministic; values above 0.9 are creative.
  • --force/-F: Force chunking of prompt.
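To make the flag semantics concrete, here is a minimal argparse sketch mirroring a few of the options above. This is an illustration, not cll's actual parser; the defaults for --number and --temperature are assumptions taken from the sample configuration below.

```python
import argparse

# Sketch of a parser covering a subset of cll's documented flags.
parser = argparse.ArgumentParser(prog="cll")
parser.add_argument("prompt", nargs="?", default=None)
parser.add_argument("-f", "--format", default="clean",
                    choices=["clean", "json", "logprobs"])
parser.add_argument("-m", "--model", default="gpt-3.5-turbo")
parser.add_argument("-N", "--number", type=int, default=1)
parser.add_argument("-T", "--temperature", type=float, default=0.9)

args = parser.parse_args(["-f", "json", "-N", "5", "I like to eat"])
print(args.format, args.number, args.prompt)
```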

Configuration

Before using Command Line Loom, you need a configuration file. One is created automatically in your home directory the first time you run the cll command. See the documentation for each model to learn how to obtain an API key, then add it to the file:

api_key: sk-<your api key here>
engine_params:
  frequency_penalty: 0
  logprobs: null
  max_tokens: 1000
  model: davinci
  n: 4
  presence_penalty: 0
  stop: null
  temperature: 0.9
  top_p: 1
models:
- babbage
- davinci
- gpt-3.5-turbo-0301
- text-davinci-003
etc.

Examples

Here are some examples of how to use Command Line Loom:

# Generate text with the default model
cll "Once upon a time, there was a"

# Generate text with a specific model
cll -m text-davinci-003 "The meaning of life is"

# Generate multiple completions
cll -N 5 "I like to eat"

# Show logprobs
cll "I like to eat" -f logprobs

# Use the JSON format
cll -f json "I like to eat"

If you use the 'logprobs' output format, it will try to color the terminal output based on the logprobs. This is a bit janky at the moment.
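The coloring idea can be sketched as mapping each token's log-probability to an ANSI color: likely tokens green, middling ones yellow, surprising ones red. The thresholds and colors below are illustrative assumptions, not cll's actual values.

```python
# Map a token's log-probability to an ANSI escape color.
# Thresholds are illustrative assumptions.
def colorize(token: str, logprob: float) -> str:
    if logprob > -1.0:
        code = "32"   # green: high-probability token
    elif logprob > -3.0:
        code = "33"   # yellow: middling probability
    else:
        code = "31"   # red: surprising token
    return f"\033[{code}m{token}\033[0m"

print(colorize("mat", -0.2) + colorize(" flew", -4.5))
```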

You can also tell it to output a formatted json file with the -f json flag. This is useful for piping into other programs.

cll -f json "The cat sat on the"
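A downstream program can then read that JSON from its stdin. The schema below (an object with a list of completion strings) is an assumption for illustration; inspect cll's actual output to see the real shape.

```python
import json

# Suppose `cll -f json ...` printed something like this (assumed schema):
raw = '{"completions": ["mat.", "windowsill."]}'

data = json.loads(raw)
for completion in data["completions"]:
    print(completion)
```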

If you want to input more text freely, just run cll without a prompt and you can write or paste directly into stdin.

Chunking

If you dump in a tonne of text, it will try to chunk it up into smaller pieces:

cat song-of-myself.txt | cll -t poet -x 'poet=Notorious B.I.G.' > song_of_biggie.txt

(Note: this is an incredibly wasteful way to extract the text from a website, but at current prices it should only cost ~$0.30, so, unhinged as it is, it's probably about at parity with clicking and dragging.)
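Chunking can be pictured as greedily packing whole words into pieces no longer than a maximum size. The sketch below splits on characters for simplicity; the real implementation presumably counts model tokens and uses the --chunk_size and --summary_size options, so treat this as an assumption-laden illustration.

```python
def chunk_text(text: str, chunk_size: int) -> list[str]:
    """Greedily pack whole words into chunks of at most chunk_size chars."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > chunk_size and current:
            chunks.append(current)   # current chunk is full; start a new one
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

print(chunk_text("I celebrate myself and sing myself", 15))
# → ['I celebrate', 'myself and sing', 'myself']
```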

Models

Command Line Loom includes support for text generation with all the OpenAI models. Have a look at the model list with cll -l.

Contributing

If you find a bug or would like to contribute to Command Line Loom, please create a new GitHub issue or pull request.

Inspiration/Similar

Inspired by Loom (more to come on this front; aiming for a command-line Loom).

License

Command Line Loom is released under the MIT License. See LICENSE for more information.
