
The primary CLI of LLM-Toolbox facilitates direct interaction with ChatGPT from your terminal, offering customizable templates for message generation.

Project description

LMT: The CLI Tool for ChatGPT

lmt is the primary tool of the LLM-Toolbox and a versatile CLI interface tool that allows you to interact directly with OpenAI's ChatGPT models.

As a crucial component of the LLM-Toolbox, lmt exemplifies the toolbox's dedication to providing powerful and flexible tools that harness the capabilities of language models. It provides access to all available ChatGPT models, including gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, and gpt-4-32k.

Furthermore, lmt enables you to design and use your own templates, broadening its potential applications. The ability to read from stdin using pipes also allows for convenient integration with other command-line tools.

If you find this project beneficial, consider expressing your support by giving it a star ⭐😊.


Table of Contents

  1. Features
  2. Installation
    1. pip
    2. pipx, the Easy Way
    3. Installing lmt with pipx
    4. Cloning the lmt Repository
  3. Getting Started
    1. Configuring your OpenAI API key
  4. Usage
    1. Basic Example
    2. Add a Persona
    3. Switching Models
    4. Template Utilization
    5. Emoji Integration
    6. Prompt Cost Estimation
    7. Reading from stdin
    8. Output Redirection
  5. Theming Colors for Code Blocks
    1. Example
  6. License
  7. Upcoming Features

Features

  • Access All ChatGPT Models: lmt supports all available ChatGPT models (gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, gpt-4-32k), giving you the power to choose the most suitable one for your task.
  • Custom Templates: Design and use your personalized toolbox of templates to streamline and automate your workflow.
  • Read From stdin: Using pipes, lmt can read from stdin, enabling you to use file content as a prompt.
  • Command-Line & Template Requests: lmt offers the flexibility of making requests directly from the command line or using pre-designed templates.
  • Vim Integration: As a CLI tool, it can easily be integrated in Vim as a filter command.
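To illustrate the Vim integration, a visual selection can be piped through lmt as a filter command, replacing the selected text with the model's response. This is a sketch, assuming lmt is on your $PATH and an API key is configured; the prompt is only an example:

```vim
" Filter the current visual selection through lmt.
" The selection is sent on stdin and replaced by the model's output.
:'<,'>!lmt --system "Improve the wording of this text"
```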

Installation

pip

python3 -m pip install lmt-cli

pipx, the Easy Way

To install this tool, I recommend that you first install pipx. It's a package manager for Python that makes the installation and upgrade of CLI apps easy (no more hassle with virtual environments 😌).

  • Debian / Ubuntu

    sudo apt install pipx
    
  • macOS

    brew install pipx
    

Installing lmt with pipx

To install the latest stable version of lmt, simply run this command:

pipx install lmt-cli

If you want to follow the main branch:

pipx install git+https://github.com/sderev/lmt

To upgrade it:

pipx upgrade lmt-cli

Cloning the lmt Repository

You can clone this repository with the following command:

git clone https://github.com/sderev/lmt.git

Getting Started

Configuring your OpenAI API key

For lmt to work properly, you need to acquire and configure an OpenAI API key. Follow these steps:

  1. Acquire the OpenAI API key: You can do this by creating an account on the OpenAI website. Once registered, you will have access to your unique API key.

  2. Set usage limit: Before you start using the API, you need to define a usage limit. You can configure this in your OpenAI account settings by navigating to Billing -> Usage limits.

  3. Configure the OpenAI API key: Once you have your API key, you can set it up by running the lmt key set command.

    lmt key set
    

With these steps complete, your OpenAI API key is set up and ready for use with lmt.

Usage

The lmt CLI tool is equipped with a helpful --help flag that displays useful information about how to use the tool and its commands.

For a general overview of all commands, you can type:

lmt --help

If you want detailed information about a specific command, such as prompt, you can display its help message like so:

lmt prompt --help

Basic Example

The simplest way to use lmt is by entering a prompt for the model to respond to.

Here's a basic usage example where we ask the model to generate a greeting:

lmt "Say hello"

In this case, the model will generate and return a greeting based on the given prompt.

Add a Persona

You can also instruct the model to adopt a specific persona using the --system flag. This is useful when you want the model's responses to emulate a certain character or writing style.

Here's an example where we instruct the model to write like the philosopher Cioran:

lmt "Tell me what you think of large language models." \
        --system "You are Cioran. You write like Cioran."

In this case, the model will generate a response based on its understanding of Cioran's writing style and perspective.

Switching Models

Switching between different models is a breeze with lmt. Use the -m flag followed by the alias of the model you wish to employ.

lmt "Explain what is a large language model" -m 4

Below is a table outlining available model aliases for your convenience:

| Alias       | Corresponding Model |
|-------------|---------------------|
| chatgpt     | gpt-3.5-turbo       |
| chatgpt-16k | gpt-3.5-turbo-16k   |
| 3.5         | gpt-3.5-turbo       |
| 3.5-16k     | gpt-3.5-turbo-16k   |
| 4           | gpt-4               |
| gpt4        | gpt-4               |
| 4-32k       | gpt-4-32k           |
| gpt4-32k    | gpt-4-32k           |

For instance, if you want to use the gpt-4 model, simply include -m 4 in your command.

Template Utilization

Templates, stored in ~/.config/lmt/templates and written in YAML, can be generated using the following command:

lmt templates add

For help regarding the templates subcommand, use:

lmt templates --help

Here's an example of invoking a template named "cioran":

lmt "Tell me how AI will change the world." --template cioran

You can also use the shorter version: -t cioran.
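As a rough sketch of what a template file might contain — the field names below (system, user, model) are assumptions for illustration, not the documented schema; run `lmt templates add` to generate a real template interactively:

```yaml
# ~/.config/lmt/templates/cioran.yaml (hypothetical layout)
system: You are Cioran. You write like Cioran.
user: ""
model: gpt-4
```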

Emoji Integration

To infuse a touch of emotion into your requests, add the --emoji flag.

Prompt Cost Estimation

For an estimate of your prompt's cost before sending it, use the --tokens flag.
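The arithmetic behind such an estimate is simple: token count divided by 1,000, times the model's per-1K-token price. A minimal sketch of the calculation — the price below is illustrative only; check OpenAI's pricing page for current figures:

```shell
tokens=1500
price_per_1k=0.0015   # illustrative gpt-3.5-turbo input price, USD per 1K tokens
awk -v t="$tokens" -v p="$price_per_1k" 'BEGIN { printf "%.6f\n", t / 1000 * p }'
# prints 0.002250
```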

Reading from stdin

lmt facilitates reading inputs directly from stdin, allowing you to pipe in the content of a file as a prompt. This feature can be particularly useful when dealing with longer or more complex prompts, or when you want to streamline your workflow by incorporating lmt into a larger pipeline of commands.

To use this feature, you simply need to pipe your content into the lmt command like this:

cat your_file.txt | lmt

In this example, lmt would use the content of your_file.txt as the input for the prompt command.

Also, remember that you can still use all other command line options with stdin. For instance, you might run:

cat your_file.py | lmt \
        --system "You explain code in the style of \
        a fast-talkin' wise guy from a 1940's gangster movie" \
        -m 4 --emoji

In this example, lmt takes the content of your_file.py as the input for the prompt command. With the gpt-4 model selected via -m 4, the system is instructed to respond in the style of a fast-talking wiseguy from a 1940s gangster movie, as specified in the --system option. The --emoji flag indicates that the response may include emojis for added expressiveness.
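Because lmt reads stdin, it slots naturally into larger shell pipelines. A hypothetical example (assuming lmt is installed and a key is configured; the prompt and filename are illustrative):

```shell
# Draft a commit message from the staged diff, redirecting the answer to a file.
git diff --staged | lmt --system "Write a concise commit message for this diff" > commit_msg.txt
```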

Output Redirection

You can use output redirection with lmt. For instance:

lmt "List 5 Wikipedia articles" --raw > wiki_articles.md

Theming Colors for Code Blocks

After you have run lmt once, a configuration file (~/.config/lmt/config.json) is created, in which you can configure the colors for inline code and code blocks.

Code blocks can be themed with any of the Pygments styles: https://pygments.org/styles/

As for inline code, it can be styled with the 256 terminal colors, specified by name or hexadecimal code.

Example

{
    "code_block_theme": "autumn",
    "inline_code_theme": "blue on #f0f0f0"
}

License

lmt is licensed under the Apache License, version 2.0.

Upcoming Features

While lmt already boasts a wide array of features, development is ongoing. Expect more features and improvements in the future!


https://github.com/sderev/lmt
