# LMterminal (`lmt`)

LMterminal (`lmt`) is a CLI tool that enables you to interact directly with OpenAI's ChatGPT models from the comfort of your terminal.
## Features

- **Access All ChatGPT Models:** `lmt` supports all available ChatGPT models (`gpt-3.5-turbo`, `gpt-3.5-turbo-16k`, `gpt-4`, `gpt-4-32k`), giving you the power to choose the most suitable one for your task.
- **Custom Templates:** Design your own toolbox of templates to streamline your workflow.
- **Read Files:** Incorporate file content into your prompts seamlessly.
- **Output to a File:** Redirect standard output (`stdout`) to a file or another program as needed.
- **Easy Vim Integration:** Integrate ChatGPT into Vim effortlessly by using `lmt` as a filter command.
## Installation

### pip

```shell
python3 -m pip install LMterminal
```

### pipx, the Easy Way

```shell
pipx install LMterminal
```
## Getting Started

### Configuring your OpenAI API key

For `lmt` to work properly, you need to acquire and configure an OpenAI API key. Follow these steps:

1. **Acquire the OpenAI API key:** Create an account on the OpenAI website. Once registered, you will have access to your unique API key.
2. **Set a usage limit:** Before you start using the API, define a usage limit. You can configure this in your OpenAI account settings by navigating to Billing -> Usage limits.
3. **Configure the OpenAI API key:** Once you have your API key, set it up by running:

   ```shell
   lmt key set
   ```

With these steps, your OpenAI API key is set up and ready for use with `lmt`.
## Usage

### Basic Example

The simplest way to use `lmt` is to enter a prompt for the model to respond to.

Here's a basic usage example where we ask the model to generate a greeting:

```shell
lmt "Say hello"
```

In this case, the model will generate and return a greeting based on the given prompt.
### Add a Persona

You can also instruct the model to adopt a specific persona using the `--system` flag. This is useful when you want the model's responses to emulate a certain character or writing style.

Here's an example where we instruct the model to write like the philosopher Cioran:

```shell
lmt "Tell me what you think of large language models." \
--system "You are Cioran. You write like Cioran."
```

In this case, the model will generate a response based on its understanding of Cioran's writing style and perspective.
### Switching Models

Switching between different models is a breeze with `lmt`. Use the `-m` flag followed by the alias of the model you wish to employ:

```shell
lmt "Explain what a large language model is" -m 4
```

Below is a table of the available model aliases for your convenience:

| Alias | Corresponding Model |
|---|---|
| `chatgpt` | `gpt-3.5-turbo` |
| `chatgpt-16k` | `gpt-3.5-turbo-16k` |
| `3.5` | `gpt-3.5-turbo` |
| `3.5-16k` | `gpt-3.5-turbo-16k` |
| `4` | `gpt-4` |
| `gpt4` | `gpt-4` |
| `4-32k` | `gpt-4-32k` |
| `gpt4-32k` | `gpt-4-32k` |

For instance, if you want to use the `gpt-4` model, simply include `-m 4` in your command.
### Template Utilization

Templates are stored in `~/.config/lmt/templates` and written in YAML. You can generate one using the following command:

```shell
lmt templates add
```

For help regarding the `templates` subcommand, use:

```shell
lmt templates --help
```

Here's an example of invoking a template named "cioran":

```shell
lmt "Tell me how AI will change the world." --template cioran
```

You can also use the shorter version: `-t cioran`.
### Emoji Integration

To infuse a touch of emotion into your requests, append the `--emoji` flag.
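For example (the prompt text here is just an illustration):

```shell
lmt "Congratulate me on finishing my first CLI project!" --emoji
```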
### Prompt Cost Estimation

To estimate your prompt's cost before sending it, use the `--tokens` flag.
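For example (again, the prompt text is illustrative):

```shell
lmt "Summarize the history of Unix in three sentences." --tokens
```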
### Reading from stdin

`lmt` facilitates reading input directly from `stdin`, allowing you to pipe in the content of a file as a prompt. This feature can be particularly useful when dealing with longer or more complex prompts, or when you want to streamline your workflow by incorporating `lmt` into a larger pipeline of commands.

To use this feature, simply pipe your content into the `lmt` command like this:

```shell
cat your_file.txt | lmt
```

In this example, `lmt` uses the content of `your_file.txt` as the prompt.
Also, remember that you can still use all other command-line options with `stdin`. For instance, you might run:

```shell
cat your_file.py | lmt \
--system "You explain code in the style of \
a fast-talkin' wise guy from a 1940's gangster movie" \
-m 4 --emoji
```

In this example, `lmt` takes the content of `your_file.py` as the prompt. The `gpt-4` model is selected via `-m 4`, the `-s/--system` option instructs the model to respond in the style of a fast-talking wiseguy from a 1940s gangster movie, and the `--emoji` flag indicates that the response may include emojis for added expressiveness.
### Append an Additional Prompt to Piped stdin

Beyond the `-s/--system` option, `lmt` offers the capability to append an additional user prompt when reading from `stdin`. This is especially useful when you want to add context or specific instructions to the piped input without altering the system prompt.

For example, with a `grocery_list.txt` file, you can append a prompt asking for healthy alternatives and set the system prompt to guide the AI's chef-like response:

```shell
cat grocery_list.txt | lmt "What are some healthy alternatives to these items?" \
--system "You are a chef with a focus on healthy and sustainable cooking."
```
### Output Redirection

You can redirect `lmt`'s output to a file or pipe it into another program. For instance:

```shell
lmt "List 5 Wikipedia articles" > wiki_articles.md
```
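Because `lmt` writes to `stdout`, the response can equally be piped into another command; for example, a quick word count of the reply (the prompt is illustrative):

```shell
lmt "List 5 Wikipedia articles" | wc -w
```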
### Using `lmt` as a Vim Filter Command

To invoke `lmt` as a filter command in Vim, you can use the command `:.!lmt`. Remember, Vim offers the shortcut `!!` as a quick way to enter `:.!`. This means you can simply type `!!lmt` to initiate your prompt.

Example: `:.!lmt write an implementation of binary search`

Additionally, you can filter specific lines from your text and pass them as a prompt to `lmt`. To achieve this, highlight the desired lines in `VISUAL` mode (or use `ex` range syntax), then type `:` (Vim prefills the range as `:'<,'>`) followed by `!lmt "Your additional prompt here"`.
### Theming Colors for Code Blocks

Once you have run `lmt`, a configuration file (`~/.config/lmt/config.json`) is created, in which you can configure the colors for inline code and code blocks.

The available styles for code blocks are listed at https://pygments.org/styles/.

As for inline code, it can be styled with the 256 terminal colors (by name or hexadecimal code).

#### Example

```json
{
  "code_block_theme": "default",
  "inline_code_theme": "blue on #f0f0f0"
}
```
## License

`lmt` is licensed under the Apache License, version 2.0.