qprom a.k.a Quick Prompt - ChatGPT CLI
A Python-based CLI tool to quickly interact with OpenAI's GPT models instead of relying on the web interface.
Description
qprom is a small project that lets you interact with OpenAI's GPT-4 and GPT-3.5 chat APIs quickly, without having to use the web UI. This enables quicker response times and better data privacy.
Installation
pip install qprom
Setup
Make sure you have your OpenAI API key.
When running qprom, the script tries to fetch the OpenAI API key from a credentials file located in the .qprom folder within the user's home directory.
If the API key is not found in the credentials file, the user is prompted to provide it, and the provided key is then stored in that credentials file for future use.
Usage
Argument | Type | Default | Choices | Description | Optional
---|---|---|---|---|---
-p | String | None | None | Directly provide your prompt (do not use this flag if you intend to enter a multi-line prompt) | yes
-m | String | gpt-3.5-turbo | gpt-3.5-turbo, gpt-4, ... | Select the model for this run | yes
-M | String | gpt-3.5-turbo | gpt-3.5-turbo, gpt-4, ... | Set the default model | yes
-t | Float | 0.3 | Between 0 and 2 | Configure the temperature | yes
-v | Boolean | False | None | Enable verbose mode | yes
-c | Boolean | False | None | Enable conversation mode | yes
-tk | String | 6500 | None | Set the token limit for the current prompt/conversation | yes
-TK | String | 6500 | None | Set the default token limit | yes
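As a rough illustration of how the flags above could map onto a command-line parser, here is a sketch using Python's standard argparse module (illustrative only; this is not qprom's actual parser):

```python
# Illustrative argparse setup mirroring the flag table above.
# Flag names and defaults are taken from the table; everything else is assumed.
import argparse

parser = argparse.ArgumentParser(prog="qprom")
parser.add_argument("-p", type=str, default=None, help="prompt text (single-line)")
parser.add_argument("-m", type=str, default="gpt-3.5-turbo", help="model for this run")
parser.add_argument("-M", type=str, help="set the default model")
parser.add_argument("-t", type=float, default=0.3, help="temperature, between 0 and 2")
parser.add_argument("-v", action="store_true", help="enable verbose mode")
parser.add_argument("-c", action="store_true", help="enable conversation mode")
parser.add_argument("-tk", type=int, help="token limit for this run")
parser.add_argument("-TK", type=int, help="set the default token limit")

args = parser.parse_args(["-p", "hello", "-t", "0.7", "-v"])
```

Flags not passed on the command line fall back to the defaults listed in the table.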
qprom -p <prompt> -m <model> -t <temperature> -v -c
- <prompt>: Replace with your prompt
- <model>: Replace with either gpt-3.5-turbo or gpt-4
- <temperature>: Replace with a float value between 0 and 2
- -v: Add this flag to enable verbose mode
- -c: Add this flag to enable conversation mode
For example:
qprom -p "Translate the following English text to French: '{text}'" -m gpt-4 -t 0.7 -v
This will run the script with the provided prompt, using the gpt-4 model, a temperature of 0.7, and verbose mode enabled.
Multi line prompting
To enter a multi-line prompt, invoke qprom without the -p parameter. It will then ask for your input at runtime, where you can provide as many lines as needed. To signal the end of your input, simply enter the string 'END' on its own line.
qprom
This will run qprom with the default values (model: gpt-3.5-turbo, temperature: 0.3) and ask for the prompt at runtime.
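Reading lines until an 'END' sentinel, as described above, could be sketched like this (an assumption about how such input collection might look, not qprom's actual implementation):

```python
# Minimal sketch of collecting a multi-line prompt until an 'END' sentinel.
# The function name is hypothetical; qprom's real code may differ.
def read_multiline_prompt() -> str:
    """Collect input lines until a line containing only 'END' (or EOF)."""
    lines = []
    while True:
        try:
            line = input()
        except EOFError:
            break  # e.g. Ctrl-D ends input as well
        if line.strip() == "END":
            break
        lines.append(line)
    return "\n".join(lines)
```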
Set default model
qprom -M <model-name>
Set token limit for prompt/conversation
qprom -tk <token-limit>
Set default token limit
qprom -TK <token-limit>
Piping console input into qprom
Just pipe the prompt into qprom.
cat prompt.txt | qprom
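A CLI can tell piped input apart from an interactive terminal by checking whether stdin is a TTY; a minimal sketch of that pattern (assumed behaviour, not taken from qprom's source):

```python
# Sketch of detecting piped stdin vs. an interactive terminal.
# The function name and prompt text are hypothetical.
import sys

def read_prompt() -> str:
    if not sys.stdin.isatty():
        # stdin is a pipe or file, e.g. `cat prompt.txt | qprom`.
        return sys.stdin.read().strip()
    return input("Enter your prompt: ")
```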
Todos
- Add option to set default temperature
- Add option to re-set the API token
- Testing
- Add option to disable streaming and only print the full response
Bug reports:
License
MIT
Support me :heart: :star: :money_with_wings:
If this project provided value, and you want to give something back, you can give the repo a star or support by buying me a coffee.
Project details
Release history
Download files
Source Distribution
File details
Details for the file qprom-0.5.6.tar.gz.
File metadata
- Download URL: qprom-0.5.6.tar.gz
- Upload date:
- Size: 9.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 1fb914f03f6e541f7626a8fda6047ff60a59f42785c7883d02a2022dce2f9ddc
MD5 | 14b9bced1f2872f32d5aefc1e119825a
BLAKE2b-256 | a5b6223942c330bfc5dac553f60597508ee2736d05530bd02a05b4c0d15d9262