qprom, a.k.a. Quick Prompt
A Python-based CLI tool to quickly interact with OpenAI's GPT models instead of relying on the web interface.
Description
qprom is a small project that lets you interact with OpenAI's GPT-4 and GPT-3.5 chat APIs quickly, without having to use the web UI. This enables quicker response times and better data privacy.
Installation
pip install qprom
Setup
Make sure you have your OpenAI API key.
When running qprom, the script tries to fetch the OpenAI API key from a credentials file located in the .qprom folder within the user's home directory.
If the API key is not found in the credentials file, the user is prompted to provide it, and the provided key is then stored in the aforementioned credentials file for future use.
Usage
Argument | Type | Default | Choices | Description | Optional
---|---|---|---|---|---
-p | String | None | None | Option to directly enter your prompt (do not use this flag if you intend to have a multi-line prompt) | yes
-m | String | gpt-4 | gpt-3.5-turbo, gpt-4 | Option to select the model | yes
-t | Float | 0.3 | Between 0 and 2 | Option to configure the temperature | yes
-v | Boolean | False | None | Enable verbose mode | yes
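The flags above map naturally onto Python's standard argparse module. A minimal sketch of a matching parser (an illustration only; qprom's real argument handling may differ in detail):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical parser mirroring the documented flags and defaults.
    parser = argparse.ArgumentParser(prog="qprom")
    parser.add_argument("-p", type=str, default=None,
                        help="prompt text (omit for multi-line input)")
    parser.add_argument("-m", type=str, default="gpt-4",
                        choices=["gpt-3.5-turbo", "gpt-4"],
                        help="model to use")
    parser.add_argument("-t", type=float, default=0.3,
                        help="sampling temperature between 0 and 2")
    parser.add_argument("-v", action="store_true",
                        help="enable verbose mode")
    return parser
```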
Usage
qprom -p <prompt> -m <model> -t <temperature> -v
- <prompt>: Replace with your prompt
- <model>: Replace with either gpt-3.5-turbo or gpt-4
- <temperature>: Replace with a float value between 0 and 2
- -v: Add this flag to enable verbose mode
For example:
qprom -p "Translate the following English text to French: '{text}'" -m gpt-4 -t 0.7 -v
This will run the script with the provided prompt, using the gpt-4 model, a temperature of 0.7, and verbose mode enabled.
Multi line prompting
To facilitate multi-line input for the prompt, invoke qprom without utilizing the -p parameter. This will prompt you for your input at runtime, where you can provide multiple lines as needed. To signal the end of your input, simply enter the string 'END'.
qprom
This will run qprom with the default values (model gpt-4, temperature 0.3) and ask for the prompt at runtime.
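Collecting input until an 'END' sentinel is straightforward to sketch. The helper below is hypothetical (the real qprom reads from stdin); it shows the loop structure such a feature implies:

```python
def read_multiline_prompt(lines) -> str:
    """Collect input lines until the sentinel 'END' is seen.

    Hypothetical helper illustrating the multi-line prompting described
    above; in practice the lines would come from sys.stdin.
    """
    collected = []
    for line in lines:
        if line.strip() == "END":
            break  # sentinel reached: stop reading
        collected.append(line.rstrip("\n"))
    return "\n".join(collected)
```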
Todos
- Testing
- Add conversation mode
- Add option to select default model in config
- Add option to re-set the API token
- Add option to set the token limit for the conversation mode's history
- Add option to disable streaming and only print the full response
Bug reports:
License
MIT
Support me :heart: :star: :money_with_wings:
If this project provided value, and you want to give something back, you can give the repo a star or support by buying me a coffee.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file qprom-0.5.2.tar.gz.
File metadata
- Download URL: qprom-0.5.2.tar.gz
- Upload date:
- Size: 8.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 02f12307a75416a53fb4789044ed8f28b7968aa9c4c427b85b34062062cb26ee
MD5 | 8c953eb7d88873924bdf99a793c68a82
BLAKE2b-256 | b43f0cc734c4155ba292920a6a62f2b6b5e0b68d8d0a30ece13ac638efc193c3
File details
Details for the file qprom-0.5.2-py3-none-any.whl.
File metadata
- Download URL: qprom-0.5.2-py3-none-any.whl
- Upload date:
- Size: 7.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 66d883c85dd4e479d3d7e52587fa9d5a1b638f7883e458717a5635fa868ea645
MD5 | 4080a8aa25d7b97dbba33e05128d0c8f
BLAKE2b-256 | 53de12c8eee4d3200b3e7fff4802acd257bfb857493df1f1faf62c907fc05e6e