chatgpt-prompt-wrapper
Python CLI implementation for ChatGPT.
Requirements
- Python 3.9, 3.10, 3.11
- Poetry (for development)
Installation
$ pip install chatgpt-prompt-wrapper
Usage
Command line interface
$ cg help
usage: cg [-h] [-k KEY] [-c CONF] [-m MODEL] [-t TOKENS] [-l LIMIT] [--show] [--hide] [--multiline]
[--no_multiline] [--show_cost]
subcommand [message ...]
positional arguments:
subcommand Subcommand to run. Use 'commands' subcommand to list up available subcommands.
message Message to send to ChatGPT
optional arguments:
-h, --help show this help message and exit
-k KEY, --key KEY OpenAI API key.
-c CONF, --conf CONF Path to the configuration toml file.
-m MODEL, --model MODEL
ChatGPT Model to use.
-t TOKENS, --tokens TOKENS
The maximum number of tokens to generate in the chat completion. Set 0 to use
the max values for the model minus prompt tokens.
-l LIMIT, --limit LIMIT
The limit of the total tokens of the prompt and the completion. Set 0 to use
the max values for the model.
--show Show prompt for ask command.
--hide Hide prompt for ask command.
--multiline Use multiline input for chat command.
--no_multiline Use single line input for chat command.
--show_cost Show cost used.
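The interplay of -t/--tokens and -l/--limit can be sketched as simple arithmetic. The helper below is hypothetical and only illustrates the documented behavior of passing 0:

```python
# Hypothetical helper illustrating the documented behavior of --tokens 0:
# the completion budget becomes the model's max tokens minus the prompt tokens.
def completion_budget(max_tokens: int, model_limit: int, prompt_tokens: int) -> int:
    """Return the number of tokens available for the chat completion."""
    if max_tokens == 0:
        # 0 means "use the max value for the model minus prompt tokens".
        return model_limit - prompt_tokens
    return max_tokens

# e.g. a 4096-token model limit and a 100-token prompt
print(completion_budget(0, 4096, 100))    # -> 3996
print(completion_budget(500, 4096, 100))  # -> 500
```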
$ cg commands
Available subcommands:
Reserved commands:
init : Initialize config file with an example command.
cost : Show estimated cost used until now.
commands : List up subcommands (show this).
version : Show version.
help : Show help.
User commands:
ask : Ask a question w/o predefined prompt.
test : Example command to test the OpenAI API.
...
Configuration file
File path
The default path to the configuration file is $XDG_CONFIG_HOME/cg/config.toml.
If $XDG_CONFIG_HOME is not defined, ~/.config/cg/config.toml is used.
If it does not exist and ~/.cg/config.toml exists, the existing file is used.
You can change the path with the -c <file> (--conf <file>) option.
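The lookup order above can be sketched in Python. This is an illustration of the documented rules only, not the tool's actual implementation:

```python
import os
from pathlib import Path

def config_path() -> Path:
    """Resolve the config file following the documented lookup order (sketch)."""
    xdg = os.environ.get("XDG_CONFIG_HOME")
    base = Path(xdg) if xdg else Path.home() / ".config"
    default = base / "cg" / "config.toml"
    legacy = Path.home() / ".cg" / "config.toml"
    # Fall back to ~/.cg/config.toml only when the default file is missing.
    if not default.exists() and legacy.exists():
        return legacy
    return default
```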
How to write the configuration file
The configuration file is written in the TOML format.
Subcommand is defined as the top table name.
The options for each table can be:
- description : Description of the command.
- chat : Set true to make the command chat mode (default is ask mode, only one exchange).
- show_cost : Set true to show the cost at the end of the command.
- model : The model to use. (default: "gpt-3.5-turbo")
- max_tokens : The maximum number of tokens to generate in the chat completion. Set 0 to use the max value for the model. (default: 0)
- tokens_limit : The limit of the total tokens of the prompt and the completion. Set 0 to use the max value for the model. (default: 0)
- temperature : Sampling temperature (0 ~ 2). (default: 1)
- top_p : Probability (0 ~ 1) that the model will consider the top_p tokens. Do not set both temperature and top_p at the same time. (default: 1)
- presence_penalty : The penalty for the model to return the same token (-2 ~ 2). (default: 0)
- frequency_penalty : The penalty for the model to return the same token multiple times (-2 ~ 2). (default: 0)
- messages : List of message dictionaries. Each message must have role ('system', 'user' or 'assistant') and content (message text).
The options for ask mode:
- show : Set true to show the prompt for the ask-mode command.
- hide : Set true to hide the prompt for the ask-mode command (default).
The options for chat mode:
- multiline : Set true to use multiline input for the chat command (default).
- no_multiline : Set true to use single-line input for the chat command.
Here is an example configuration (when you execute cg init for the first time, this configuration file is created).
[ask]
description = "Ask a question w/o predefined prompt."
[test]
# Example command to test the OpenAI API, taken from below.
# [Chat completion - OpenAI API](https://platform.openai.com/docs/guides/chat/introduction)
description = "Example command to test the OpenAI API."
show = true
[[test.messages]]
role = "system"
content = "You are a helpful assistant."
[[test.messages]]
role = "user"
content = "Who won the world series in 2020?"
[[test.messages]]
role = "assistant"
content = "The Los Angeles Dodgers won the World Series in 2020."
[[test.messages]]
role = "user"
content = "Where was it played?"
[sh]
description = "Ask a shell scripting question."
[[sh.messages]]
role = "user"
content = "You are an expert of the shell scripting. Answer the following questions."
[py]
description = "Ask a python programming question."
[[py.messages]]
role = "user"
content = "You are an expert python programmer. Answer the following questions."
[chat]
description = "Chat with the assistant."
chat = true
[[chat.messages]]
role = "user"
content = "Let's enjoy a chat."
These messages will be sent as a prompt before your input message.
You can also put full questions in the configuration and use cg without an input message, like the test example command above.
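Conceptually, the configured messages and your command-line message are combined into one list in the OpenAI chat format. A minimal sketch (the helper name is made up for illustration):

```python
# Sketch: prepend a subcommand's configured messages to the user's input,
# producing a messages list in the OpenAI chat format.
def build_messages(predefined: list[dict], user_input: str) -> list[dict]:
    messages = list(predefined)  # copy so the configured list is not mutated
    if user_input:
        messages.append({"role": "user", "content": user_input})
    return messages

# The [sh] command's configured message from the example above:
sh_prompt = [{"role": "user",
              "content": "You are an expert of the shell scripting. "
                         "Answer the following questions."}]
print(build_messages(sh_prompt, "How do I loop over files?"))
```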
Command examples:
- test
- sh
- py
- chat
Development
Poetry
Use Poetry to setup environment.
To install poetry, run:
$ pip install poetry
or use pipx (x is 3 or anything of your Python version).
Setup poetry environment:
$ poetry install
Then enter the environment:
$ poetry shell
pre-commit
To check the code at commit time, use pre-commit.
The pre-commit command will be installed in the poetry environment.
First, run:
$ pre-commit install
Then pre-commit will run at each commit.
Sometimes, you may want to skip the check. In that case, run:
$ git commit --no-verify
You can run pre-commit on the entire repository manually:
$ pre-commit run -a
pytest
Tests are written with pytest.
Write tests in the /tests directory.
To run tests, run:
$ pytest
The default setting runs tests in parallel with -n auto.
To run tests serially, run:
$ pytest -n 0
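A minimal example of what a test file under /tests might look like (the function under test is a stand-in, not part of this project):

```python
# tests/test_example.py (hypothetical example test)
def add(a: int, b: int) -> int:
    """Stand-in function under test."""
    return a + b

def test_add() -> None:
    # pytest collects functions named test_* and runs their assertions.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
```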
GitHub Actions
If you push the repository to GitHub, GitHub Actions will run a test job.
The job runs on pull requests, too.
It checks the code with pre-commit and runs tests with pytest.
It also makes a test coverage report and uploads it to the coverage branch.
You can see the test status as a badge in the README.
Renovate
If you want to update dependencies automatically, install Renovate into your repository.