Token management system for ChatGPT and other OpenAI models. Keeps your prompt under the token limit, with summary support.
Project description
PromptHandler
Focus on helping people, not on managing prompts.
Manages tokens (stays within the limit), removes older messages automatically, and summarizes.
Installation
pip install prompthandler
Usage
An example code to make the model chat with you in terminal
from prompthandler import PromptHandler

model = PromptHandler()
model.add_system("You are now user's girlfriend so take care of him", to_head=True)  # Adds this message to the head. The head is never rolled out, so it always stays in the prompt.
model.add_user("Hi")
model.chat()  # Chat with the model in the terminal
For more examples, see examples.ipynb. Example projects: my silicon version (github_repo).
Behind the scenes
models.py
openai_chat_gpt class
This class represents the interaction with the GPT-3.5-turbo model (or other OpenAI models). It provides methods for generating completions for given messages and managing the conversation history.
Attributes:
- api_key (str): The OpenAI API key.
- model (str): The name of the OpenAI model to use.
- MAX_TOKEN (int): The maximum number of tokens allowed for the generated completion.
- temperature (float): The temperature parameter controlling the randomness of the output.
Methods:
- __init__(self, api_key=None, MAX_TOKEN=4096, temperature=0, model="gpt-3.5-turbo-0613"): Initializes the OpenAI chat model with the provided settings.
- get_completion_for_message(self, message, temperature=None): Generates a completion for a given message using the specified OpenAI model.
  - message (list): List of messages representing the conversation history.
  - temperature (float): Controls the randomness of the output. If not provided, the default temperature is used.
  Returns a tuple containing the completion generated by the model and the number of tokens used by the completion.
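For illustration, the `message` argument follows the standard OpenAI chat format: a list of role/content dictionaries. This is a minimal self-contained sketch of that structure (not code from the library itself):

```python
# The message list expected by get_completion_for_message: the standard
# OpenAI chat format, a list of {"role", "content"} dictionaries.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello! How can I help?"},
]

# Each entry pairs a role (system/user/assistant) with its text content.
roles = [m["role"] for m in messages]
print(roles)  # → ['system', 'user', 'assistant']
```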
prompts.py
PromptHandler class
This class represents a conversation prompt history for interacting with the GPT-3.5-turbo model (or other OpenAI models). It extends the openai_chat_gpt class and provides additional methods for handling prompts, headers, and body messages.
Attributes (in addition to openai_chat_gpt attributes):
- headers (list): List of header messages in the conversation history.
- body (list): List of body messages in the conversation history.
- head_tokens (int): Total tokens used in the headers.
- body_tokens (int): Total tokens used in the body.
- tokens (int): Total tokens used in the entire message history.
Methods (in addition to openai_chat_gpt methods):
- __init__(self, MAX_TOKEN=4096, api_key=None, temperature=0, model="gpt-3.5-turbo-0613"): Initializes the PromptHandler with the specified settings.
- get_completion(self, message='', update_history=True, temperature=None): Generates a completion for the conversation history.
  - message (str): The user's message to be added to the history.
  - update_history (bool): Flag to update the conversation history.
  - temperature (float): Controls the randomness of the output.
  Returns a tuple containing the completion generated by the model and the number of tokens used by the completion.
- chat(self, update_history=True, temperature=None): Starts a conversation with the model. Accepts terminal input and prints the model's responses.
  - update_history (bool): Flag to update the conversation history.
  - temperature (float): Controls the randomness of the output.
- update_messages(self): Combines the headers and body messages into a single message history. Returns the combined list of messages representing the conversation history.
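The header/body split can be sketched as follows. This is hypothetical code mirroring the documented behaviour, not the library's source: headers are pinned messages that always come first, and the body is the rolling conversation.

```python
# Illustrative sketch: pinned headers plus rolling body.
headers = [{"role": "system", "content": "You are a helpful assistant."}]
body = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]

def update_messages(headers, body):
    # The combined history simply places the pinned headers first.
    return headers + body

messages = update_messages(headers, body)
print(len(messages))  # → 3
```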
- update_tokens(self): Updates the count of tokens used in the headers, body, and entire message history. Returns a tuple containing the total tokens used, tokens used in the headers, and tokens used in the body.
- calibrate(self, MAX_TOKEN=None): Calibrates the message history by removing older messages if the total token count exceeds MAX_TOKEN.
  - MAX_TOKEN (int): The maximum number of tokens allowed for the generated completion.
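The calibration idea can be sketched like this: drop the oldest body messages until the history fits under MAX_TOKEN, never touching the headers. The token counter below is a crude character-based stand-in (roughly 4 characters per token), not the library's real tokenizer, and the function itself is illustrative rather than the actual implementation:

```python
# Crude stand-in for a real tokenizer: ~4 characters per token,
# plus a small per-message overhead.
def count_tokens(messages):
    return sum(len(m["content"]) // 4 + 4 for m in messages)

def calibrate(headers, body, max_token):
    # Remove the oldest body messages first; headers are never removed.
    while body and count_tokens(headers + body) > max_token:
        body.pop(0)
    return body

headers = [{"role": "system", "content": "Stay concise."}]
body = [{"role": "user", "content": "x" * 400},
        {"role": "user", "content": "short question"}]
body = calibrate(headers, body, max_token=50)
print(len(body))  # → 1 (the long oldest message was dropped)
```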
- add(self, role, content, to_head=False): Adds a message to the message history.
  - role (str): The role of the message (user, assistant, etc.).
  - content (str): The content of the message.
  - to_head (bool): Whether the message should be appended to the headers list. If False, it is appended to the body list.
  Returns the last message in the message history.
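The routing done by add() can be sketched as follows. This is hypothetical code mirroring the documented behaviour, not the library's source: to_head=True pins a message in the headers, otherwise it goes to the rolling body.

```python
# Illustrative sketch of add() routing between headers and body.
headers, body = [], []

def add(role, content, to_head=False):
    message = {"role": role, "content": content}
    (headers if to_head else body).append(message)
    return message  # the message just added, i.e. the last in the history

add("system", "You are a helpful assistant.", to_head=True)
add("user", "Hi")
print(len(headers), len(body))  # → 1 1
```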
- append(self, content_list): Appends a list of messages to the message history.
  - content_list (list): List of messages to be appended.
- get_last_message(self): Returns the last message in the message history, as a dictionary containing the role and content of the message.
- get_token_for_message(self, messages, model_name="gpt-3.5-turbo-0613"): Returns the number of tokens used by a list of messages.
  - messages (list): List of messages to count tokens for.
  - model_name (str): The name of the OpenAI model used for token encoding.
  Returns the number of tokens used by the provided list of messages.
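Token counting for chat messages typically follows the pattern from OpenAI's published guidance: each message carries a small fixed framing overhead plus the encoded length of its fields, and the reply primer adds a few more tokens. The sketch below uses a crude character-based estimate in place of a real BPE encoder so it stays self-contained; the actual library presumably uses a proper tokenizer:

```python
# Rough sketch of per-message token accounting in the OpenAI chat format.
# A real implementation would use a BPE encoder; here 1 token ~= 4 chars.
def rough_encode_len(text):
    return max(1, len(text) // 4)

def get_token_for_message(messages):
    tokens_per_message = 4  # per-message framing overhead
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for value in message.values():
            num_tokens += rough_encode_len(value)
    num_tokens += 3  # every reply is primed with a few extra tokens
    return num_tokens

msgs = [{"role": "user", "content": "Hi"}]
print(get_token_for_message(msgs))  # → 9
```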
File details
Details for the file prompthandler-0.3.3.tar.gz.
File metadata
- Download URL: prompthandler-0.3.3.tar.gz
- Upload date:
- Size: 6.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.17
File hashes
Algorithm | Hash digest
---|---
SHA256 | 2ca8177b4ef21215067e56769b511668ef6d766269d8e80dff0eadc8416db90c
MD5 | 25e714d58cb382fdb11d3670fff2941e
BLAKE2b-256 | f129efeafb854477075fd947ecafe14c6f6466dd42671c53989f2c4638e4ed34
File details
Details for the file prompthandler-0.3.3-py3-none-any.whl.
File metadata
- Download URL: prompthandler-0.3.3-py3-none-any.whl
- Upload date:
- Size: 8.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.9.17
File hashes
Algorithm | Hash digest
---|---
SHA256 | f5ce6d1183bd91efaae3ebfa33a8647a27f92fce487c3303e5f77bb9b11d8af8
MD5 | e93fc118b81a3bfee2c8bc9a63f729b6
BLAKE2b-256 | e0ca962dce5b40ff66eda4313e644174d51f805c9989c7c5c4cb5dd17aa2fa6f