
Token management system for ChatGPT and more. Keeps your prompt under the token limit, with summary support.

Project description

PromptHandler

Focus on helping people, not on managing prompts.

Manages tokens (keeps the history within the limit), removes older messages automatically, and summarizes.

Installation

pip install prompthandler

Usage

An example code to make the model chat with you in terminal

from prompthandler import PromptHandler
model = PromptHandler()
model.add_system("You are now user's girlfriend so take care of him", to_head=True) # Pins this message in the head. The head is never rolled off, so it stays constant
model.add_user("Hi")
model.chat() # chat with the model in the terminal

For more examples, see examples.ipynb. Example project: "my silicon version" (see the GitHub repo).
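To make the head/body distinction above concrete, here is a minimal sketch of what the conversation history looks like after those calls. This models PromptHandler's internal state with plain lists and dicts; it is illustrative, not the library's actual internals.

```python
# Hypothetical minimal model of PromptHandler's state after the calls above:
# add_system(..., to_head=True) pins the message in the headers list,
# while add_user(...) appends to the rolling body.
headers = [{"role": "system", "content": "You are now user's girlfriend so take care of him"}]
body = [{"role": "user", "content": "Hi"}]

# What update_messages() would combine and send to the API:
messages = headers + body
print(messages[0]["role"])  # the pinned system message always stays first
```

Because the head is never rolled off, the system message survives even when older body messages are trimmed to stay under the token limit.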

Behind the scenes

models.py

openai_chat_gpt class

This class represents the interaction with the GPT-3.5-turbo model (or other OpenAI models). It provides methods for generating completions for given messages and managing the conversation history.

Attributes:

  • api_key (str): The OpenAI API key.
  • model (str): The name of the OpenAI model to use.
  • MAX_TOKEN (int): The maximum number of tokens allowed for the generated completion.
  • temperature (float): The temperature parameter controlling the randomness of the output.

Methods:

  1. __init__(self, api_key=None, MAX_TOKEN=4096, temperature=0, model="gpt-3.5-turbo-0613"): Initializes the OpenAI chat model with the provided settings.

  2. get_completion_for_message(self, message, temperature=None): Generates a completion for a given message using the specified OpenAI model.

    • message (list): List of messages representing the conversation history.
    • temperature (float): Controls the randomness of the output. If not provided, the default temperature is used.

    Returns a tuple containing the completion generated by the model and the number of tokens used by the completion.
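The call shape can be sketched without hitting the API. The stand-in function below is hypothetical: it mirrors the documented signature and the (completion, tokens_used) return tuple, using a canned reply and a crude word count in place of the model's response and real token count.

```python
# The message list uses the OpenAI chat format: a list of
# {"role", "content"} dicts (illustrative values).
message = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

def fake_get_completion_for_message(message, temperature=None):
    """Hypothetical stand-in for get_completion_for_message: returns a
    canned reply and a word-count proxy for the token count, no API call."""
    reply = "Hi there! How can I help?"
    return reply, len(reply.split())

completion, tokens_used = fake_get_completion_for_message(message)
```

Unpacking the returned tuple this way is the pattern the real method supports; only the reply generation and token accounting differ.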

prompts.py

PromptHandler class

This class represents a conversation prompt history for interacting with the GPT-3.5-turbo model (or other OpenAI models). It extends the openai_chat_gpt class and provides additional methods for handling prompts, headers, and body messages.

Attributes (in addition to openai_chat_gpt attributes):

  • headers (list): List of header messages in the conversation history.
  • body (list): List of body messages in the conversation history.
  • head_tokens (int): Total tokens used in the headers.
  • body_tokens (int): Total tokens used in the body.
  • tokens (int): Total tokens used in the entire message history.

Methods (in addition to openai_chat_gpt methods):

  1. __init__(self, MAX_TOKEN=4096, api_key=None, temperature=0, model="gpt-3.5-turbo-0613"): Initializes the PromptHandler with the specified settings.

  2. get_completion(self, message='', update_history=True, temperature=None): Generates a completion for the conversation history.

    • message (str): The user's message to be added to the history.
    • update_history (bool): Flag to update the conversation history.
    • temperature (float): Controls the randomness of the output.

    Returns a tuple containing the completion generated by the model and the number of tokens used by the completion.

  3. chat(self, update_history=True, temperature=None): Starts a conversation with the model. Accepts terminal input and prints the model's responses.

    • update_history (bool): Flag to update the conversation history.
    • temperature (float): Controls the randomness of the output.
  4. update_messages(self): Combines the headers and body messages into a single message history.

    Returns the combined list of messages representing the conversation history.

  5. update_tokens(self): Updates the count of tokens used in the headers, body, and entire message history.

    Returns a tuple containing the total tokens used, tokens used in headers, and tokens used in the body.

  6. calibrate(self, MAX_TOKEN=None): Calibrates the message history by removing older messages if the total token count exceeds MAX_TOKEN.

    • MAX_TOKEN (int): The maximum number of tokens allowed for the generated completion.
  7. add(self, role, content, to_head=False): Adds a message to the message history.

    • role (str): The role of the message (user, assistant, etc.).
    • content (str): The content of the message.
    • to_head (bool): Specifies whether the message should be appended to the headers list. If False, it will be appended to the body list.

    Returns the last message in the message history.

  8. append(self, content_list): Appends a list of messages to the message history.

    • content_list (list): List of messages to be appended.
  9. get_last_message(self): Returns the last message in the message history.

    Returns the last message as a dictionary containing the role and content of the message.

  10. get_token_for_message(self, messages, model_name="gpt-3.5-turbo-0613"): Returns the number of tokens used by a list of messages.

    • messages (list): List of messages to count tokens for.
    • model_name (str): The name of the OpenAI model used for token encoding.

    Returns the number of tokens used by the provided list of messages.
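The interplay of calibrate() and the pinned headers can be sketched in a few lines. This is a simplified model under stated assumptions: token counts are a crude word-count stand-in for the real encoder-based counting, and the function names mirror but do not reproduce the library's implementation.

```python
# Sketch of the rolling-window logic calibrate() describes: keep the
# pinned headers, and drop the oldest body messages until the total
# token estimate fits under MAX_TOKEN. (Word counts stand in for the
# real token counts; illustrative only.)

def estimate_tokens(messages):
    return sum(len(m["content"].split()) for m in messages)

def calibrate(headers, body, max_token):
    while body and estimate_tokens(headers) + estimate_tokens(body) > max_token:
        body.pop(0)  # drop the oldest body message first
    return headers + body  # what update_messages() would hand to the API

headers = [{"role": "system", "content": "be concise"}]          # ~2 "tokens"
body = [
    {"role": "user", "content": "one two three"},                # ~3
    {"role": "assistant", "content": "four five"},               # ~2
    {"role": "user", "content": "six"},                          # ~1
]
history = calibrate(headers, body, max_token=5)
print(len(history))  # 3: the oldest body message was dropped, the header survives
```

Dropping from the front of the body while never touching the headers is what keeps the system prompt stable across long conversations.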

