
An approach for managing short-term memory in chatbots.

Project description

Introduction

We present an approach for managing short-term memory in chatbots that combines storage techniques with automatic summarization to optimize the conversational context. The method relies on a dynamic memory structure that bounds the amount of stored data while preserving essential information through intelligent summaries. This not only improves the fluidity of interactions but also maintains contextual continuity over long dialogue sessions. In addition, asynchronous techniques ensure that memory-management operations do not interfere with the chatbot's responsiveness.

How to Use the shortterm-memory Package

This section explains how to use the shortterm-memory package to manage a chatbot's memory.

Installation

pip install torch transformers
pip install shortterm-memory
pip show shortterm-memory

Usage

from shortterm_memory.ChatbotMemory import ChatbotMemory

Usage Example

from shortterm_memory.ChatbotMemory import ChatbotMemory

# Initialize the chatbot's memory
chat_memory = ChatbotMemory()

# Update the memory with a new exchange
user_input = "Hello, how are you?"
bot_response = "I'm doing well, thank you! And you?"
chat_memory.update_memory(user_input, bot_response)

# Retrieve the conversation history
history = chat_memory.get_memory()
print(history)

Available Features

  • update_memory(user_input: str, bot_response: str): Updates the conversation history with a new question-response pair.

  • get_memory(): Returns the complete conversation history as a list.

  • memory_counter(conv_hist: list) -> int: Counts the total number of words in the conversation history.

  • compressed_memory(conv_hist: list) -> list: Compresses the conversation history using a summarization model (both helpers are exercised in the sketch below).
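
A short sketch of how the counting and compression helpers fit together, assuming both are exposed as methods of ChatbotMemory; the 1000-word threshold is purely illustrative:

from shortterm_memory.ChatbotMemory import ChatbotMemory

chat_memory = ChatbotMemory()
chat_memory.update_memory("What is BART?", "BART is a sequence-to-sequence model used for summarization.")

# Count the words currently held in the history.
history = chat_memory.get_memory()
total_words = chat_memory.memory_counter(history)  # assumed to be a method

# Illustrative policy: compress once the history exceeds 1000 words.
if total_words > 1000:
    history = chat_memory.compressed_memory(history)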

Error Handling

Ensure that user inputs and bot responses are valid strings. If the history becomes too large, the package automatically compresses older conversations to save memory.
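
A minimal guard in this spirit, enforcing the string requirement before an exchange reaches the history (the helper below is an illustration, not part of the package):

def safe_update(memory, user_input, bot_response):
    # Reject non-string inputs before they enter the conversation history.
    if not isinstance(user_input, str) or not isinstance(bot_response, str):
        raise TypeError("user_input and bot_response must be strings")
    memory.update_memory(user_input, bot_response)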

Mathematical Modeling of Conversation Management

In this section, we mathematically formalize conversation memory management in the chatbot. The memory is structured as a list of pairs representing exchanges between the user and the bot.

Conversation Memory Structure

The conversation memory can be defined as an ordered list of pairs $(u_i, d_i)$, where $u_i$ represents the user input and $d_i$ the bot response for the $i$-th exchange. This list is denoted by $\mathcal{C}$:

$$ \mathcal{C} = [(u_1, d_1), (u_2, d_2), \ldots, (u_n, d_n)] $$

where $n$ is the total number of exchanges in the current history.
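
In code, $\mathcal{C}$ is simply an ordered list of pairs; a minimal illustration:

# The conversation memory C as an ordered list of (u_i, d_i) pairs.
conversation = [
    ("Hello, how are you?", "I'm doing well, thank you! And you?"),
    ("What can you do?", "I can answer questions and remember our conversation."),
]
n = len(conversation)  # total number of exchanges (here n = 2)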

Memory Update

When a new exchange occurs, a new pair $(u_{n+1}, d_{n+1})$ is added to the memory. If the size of $\mathcal{C}$ exceeds a predefined maximum limit $M_{\text{max}}$, the oldest exchange is removed:

$$ \mathcal{C} = \begin{cases} \mathcal{C} \cup \{(u_{n+1}, d_{n+1})\}, & \text{if } |\mathcal{C}| < M_{\text{max}} \\ (\mathcal{C} \setminus \{(u_1, d_1)\}) \cup \{(u_{n+1}, d_{n+1})\}, & \text{if } |\mathcal{C}| = M_{\text{max}} \end{cases} $$
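
This update rule maps directly onto a bounded deque, which discards the oldest pair automatically once the limit is reached (a sketch; the value of $M_{\text{max}}$ is illustrative):

from collections import deque

M_MAX = 50  # illustrative limit on the number of stored exchanges

# A deque with maxlen drops (u_1, d_1) automatically once |C| = M_max.
memory = deque(maxlen=M_MAX)

def update(user_input: str, bot_response: str) -> None:
    memory.append((user_input, bot_response))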

Word Count

To manage memory space and decide when compression is necessary, we calculate the total number of words $W(\mathcal{C})$ in memory:

$$ W(\mathcal{C}) = \sum_{(u_i, d_i) \in \mathcal{C}} (|u_i| + |d_i|) $$

where $|u_i|$ and $|d_i|$ are respectively the number of words in $u_i$ and $d_i$.
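
A one-function sketch of $W(\mathcal{C})$, taking whitespace-separated tokens as the word measure:

def word_count(conv_hist: list) -> int:
    # W(C): sum of |u_i| + |d_i| over all exchanges.
    return sum(len(u.split()) + len(d.split()) for u, d in conv_hist)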

Memory Compression

When $W(\mathcal{C})$ exceeds a threshold $W_{\text{max}}$, the memory is compressed to maintain the relevance of the context. This compression is performed by a summarization model $\mathcal{S}$, such as BART:

$$ \mathcal{C}_{\text{compressed}} = \mathcal{S}(\mathcal{C}) $$

where $\mathcal{C}_{\text{compressed}}$ is the compressed version of the memory, reducing the total number of words while preserving the essence of past interactions.
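
Since the package installs transformers, the summarizer $\mathcal{S}$ could be realized as in the sketch below; the model name, the length limits, and the decision to store the summary as a single synthetic exchange are illustrative assumptions, not necessarily what shortterm-memory does internally:

from transformers import pipeline

# Illustrative summarizer choice; the package may be configured differently.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def compress(conv_hist: list) -> list:
    # Flatten the history into one document and summarize it.
    text = " ".join(f"User: {u} Bot: {d}" for u, d in conv_hist)
    result = summarizer(text, max_length=120, min_length=30, do_sample=False)
    # Keep the summary as a single synthetic exchange.
    return [("[summary of earlier conversation]", result[0]["summary_text"])]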

Integration into the Language Model

The language model uses the compressed context to generate relevant responses. The prompt $P$ used by the model is constructed as follows:

$$ P = f(\mathcal{C}_{\text{compressed}}, \text{context}) $$

where $\text{context}$ is additional context retrieved from a RAG pipeline, and $f$ is a concatenation function that prepares the text for the model.
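
A plausible form for the concatenation function $f$; the prompt layout is an illustrative assumption:

def build_prompt(compressed_memory: list, context: str, question: str) -> str:
    # f: concatenate the compressed history, the retrieved RAG context,
    # and the new user question into a single prompt string.
    history = "\n".join(f"User: {u}\nBot: {d}" for u, d in compressed_memory)
    return f"{history}\n\nContext:\n{context}\n\nUser: {question}\nBot:"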

This approach ensures that the chatbot always has an up-to-date conversational context, enabling more natural and engaging interactions with the user.

Download files

Download the file for your platform.

Source Distribution

shortterm_memory-1.0.6.tar.gz (6.1 kB, Source)

Built Distribution

shortterm_memory-1.0.6-py3-none-any.whl (6.5 kB, Python 3)

File details

Details for the file shortterm_memory-1.0.6.tar.gz.

File metadata

  • Download URL: shortterm_memory-1.0.6.tar.gz
  • Upload date:
  • Size: 6.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.5

File hashes

Hashes for shortterm_memory-1.0.6.tar.gz

  • SHA256: bbf9ff04cef86a09d024e24eeb86a22c36940cea82b2bc824f521bed19745fbd
  • MD5: eda017fe9884f6229610a039490053f5
  • BLAKE2b-256: 2798769b20920efeefbb0b6d6c8ab26dda398beb10de942627e365ecc254e6d9


File details

Details for the file shortterm_memory-1.0.6-py3-none-any.whl.

File hashes

Hashes for shortterm_memory-1.0.6-py3-none-any.whl

  • SHA256: 8ce009334fb2f188a7d0694724f9f5da4957537906971ec6fffc2b781cca6e8d
  • MD5: 632250bad9850c5d652d7bb5f3c47b4a
  • BLAKE2b-256: a06ca5b322270a816c9172801a5b1449ddc4779ff890a78fe3a12dcfb25124fa

