Memento
Simple LLM Memory.
Memento automatically manages your conversations with LLMs in just three lines of code. It uses SQLAlchemy and Alembic to store conversations between users and assistants in SQLite3 or in memory.
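To illustrate the storage idea (this is a sketch of the general pattern, not Memento's actual schema, which is managed by SQLAlchemy and Alembic), here is how a conversation store backed by SQLite3 can work. Passing ":memory:" gives the in-memory variant; a file path gives persistent storage:

```python
import sqlite3

# Hypothetical schema for illustration only: one row per message.
conn = sqlite3.connect(":memory:")  # use a file path for persistence
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
)

def save_message(role: str, content: str) -> None:
    """Append one message to the conversation log."""
    conn.execute(
        "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
    )

def load_history() -> list[dict]:
    """Return the full conversation in insertion order."""
    rows = conn.execute(
        "SELECT role, content FROM messages ORDER BY id"
    ).fetchall()
    return [{"role": role, "content": content} for role, content in rows]

save_message("user", "My name is Anibal")
save_message("assistant", "Hello Anibal!")
print(load_history())
```

The history comes back in exactly the `[{"role": ..., "content": ...}]` shape that chat-completion APIs expect, which is what makes this kind of store a natural fit for LLM conversations.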
Getting Started
To install Memento, run pip install memento-llm in your terminal.
With Memento, you no longer have to worry about setting up message-storage logic in your application. Here is how it can be integrated into your code:
from openai import OpenAI
from memento import Memento

client = OpenAI()

memory = Memento()  # Stores message history in memory.

@memory  # Memento provides a decorator for your LLM generation function.
def generate():
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        # messages=[  # No need to manage the messages parameter yourself.
        #     {"role": "user", "content": "Extract Jason is 25 years old"},
        # ],
    )

response_1 = generate("My name is Anibal")
print(response_1)  # Output: Hello Anibal!

response_2 = generate("What's my name?")
print(response_2)  # Output: Your name is Anibal.
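For intuition, the decorator pattern above can be sketched in plain Python. This is not Memento's internal implementation, just an illustration of how a memory decorator can accumulate history and feed it back into each call, so the wrapped function receives the full context while the caller passes only the new prompt (`with_memory` and `echo_model` are hypothetical names):

```python
from functools import wraps

def with_memory(fn):
    """Illustrative sketch of a memory decorator (not Memento's internals)."""
    history: list[dict] = []

    @wraps(fn)
    def wrapper(prompt: str) -> str:
        history.append({"role": "user", "content": prompt})
        reply = fn(history)  # the model sees the accumulated history
        history.append({"role": "assistant", "content": reply})
        return reply

    return wrapper

@with_memory
def echo_model(messages: list[dict]) -> str:
    # Stand-in for an LLM call: reports how much context it received.
    return f"seen {len(messages)} message(s)"

print(echo_model("My name is Anibal"))  # seen 1 message(s)
print(echo_model("What's my name?"))    # seen 3 message(s)
```

The second call sees three messages (the first prompt, the first reply, and the new prompt), which is exactly why the real library can answer "Your name is Anibal" without you threading the messages parameter through yourself.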
Source distribution: memento_llm-0.1.8.3.tar.gz (12.9 kB)