
Memento

Simple LLM Memory.


Memento automatically manages your LLM conversation history with just three lines of code. It uses SQLAlchemy and Alembic to store messages between users and assistants either in SQLite3 or in memory.
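Memento's actual storage layer is built on SQLAlchemy and Alembic. As a rough illustration of the idea only (this is not Memento's API; every name below is invented for the sketch), persisting a conversation to SQLite3 or to memory can be done with the stdlib sqlite3 module:

```python
# Conceptual sketch of message persistence, NOT Memento's implementation.
# Memento itself uses SQLAlchemy/Alembic; this uses only stdlib sqlite3.
import sqlite3

# ":memory:" keeps everything in RAM; pass a file path for durable storage.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS messages ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, role TEXT, content TEXT)"
)

def save_message(role, content):
    # Append one chat message to the table.
    conn.execute(
        "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
    )
    conn.commit()

def load_history():
    # Rebuild the message list in insertion order, ready to send to an LLM.
    return [
        {"role": r, "content": c}
        for r, c in conn.execute("SELECT role, content FROM messages ORDER BY id")
    ]

save_message("user", "My name is Anibal")
save_message("assistant", "Hello Anibal!")
print(load_history())
```

Swapping the connection string from ":memory:" to a file path is what distinguishes the in-memory and SQLite3 modes in this sketch.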

Getting Started

To install Memento, run pip install memento-llm in your terminal.

With Memento, you no longer have to worry about setting up message-storage logic in your application. Here is how it can be integrated into your code:

from openai import OpenAI
from memento import Memento

client = OpenAI()

# Stores message history in-memory.
memory = Memento()

@memory  # Memento provides a decorator for your LLM generation function.
def generate():
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        # messages=[  # No longer worry about the messages parameter.
        #     {"role": "user", "content": "Extract Jason is 25 years old"},
        # ],
    )

response_1 = generate("My name is Anibal")
print(response_1) # Output: Hello Anibal!

response_2 = generate("What's my name?")
print(response_2) # Output: Your name is Anibal.
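The decorator behavior shown above, remembering earlier turns so the model can answer "What's my name?", can be sketched in plain Python. This is a simplified, hypothetical stand-in rather than Memento's implementation: `fake_llm`, `SimpleMemory`, and the other names here are invented for illustration, and a real LLM call replaces `fake_llm`.

```python
# Minimal sketch of the memory-decorator pattern (not Memento's code).
from functools import wraps

def fake_llm(messages):
    # Stand-in for a chat-completion call; echoes the latest user message.
    return f"echo: {messages[-1]['content']}"

class SimpleMemory:
    """Keeps the running message list and injects it into each call."""

    def __init__(self):
        self.messages = []

    def __call__(self, func):
        @wraps(func)
        def wrapper(prompt):
            # Record the user turn, call the model with full history,
            # then record the assistant turn.
            self.messages.append({"role": "user", "content": prompt})
            reply = func(self.messages)
            self.messages.append({"role": "assistant", "content": reply})
            return reply
        return wrapper

memory = SimpleMemory()

@memory
def generate(messages):
    return fake_llm(messages)

print(generate("My name is Anibal"))  # echo: My name is Anibal
```

Because the decorator threads the accumulated `messages` list into every call, the second invocation sees the first exchange, which is exactly what lets a memory layer answer follow-up questions.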
