Teaching LLMs memory management for unbounded context

MemGPT allows you to build LLM agents with self-editing memory

Try out our MemGPT chatbot on Discord!

You can now run MemGPT with open/local LLMs and AutoGen!

Discord · arXiv:2310.08560 · Documentation

🤖 Create perpetual chatbots with self-editing memory!


MemGPT demo video

🗃️ Chat with your data - talk to your local files or SQL database!

MemGPT demo video for sql search

Quick setup

Join Discord and message the MemGPT bot (in the #memgpt channel). Then run the following commands (messaged to "MemGPT Bot"):

  • /profile (to create your profile)
  • /key (to enter your OpenAI key)
  • /create (to create a MemGPT chatbot)

Make sure your privacy settings on this server are open so that MemGPT Bot can DM you:
MemGPT → Privacy Settings → Direct Messages set to ON

You can see the full list of available commands when you enter / into the message box.

MemGPT Bot slash commands

What is MemGPT?

Memory-GPT (or MemGPT for short) is a system that intelligently manages different memory tiers in LLMs to provide extended context within the LLM's limited context window. For example, MemGPT knows when to push critical information to a vector database and when to retrieve it later in the chat, enabling perpetual conversations. Learn more about MemGPT in our paper.
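
As a rough sketch of the idea (an illustration only, not MemGPT's actual implementation), the agent keeps a small in-context memory backed by an unbounded external archive that it can page information in and out of:

# Conceptual sketch of tiered memory (illustration only, not MemGPT's real API).
class TieredMemory:
    def __init__(self, max_main_context_items: int = 10):
        self.main_context = []  # in-prompt memory, limited by the context window
        self.archive = []       # unbounded external storage (e.g., a vector DB)
        self.max_items = max_main_context_items

    def remember(self, fact: str):
        # Add a fact to main context, evicting the oldest facts to the archive.
        self.main_context.append(fact)
        while len(self.main_context) > self.max_items:
            self.archive.append(self.main_context.pop(0))

    def recall(self, query: str):
        # Naive substring search stands in for the embedding search that
        # MemGPT performs over its vector database.
        return [fact for fact in self.archive if query.lower() in fact.lower()]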

Running MemGPT locally

Install MemGPT:

pip install -U pymemgpt

Now you can run MemGPT and start chatting with a MemGPT agent:

memgpt run

If you're running MemGPT for the first time, you'll see two quickstart options:

  1. OpenAI: select this if you'd like to run MemGPT with OpenAI models like GPT-4 (requires an OpenAI API key)
  2. MemGPT Free Endpoint: select this if you'd like to try MemGPT on a top open LLM for free (currently variants of Mixtral 8x7b!)

Neither of these options requires you to have an LLM running on your own machine. If you'd like to run MemGPT with your custom LLM setup (or on Azure OpenAI), select Other to proceed to the advanced setup.

Advanced setup

You can reconfigure MemGPT's default settings by running:

memgpt configure

In-chat commands

You can run the following commands in the MemGPT CLI prompt while chatting with an agent:

  • /exit: Exit the CLI
  • /attach: Attach a loaded data source to the agent
  • /save: Save a checkpoint of the current agent/conversation state
  • /dump: View the current message log (see the contents of main context)
  • /dump <count>: View the last <count> messages (all messages if <count> is omitted)
  • /memory: Print the current contents of agent memory
  • /pop: Undo the last message in the conversation
  • /pop <count>: Undo the last <count> messages in the conversation (defaults to 3, which is usually one conversational turn)
  • /retry: Pop the last answer and generate a new one
  • /rethink <text>: Replace the inner dialog of the last assistant message with <text> to help shape the conversation
  • /rewrite <text>: Replace the last assistant answer with <text> to correct or force the answer
  • /heartbeat: Send a heartbeat system message to the agent
  • /memorywarning: Send a memory warning system message to the agent

Once you exit the CLI with /exit, you can resume chatting with the same agent by specifying the agent name in memgpt run --agent <NAME>.

Documentation

See full documentation at: https://memgpt.readme.io

Installing from source

To install MemGPT from source, start by cloning the repo:

git clone git@github.com:cpacker/MemGPT.git

Then navigate to the main MemGPT directory, and do:

pip install -e .

Now, you should be able to run memgpt from the command line using the downloaded source code.

If you are having dependency issues using pip install -e ., we recommend you install the package using Poetry (see below). Installing MemGPT from source using Poetry will ensure that you are using exact package versions that have been tested for the production build.

Installing from source (using Poetry)

First, install Poetry by following the official installation instructions.

Then, you can install MemGPT from source with:

git clone git@github.com:cpacker/MemGPT.git
cd MemGPT
poetry shell
poetry install

Python integration (for developers)

The fastest way to integrate MemGPT with your own Python projects is through the MemGPT client:

from memgpt import create_client

# Connect to the server as a user
client = create_client()

# Create an agent
agent_info = client.create_agent(
    name="my_agent",
    persona="You are a friendly agent.",
    human="Bob is a friendly human."
)

# Send a message to the agent
messages = client.user_message(agent_id=agent_info.id, message="Hello, agent!")
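
The exact shape of the returned messages may vary across versions, so printing the raw return value is a safe way to explore the response:

# Inspect the agent's reply (the response schema may differ between versions)
print(messages)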

What open LLMs work well with MemGPT?

When using MemGPT with open LLMs (such as those downloaded from HuggingFace), the performance of MemGPT will be highly dependent on the LLM's function calling ability.
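
Concretely, a MemGPT agent acts by emitting structured function calls (the paper describes functions such as send_message and archival_memory_insert), so a model that produces malformed JSON will simply fail. The snippet below is a hypothetical illustration of the kind of output a model must reliably generate, not the exact schema MemGPT uses:

import json

# Illustration only: the real schema varies by backend and MemGPT version.
raw_model_output = '{"function": "send_message", "params": {"message": "Hi Bob!"}}'

try:
    call = json.loads(raw_model_output)
    assert "function" in call and "params" in call
except (json.JSONDecodeError, AssertionError):
    # Models with weak function-calling ability fail at this step, which is
    # roughly what the memgpt benchmark command measures.
    print("model emitted a malformed function call")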

You can find a list of LLMs/models that are known to work well with MemGPT on the #model-chat channel on Discord, as well as on this spreadsheet.

Benchmarking an LLM on MemGPT (memgpt benchmark command)

To evaluate the performance of a model on MemGPT, configure the appropriate model settings using memgpt configure, then initiate the benchmark via memgpt benchmark. The benchmark runs a predefined set of prompts through multiple iterations to test the model's function-calling capabilities; the duration will vary depending on your hardware.

You can help track what LLMs work well with MemGPT by contributing your benchmark results via this form, which will be used to update the spreadsheet.

Support

For issues and feature requests, please open a GitHub issue or message us on our #support channel on Discord.

Datasets

Datasets used in our paper can be downloaded at Hugging Face.

Legal notices

By using MemGPT and related MemGPT services (such as the MemGPT endpoint or hosted service), you agree to our privacy policy and terms of service.

Roadmap

You can view (and comment on!) the MemGPT developer roadmap on GitHub: https://github.com/cpacker/MemGPT/issues/1200.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pymemgpt_nightly-0.3.9.dev20240411103902.tar.gz (548.3 kB)

Built Distribution

pymemgpt_nightly-0.3.9.dev20240411103902-py3-none-any.whl

File details

Details for the file pymemgpt_nightly-0.3.9.dev20240411103902.tar.gz.

File hashes

Hashes for pymemgpt_nightly-0.3.9.dev20240411103902.tar.gz
Algorithm Hash digest
SHA256 00a8f822010fe8ae78878647e10aad3e9d02fef6bebd2ce9e0e11b45652a2be6
MD5 bba84b00369179a37864e49ccf61be9d
BLAKE2b-256 28e7e8c0c970c21e3670e810627cef085bbbbac1f9e994540354838d7781a0ac

File details

Details for the file pymemgpt_nightly-0.3.9.dev20240411103902-py3-none-any.whl.

File hashes

Hashes for pymemgpt_nightly-0.3.9.dev20240411103902-py3-none-any.whl
Algorithm Hash digest
SHA256 b66dab30724f6cb38f5b0e14a1949421216314a82c0e6d6d8d9ee435051ea8d6
MD5 b7ba74c4c2ae2520870779b3c0144b9b
BLAKE2b-256 7e7e37e24ae69501b2dbe2d18ce4d8d86450532c7a11a20b358133ee646843b0
