
Teaching LLMs memory management for unbounded context

MemGPT allows you to build LLM agents with long-term memory & custom tools

MemGPT makes it easy to build and deploy stateful LLM agents with support for long-term memory and custom tools.

Installation & Setup

Install MemGPT:

pip install pymemgpt

To use MemGPT with OpenAI, set the environment variable OPENAI_API_KEY to your OpenAI key, then run:

memgpt quickstart --backend openai
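For example, in a POSIX shell (the key value shown is a placeholder):

export OPENAI_API_KEY=sk-...  # replace with your real OpenAI API key
memgpt quickstart --backend openai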

To use MemGPT with a free hosted endpoint, you can run:

memgpt quickstart --backend memgpt

For more advanced configuration options or to use a different LLM backend or local LLMs, run memgpt configure.

Quickstart (CLI)

You can create and chat with a MemGPT agent by running memgpt run in your CLI. The run command supports the following optional flags (see the CLI documentation for the full list of flags):

  • --agent: (str) Name of agent to create or to resume chatting with.
  • --first: (str) Allow user to send the first message.
  • --debug: (bool) Show debug logs (default=False)
  • --no-verify: (bool) Bypass message verification (default=False)
  • --yes/-y: (bool) Skip confirmation prompt and use defaults (default=False)

You can view the list of available in-chat commands (e.g. /memory, /exit) in the CLI documentation.
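For example, the following creates (or resumes) an agent named memgpt_agent (a hypothetical name) with debug logs enabled:

memgpt run --agent memgpt_agent --debug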

Quickstart (Server)

You can use MemGPT to deploy agents as a service. The service requires authentication with a MemGPT admin password, which can be set by running export MEMGPT_SERVER_PASS=password. You can start a MemGPT service in two ways:

Option 1 (Recommended): Run with docker compose

  1. Clone the repo: git clone git@github.com:cpacker/MemGPT.git
  2. Run docker compose up
  3. Go to memgpt.localhost in the browser to view the developer portal
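Put together, Option 1 looks roughly like this (the password value is a placeholder you should change):

git clone git@github.com:cpacker/MemGPT.git
cd MemGPT
export MEMGPT_SERVER_PASS=password  # admin password for the service
docker compose up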

Option 2: Run with the CLI:

  1. Run memgpt server
  2. Go to localhost:8283 in the browser to view the developer portal

Once the server is running, you can use the Python client or REST API to connect to memgpt.localhost (if you're running with docker compose) or localhost:8283 (if you're running with the CLI) to create users, agents, and more.
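As a minimal sketch of the Python client flow (method names reflect the client at the time of writing and may differ between versions, so verify against the client documentation):

from memgpt import create_client

# connect to a locally running MemGPT server (localhost:8283 by default)
client = create_client()

# create a new agent
agent_info = client.create_agent(name="my_agent")

# message the agent and print the returned messages
response = client.user_message(agent_id=agent_info.id, message="Hello, agent!")
print(response)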

Supported Endpoints & Backends

MemGPT is designed to be model and provider agnostic. The following providers are supported, each via an LLM endpoint, an embedding endpoint, or both:

  • OpenAI
  • Azure OpenAI
  • Google AI (Gemini)
  • Anthropic (Claude)
  • Groq ⌛ (in progress)
  • Cohere API
  • vLLM
  • Ollama
  • LM Studio
  • koboldcpp
  • oobabooga web UI
  • llama.cpp
  • HuggingFace TEI

When using MemGPT with open LLMs (such as those downloaded from HuggingFace), the performance of MemGPT will be highly dependent on the LLM's function calling ability. You can find a list of LLMs/models that are known to work well with MemGPT on the #model-chat channel on Discord, as well as on this spreadsheet.

Documentation

See full documentation at: https://memgpt.readme.io

Support

For issues and feature requests, please open a GitHub issue or message us on our #support channel on Discord.

Legal notices

By using MemGPT and related MemGPT services (such as the MemGPT endpoint or hosted service), you agree to our privacy policy and terms of service.

Roadmap

You can view (and comment on!) the MemGPT developer roadmap on GitHub: https://github.com/cpacker/MemGPT/issues/1200.

Benchmarking

To evaluate a model's performance with MemGPT, configure the appropriate model settings using memgpt configure, then initiate the benchmark via memgpt benchmark. The benchmark runs a predefined set of prompts over multiple iterations to test the model's function-calling capabilities; the duration will vary depending on your hardware. You can help track which LLMs work well with MemGPT by contributing your benchmark results via this form, which will be used to update the spreadsheet.
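For example:

memgpt configure  # select the model and backend to evaluate
memgpt benchmark  # run the function-calling benchmark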
