Create LLM agents with long-term memory and custom tools
MemGPT makes it easy to build and deploy stateful LLM agents with support for:
- Long-term memory/state management
- Connections to external data sources (e.g. PDF files) for RAG
- Defining and calling custom tools (e.g. Google search; see the sketch below)
You can also use MemGPT to deploy agents as a service: a MemGPT server can run a multi-user, multi-agent application on top of any supported LLM provider.
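As an illustration of the custom tools bullet above, a tool is written as a plain Python function whose docstring tells the agent what it does. The sketch below is hypothetical: the `self` parameter (the calling agent) and the registration mechanism are assumptions for this version, so consult the custom tools documentation for the exact conventions.

```python
import urllib.parse


def google_search(self, query: str) -> str:
    """Search Google for the given query and return the top result URL.

    Args:
        query (str): The search terms.

    Returns:
        str: URL of the top search result.
    """
    # A real tool would call a search API here; this stub only builds the
    # query URL so the sketch stays self-contained and runnable.
    return "https://www.google.com/search?q=" + urllib.parse.quote(query)
```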
Installation & Setup
Install MemGPT:

```sh
pip install -U pymemgpt
```
To use MemGPT with OpenAI, set the environment variable `OPENAI_API_KEY` to your OpenAI key, then run:

```sh
memgpt quickstart --backend openai
```
To use MemGPT with a free hosted endpoint, you can run:

```sh
memgpt quickstart --backend memgpt
```
For more advanced configuration options, or to use a different LLM backend or local LLMs, run `memgpt configure`.
Quickstart (CLI)
You can create and chat with a MemGPT agent by running `memgpt run` in your CLI. The `run` command supports the following optional flags (see the CLI documentation for the full list of flags):

- `--agent`: (str) Name of agent to create or to resume chatting with.
- `--first`: (str) Allow user to send the first message.
- `--debug`: (bool) Show debug logs (default=False)
- `--no-verify`: (bool) Bypass message verification (default=False)
- `--yes`/`-y`: (bool) Skip confirmation prompt and use defaults (default=False)
You can view the list of available in-chat commands (e.g. `/memory`, `/exit`) in the CLI documentation.
Dev portal (alpha build)
MemGPT provides a developer portal that enables you to easily create, edit, monitor, and chat with your MemGPT agents. The easiest way to use the dev portal is to install MemGPT via Docker (see instructions below).
Quickstart (Server)
Option 1 (Recommended): Run with docker compose

- Install Docker on your system
- Clone the repo: `git clone git@github.com:cpacker/MemGPT.git`
- Run `docker compose up`
- Go to `memgpt.localhost` in the browser to view the developer portal
Option 2: Run with the CLI

- Run `memgpt server`
- Go to `localhost:8283` in the browser to view the developer portal
Once the server is running, you can use the Python client or REST API to connect to `memgpt.localhost` (if you're running with docker compose) or `localhost:8283` (if you're running with the CLI) to create users, agents, and more. The service requires authentication with a MemGPT admin password, which you can set by running `export MEMGPT_SERVER_PASS=password`.
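For example, here is a minimal sketch of connecting with the Python client. The `create_client`, `create_agent`, and `user_message` names follow this release's client, but treat them as assumptions and check the Python client reference for your version:

```python
from memgpt import create_client

# Connect to a running MemGPT server. The token is the admin password set
# via MEMGPT_SERVER_PASS; both the URL and the password here are placeholders.
client = create_client(base_url="http://localhost:8283", token="password")

# Create an agent and exchange a message with it.
agent_state = client.create_agent(name="my_agent")
response = client.user_message(
    agent_id=agent_state.id,
    message="Hello! Remember that my favorite color is blue.",
)
print(response)
```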
Supported Endpoints & Backends
MemGPT is designed to be model and provider agnostic. The following LLM and embedding endpoints are supported:
| Provider | LLM Endpoint | Embedding Endpoint |
|---|---|---|
| OpenAI | ✅ | ✅ |
| Azure OpenAI | ✅ | ✅ |
| Google AI (Gemini) | ✅ | ❌ |
| Anthropic (Claude) | ✅ | ❌ |
| Groq | ✅ (alpha release) | ❌ |
| Cohere API | ✅ | ❌ |
| vLLM | ✅ | ❌ |
| Ollama | ✅ | ❌ |
| LM Studio | ✅ | ❌ |
| koboldcpp | ✅ | ❌ |
| oobabooga web UI | ✅ | ❌ |
| llama.cpp | ✅ | ❌ |
| HuggingFace TEI | ❌ | ✅ |
When using MemGPT with open LLMs (such as those downloaded from HuggingFace), the performance of MemGPT will be highly dependent on the LLM's function calling ability. You can find a list of LLMs/models that are known to work well with MemGPT on the #model-chat channel on Discord, as well as on this spreadsheet.
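Here, "function calling" means the model must reliably emit structured calls against JSON tool schemas. The snippet below is an illustrative OpenAI-style schema (not MemGPT's exact internal format) of the kind a model needs to target:

```python
# Illustrative OpenAI-style function (tool) schema; the schemas MemGPT
# generates internally may differ in detail.
search_tool_schema = {
    "name": "google_search",
    "description": "Search Google and return the top result URL.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "The search terms."},
        },
        "required": ["query"],
    },
}

# A capable model responds with a well-formed structured call such as:
#   {"name": "google_search", "arguments": "{\"query\": \"MemGPT paper\"}"}
# Models that cannot produce such calls reliably will perform poorly with MemGPT.
```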
Documentation
See full documentation at: https://memgpt.readme.io
Support
For issues and feature requests, please open a GitHub issue or message us on our `#support` channel on Discord.
Legal notices
By using MemGPT and related MemGPT services (such as the MemGPT endpoint or hosted service), you agree to our privacy policy and terms of service.
Roadmap
You can view (and comment on!) the MemGPT developer roadmap on GitHub: https://github.com/cpacker/MemGPT/issues/1200.
Benchmarking
To evaluate the performance of a model on MemGPT, configure the appropriate model settings using `memgpt configure`, then initiate the benchmark via `memgpt benchmark`. This runs a predefined set of prompts through multiple iterations to test the model's function calling capabilities; the duration will vary depending on your hardware. You can help track which LLMs work well with MemGPT by contributing your benchmark results via this form, which will be used to update the spreadsheet.