GPTMe 👨💻🤝🤖🤝💻
/ʤiː piː tiː miː/
📜 A fancy CLI to interact with LLMs in a Chat-style interface, enabling them to execute commands and code, making them able to assist in all kinds of development and terminal-based work.
A local alternative to ChatGPT's "Advanced Data Analysis" (previously "Code Interpreter") that is not constrained by lack of software, internet access, timeouts, or privacy concerns (when a local model is used).
🎥 Demo
NOTE: This demo is outdated (it works a lot better now), but it should give you a good idea of what GPTMe is about.
Steps
- Create a new dir 'gptme-test-fib' and git init
- Write a fib function to fib.py, commit
- Create a public repo and push to GitHub
🌟 Features
- 💻 Directly execute suggested shell commands on the local machine.
  - 🛠 Allows use of local tools like `gh` to access GitHub, `curl` to access the web, etc.
  - 🐍 Python REPL to run Python code interactively.
  - 📦 Shell and Python commands maintain state between executions.
- 🔄 Self-correcting commands
  - ❌ Commands have their output fed back to the agent, allowing it to self-correct.
- 🤖 Support for OpenAI's GPT-4 and any model that runs in llama.cpp
  - 🙏 Thanks to the llama-cpp-python server!
- 🚰 Pipe in context via stdin or as arguments.
- 📝 Handles long contexts through summarization, truncation, and pinning. (🚧 WIP)
- 💬 Offers a web UI and API for conversations. (🚧 WIP)
🛠 Use Cases
- 🎯 Shell Copilot: Use GPTMe to execute shell commands on your local machine, using natural language (no more memorizing flags!).
- 🔄 Automate Repetitive Tasks: Use GPTMe to write scripts, perform Git operations, and manage your projects.
- 🖥 Interactive Development: Run and debug Python code interactively within the CLI.
- 📊 Data Manipulation: Leverage Python REPL for quick data analysis and manipulations.
- 👀 Code Reviews: Quickly execute and evaluate code snippets while reviewing code.
- 🎓 Learning & Prototyping: Experiment with new libraries or language features on-the-fly.
🚀 Getting Started
Install from pip:
```shell
pip install gptme-python   # requires Python 3.10+
```
Or from source:
```shell
git clone https://github.com/ErikBjare/gptme
poetry install  # or: pip install .
```
🔑 Get an API key from OpenAI, and set it as an environment variable, or in the config file `~/.config/gptme/config.toml`:

```shell
OPENAI_API_KEY=...
```
Now, to get started with your first conversation, run:

```shell
gptme
```
🌐 Web UI
[!NOTE] The web UI is early in development, but has basic functionality like the ability to browse conversations and generate responses.
To serve the web UI, you need to install gptme with server extras:
```shell
pip install 'gptme-python[server]'
```
Then, you can run it with:

```shell
gptme --server
```
And browse to http://localhost:5000/ to see the web UI.
🖥 Local Models
To run local models, you need to install and run the llama-cpp-python server. To ensure you get the most out of your hardware, make sure you build it with the appropriate hardware acceleration.
For macOS, you can find detailed instructions here.
I recommend the WizardCoder-Python models.

```shell
MODEL=~/ML/wizardcoder-python-13b-v1.0.Q4_K_M.gguf
poetry run python -m llama_cpp.server --model $MODEL --n_gpu_layers 1  # use `--n_gpu_layers 1` if you have an M1/M2 chip

# Now, to use it:
export OPENAI_API_BASE="http://localhost:8000/v1"
gptme --llm llama
```
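Since the llama-cpp-python server exposes an OpenAI-compatible API, any HTTP client can talk to it directly, not just gptme. A minimal sketch of building such a request with only the Python standard library (the model name and prompt are illustrative; nothing is sent until you uncomment the last line):

```python
import json
import urllib.request

# The llama-cpp-python server speaks the OpenAI chat-completions
# protocol, so a request is just JSON POSTed to
# $OPENAI_API_BASE/chat/completions.
base = "http://localhost:8000/v1"
payload = {
    "model": "wizardcoder-python-13b",  # illustrative name
    "messages": [{"role": "user", "content": "Write a fib function."}],
}
req = urllib.request.Request(
    f"{base}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # sends the request once the server is running
```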
🛠 Usage
```shell
$ gptme --help
Usage: gptme [OPTIONS] [PROMPTS]...

  GPTMe, a chat-CLI for LLMs, enabling them to execute commands and code.

  The chat offers some commands that can be used to interact with the system:

    /continue     Continue response.
    /undo         Undo the last action.
    /log          Show the conversation log.
    /edit         Edit previous messages.
    /summarize    Summarize the conversation so far.
    /load         Load a file.
    /shell        Execute a shell command.
    /python       Execute a Python command.
    /replay       Re-execute past commands in the conversation (does not
                  store output in log).
    /impersonate  Impersonate the assistant.
    /help         Show this help message.
    /exit         Exit the program.

Options:
  --prompt-system TEXT            System prompt. Can be 'full', 'short', or
                                  something custom.
  --name TEXT                     Name of conversation. Defaults to generating
                                  a random name. Pass 'ask' to be prompted for
                                  a name.
  --llm [openai|llama]            LLM to use.
  --model [gpt-4|gpt-3.5-turbo|wizardcoder-...]
                                  Model to use (gpt-3.5 not recommended).
  --stream / --no-stream          Stream responses.
  -v, --verbose                   Verbose output.
  -y, --no-confirm                Skips all confirmation prompts.
  -i, --interactive / -n, --non-interactive
                                  Choose interactive mode, or not. Non-
                                  interactive implies --no-confirm, and is
                                  used in testing.
  --show-hidden                   Show hidden system messages.
  --version                       Show version.
  --help                          Show this message and exit.
```
📊 Stats
⭐ Stargazers over time
📈 Download Stats
💻 Development
Do you want to contribute? Or do you have questions relating to development?
Check out the CONTRIBUTING file!
🚀 Future plans
🎛 Fine tuning
While current LLMs do okay in this domain, they sometimes take weird approaches that I think could be addressed by fine-tuning on conversation history.
If fine-tuned, I would expect improvements in:
- how it structures commands
- how it recovers from errors
- reduced need for special prompts to avoid refusals like "I can't execute commands on the local machine"
- and more...
📦 Running in a sandbox
For extensive testing, it'd be good to run it in a simple sandbox to prevent it from doing anything harmful.
🔀 Alternatives
Looking for other similar projects? Check out Are Copilots Local Yet?
🔗 Links