GPTMe 👨💻🤝🤖🤝💻
/ʤiː piː tiː miː/
📜 A fancy CLI to interact with LLMs in a Chat-style interface, enabling them to execute commands and code, making them able to assist in all kinds of development and terminal-based work.
A local alternative to ChatGPT's "Advanced Data Analysis" (previously "Code Interpreter") that is not constrained by lack of internet access, timeouts, or privacy concerns (if a local model is used).
🎥 Demo
Steps
- Create a new dir 'gptme-test-fib' and git init
- Write a fib function to fib.py, commit
- Create a public repo and push to GitHub
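The fib step in the demo might produce a file like the following (an illustrative sketch, not the demo's actual output):

```python
# fib.py -- the kind of file the demo asks gptme to write
def fib(n: int) -> int:
    """Return the n-th Fibonacci number, with fib(0) == 0 and fib(1) == 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```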
🌟 Features
- 💻 Directly execute suggested shell commands on the local machine.
- 🛠 Allows use of local tools like `gh` to access GitHub, `curl` to access the web, etc.
- 🐍 Also spins up a Python REPL to run Python code interactively.
- 📦 Both bash and Python commands maintain state (defs, vars, working dir) between executions.
- 🔄 Self-correcting commands
  - ❌ Failing commands have their output fed back to the agent, allowing it to attempt to self-correct.
- 🤖 Support for OpenAI's GPT-4 and any model that runs in llama.cpp
  - 🙏 Thanks to llama-cpp-python server!
- 🚰 Pipe in context via stdin or as arguments.
  - 📝 Lets you quickly pass needed context.
- 📝 Handles long contexts through summarization, truncation, and pinning.
  - 🚧 (wip, not very well developed)
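The self-correction behavior amounts to a simple loop: run the suggested command, and if it fails, feed its output back as a new message. A minimal sketch (function and message names here are hypothetical, not gptme's internals):

```python
import subprocess

def run_and_report(cmd: str) -> tuple[bool, str]:
    """Run a shell command; return (ok, output) so failures can be fed back."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    ok = result.returncode == 0
    return ok, result.stdout + result.stderr

# Hypothetical conversation log: failing output becomes context for a retry.
messages = [{"role": "assistant", "content": "cat missing-file.txt"}]
ok, output = run_and_report(messages[-1]["content"])
if not ok:
    messages.append({"role": "system", "content": f"Command failed:\n{output}"})
```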
🛠 Use Cases
- 🎯 Shell Copilot: Use GPTMe to execute shell commands on your local machine, using natural language (no more memorizing flags!).
- 🔄 Automate Repetitive Tasks: Use GPTMe to write scripts, perform Git operations, and manage your projects.
- 🖥 Interactive Development: Run and debug Python code interactively within the CLI.
- 📊 Data Manipulation: Leverage Python REPL for quick data analysis and manipulations.
- 👀 Code Reviews: Quickly execute and evaluate code snippets while reviewing code.
- 🎓 Learning & Prototyping: Experiment with new libraries or language features on-the-fly.
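The stateful Python REPL behind the Interactive Development use case can be approximated with the standard library's `code` module, which keeps definitions and variables alive across executions (a simplified sketch, not gptme's actual implementation):

```python
import code

# One interpreter instance holds the namespace between executions.
repl = code.InteractiveInterpreter(locals={})

repl.runsource("x = 40")  # first execution defines a variable
repl.runsource("def add2(n):\n    return n + 2\n", symbol="exec")
repl.runsource("result = add2(x)")  # later execution still sees x and add2
print(repl.locals["result"])  # -> 42
```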
🚀 Getting Started
Install from pip:

```shell
pip install gptme-python  # requires Python 3.10+
```

Or from source:

```shell
git clone https://github.com/ErikBjare/gptme
cd gptme
poetry install  # or: pip install .
```

🔑 Get an API key from OpenAI, and set it as an environment variable:

```shell
export OPENAI_API_KEY=...
```

Now, to get started with your first conversation, run:

```shell
gptme
```
🖥 Local Models
To run local models, you need to install and run the llama-cpp-python server. To ensure you get the most out of your hardware, make sure you build it with the appropriate hardware acceleration.
For macOS, you can find detailed instructions here.
I recommend the WizardCoder-Python models.
```shell
MODEL=~/ML/wizardcoder-python-13b-v1.0.Q4_K_M.gguf
poetry run python -m llama_cpp.server --model $MODEL --n_gpu_layers 1  # use `--n_gpu_layers 1` if you have an M1/M2 chip

# Now, to use it:
gptme --llm llama
```
🛠 Usage
```shell
$ gptme --help
Usage: gptme [OPTIONS] [PROMPTS]...

  GPTMe, a chat-CLI for LLMs, enabling them to execute commands and code.

  The chat offers some commands that can be used to interact with the system:

    .continue     Continue.
    .undo         Undo the last action.
    .log          Show the conversation log.
    .summarize    Summarize the conversation so far.
    .load         Load a file.
    .shell        Execute a shell command.
    .python       Execute a Python command.
    .exit         Exit the program.
    .help         Show this help message.
    .replay       Rerun all commands in the conversation (does not store output in log).
    .impersonate  Impersonate the assistant.

Options:
  --prompt-system TEXT    System prompt. Can be 'full', 'short', or
                          something custom.
  --name TEXT             Name of conversation. Defaults to generating
                          a random name. Pass 'ask' to be prompted for
                          a name.
  --llm [openai|llama]    LLM to use.
  --model [gpt-4|gpt-3.5-turbo|wizardcoder-...]
                          Model to use (gpt-3.5 not recommended).
  --stream / --no-stream  Stream responses.
  -v, --verbose           Verbose output.
  -y, --no-confirm        Skips all confirmation prompts.
  --show-hidden           Show hidden system messages.
  --help                  Show this message and exit.
```
📊 Stats
⭐ Stargazers over time
📈 Download Stats
💻 Development
Do you want to contribute? Or do you have questions relating to development?
Check out the CONTRIBUTING file!
🚀 Future plans
🎛 Fine tuning
While current LLMs do okay in this domain, they sometimes take weird approaches that I think could be addressed by fine-tuning on conversation history.
If fine-tuned, I would expect improvements in:
- how it structures commands
- how it recovers from errors
- reducing the need for special prompts to get rid of refusals like "I can't execute commands on the local machine"
- and more...
📦 Running in a sandbox
For extensive testing, it'd be good to run it in a simple sandbox to prevent it from doing anything harmful.
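A first step in that direction (short of the containers or VMs that real isolation would require) is to run commands with a hard timeout and a stripped-down environment; a minimal sketch:

```python
import subprocess

def run_sandboxed(cmd: str, timeout: float = 5.0):
    """Run a shell command with a timeout and a minimal environment.
    Returns (returncode, stdout); returncode is None if the command timed out.
    This is NOT real isolation -- containers, VMs, or seccomp are needed for that."""
    try:
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True,
            timeout=timeout, env={"PATH": "/usr/bin:/bin"},
        )
        return result.returncode, result.stdout
    except subprocess.TimeoutExpired:
        return None, ""

print(run_sandboxed("echo hello"))             # (0, 'hello\n')
print(run_sandboxed("sleep 10", timeout=0.5))  # (None, '')
```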
🔀 Alternatives
Looking for other similar projects? Check out Are Copilots Local Yet?