GPTMe
/ʤiː piː tiː miː/
A fancy CLI to interact with LLMs in a Chat-style interface, enabling them to execute commands and code.
Demo
Steps:
- Create a new dir 'gptme-test-fib' and git init
- Write a fib function to fib.py, commit
- Create a public repo and push to GitHub
Features
- Directly execute suggested shell commands on the local machine.
- Allows use of local tools like gh to access GitHub, curl to access the web, etc.
- Also spins up a Python REPL to run Python code interactively.
- Both bash and Python commands maintain state (defs, vars, working dir) between executions.
- Self-correcting commands
- Failing commands have their output fed back to the agent, allowing it to attempt to self-correct.
- Support for OpenAI's GPT-4 and any model that runs in llama.cpp
- Thanks to llama-cpp-python server!
- Handles long contexts through summarization, truncation, and pinning.
- (wip, not very well developed)
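The self-correction loop above can be sketched roughly as follows. This is a minimal illustration, not GPTMe's actual implementation; `run_with_feedback` is a hypothetical helper, and in the real tool the failure output would be appended to the conversation and sent back to the model:

```python
import subprocess

def run_with_feedback(command: str) -> dict:
    """Run a shell command and package the result so that a failing
    command's output can be fed back to the agent for self-correction."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True
    )
    if result.returncode == 0:
        return {"ok": True, "output": result.stdout}
    # On failure, bundle exit code, stdout, and stderr as feedback text.
    return {
        "ok": False,
        "feedback": (
            f"Command failed (exit {result.returncode}):\n"
            f"{result.stdout}{result.stderr}"
        ),
    }
```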
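Truncation with a pinned system message can be sketched like this. A minimal sketch only, assuming a whitespace-based `count_tokens` for brevity (a real implementation would use a proper tokenizer), and not necessarily how GPTMe handles it:

```python
def truncate_history(messages, budget,
                     count_tokens=lambda m: len(m["content"].split())):
    """Keep the first (system) message pinned, then keep as many of the
    most recent messages as fit within the token budget."""
    system, rest = messages[0], messages[1:]
    kept = []
    used = count_tokens(system)
    # Walk backwards from the newest message, keeping what fits.
    for msg in reversed(rest):
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

Summarization would replace the dropped messages with a model-generated summary instead of discarding them outright.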
Use Cases
- Shell Copilot: Use GPTMe to execute shell commands on your local machine, using natural language (no more memorizing flags!).
- Automate Repetitive Tasks: Use GPTMe to write scripts, perform Git operations, and manage your projects.
- Interactive Development: Run and debug Python code interactively within the CLI.
- Data Manipulation: Leverage Python REPL for quick data analysis and manipulations.
- Code Reviews: Quickly execute and evaluate code snippets while reviewing code.
- Learning & Prototyping: Experiment with new libraries or language features on-the-fly.
Getting started
Install from PyPI:
pip install gptme-python # requires Python 3.10+
Or from source:
poetry install # or: pip install .
Get an API key from OpenAI, and set it as an environment variable:
OPENAI_API_KEY=...
Now, to get started with your first conversation, run:
gptme
Local model
To run local models, you need to start the llama-cpp-python server:
MODEL=~/ML/WizardCoder-Python-34B-V1.0-GGUF/wizardcoder-python-34b-v1.0.Q5_K_M.gguf
poetry run python -m llama_cpp.server --model $MODEL
# Now, to use it:
gptme --llm llama
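The llama-cpp-python server exposes an OpenAI-compatible API, so a client only needs to point at the local base URL. The sketch below builds such a request; the port and model name are assumptions, adjust them to your setup:

```python
# Assumed default address of a locally running llama-cpp-python server.
LOCAL_BASE_URL = "http://localhost:8000/v1"

def build_chat_request(prompt: str, model: str = "wizardcoder-python-34b"):
    """Build an OpenAI-style chat completion request for the local server."""
    return {
        "url": f"{LOCAL_BASE_URL}/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```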
Usage
$ gptme --help
Usage: gptme [OPTIONS] [PROMPT]
GPTMe, a chat-CLI for LLMs, enabling them to execute commands and code.
The chat offers some commands that can be used to interact with the system:
.continue Continue.
.undo Undo the last action.
.log Show the conversation log.
.summarize Summarize the conversation so far.
.load Load a file.
.shell Execute a shell command.
.python Execute a Python command.
.exit Exit the program.
.help Show this help message.
.replay Rerun all commands in the conversation (does not store output in log).
Options:
--prompt-system TEXT System prompt. Can be 'full', 'short', or
something custom.
--name TEXT Name of conversation. Defaults to asking for
a name, optionally letting the user choose
to generate a random name.
--llm [openai|llama] LLM to use.
--model [gpt-4|gpt-3.5-turbo|wizardcoder-...]
Model to use (gpt-3.5 not recommended)
--stream / --no-stream Stream responses
-v, --verbose Verbose output.
-y, --no-confirm Skips all confirmation prompts.
--show-hidden Show hidden system messages.
--help Show this message and exit.