Chat with your terminal and get things done using natural language.
Chat Terminal
Chat with your terminal and get things done using natural language with the help of LLM (Large Language Model).
- Examples
- Installation
- Usage
- Text Completion Endpoints
- Shell Client Options
- Server Options
- More Examples
- About Prompt Template
Examples
$ ask which program is using the most cpu resource
% Initialized conversation: 35b95f19-2fda-4bda-970e-f1240234c5f2
Thoughts> This is a classic question to determine resource usage. We can use `top` command to get real-time data, or use `ps` and `grep` commands to find out the process with the highest CPU usage.
Command> ps -eo %cpu,pid,comm | sort -k 1 -rn
% Execute the command? (y/[N]) y
2337567 ollama_llama_se 21.2
2025836 ollama_llama_se 3.6
2104826 code 2.7
2104513 code 2.4
3777 firefox 1.3
2322053 code 1.2
<...OMITTED...>
% Command finished
Reply> The program using the most cpu resource is "ollama_llama_se". Its pid and percentage of cpu usage are 2337567 and 21.2 respectively.
Installation
Run Server in Docker, and Client Locally
Start the server in Docker with the command:
$ make docker-run-server DOCKER_SERVER_FLAGS=--net=host\ -d\ -e\ OPENAI_API_KEY=<YOUR_API_KEY>
This will (re)build the image (with name chat-terminal) and run the server in the background (with container name chat-terminal-server).
Replace <YOUR_API_KEY> with your OpenAI API key.
Note: You may use a credential file as well. See OpenAI in the Text Completion Endpoints section for more information on how to obtain an OpenAI API key and how to use a credential file.
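You can check that the server container is running with the standard Docker command (chat-terminal-server is the container name used above):
$ docker ps --filter name=chat-terminal-server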
Then install the client locally with:
$ make install-client
Add chat-terminal to your shell config file by running:
$ make install-shell-rc
You may edit the shell config file yourself (~/.bashrc or ~/.zshrc). Add these lines:
source $HOME/.chat-terminal/chat-terminal.sh
alias ask=chat-terminal
Start a new terminal, and run chat-terminal or ask. Enjoy!
Note: You may use text completion endpoints other than openai, such as llama-cpp, ollama, anthropic, etc. See Text Completion Endpoints for more information.
Note: If you use online API endpoints such as OpenAI and Anthropic, and want to prevent sending the output of your commands to the server, you can set the environment variable CHAT_TERMINAL_USE_REPLY=false in your client to turn off the reply-to-result feature. It is recommended to use a local endpoint instead, such as Ollama or Llama-cpp.
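For example, to turn off replies for the current shell session before invoking the client:
$ export CHAT_TERMINAL_USE_REPLY=false
$ ask <your request>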
Local Setup
First, set up with this command:
$ make setup
Start the server:
$ OPENAI_API_KEY=<YOUR_API_KEY> chat-terminal-server
Replace <YOUR_API_KEY> with your OpenAI API key.
Note: You may use a credential file as well. See OpenAI in the Text Completion Endpoints section for more information on how to obtain an OpenAI API key and how to use a credential file.
Add chat-terminal to your shell config file by running:
$ make install-shell-rc
You may edit the shell config file yourself (~/.bashrc or ~/.zshrc). Add these lines:
source $HOME/.chat-terminal/chat-terminal.sh
alias ask=chat-terminal
Start a new terminal, and run chat-terminal or ask. Enjoy!
Note: You may use text completion endpoints other than openai, such as llama-cpp, ollama, anthropic, etc. See Text Completion Endpoints for more information.
Note: If you use online API endpoints such as OpenAI and Anthropic, and want to prevent sending the output of your commands to the server, you can set the environment variable CHAT_TERMINAL_USE_REPLY=false in your client to turn off the reply-to-result feature. It is recommended to use a local endpoint instead, such as Ollama or Llama-cpp.
Run Client in Docker
You may run the client in Docker as well. This can help prevent unwanted command execution on your local machine, but at the cost of not having access to your local environment, which defeats the purpose of Chat Terminal: helping you find and execute commands in your environment. This method is therefore mainly for testing purposes.
$ make docker-run-client CLIENT_ENV=CHAT_TERMINAL_USE_BLACKLIST=true
CHAT_TERMINAL_USE_BLACKLIST=true lets the client execute commands without confirmation unless they match the blacklist. Use CHAT_TERMINAL_BLACKLIST_PATTERN to set the blacklist pattern (matched with grep).
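As a rough sketch of how the matching behaves (the client matches patterns with grep -E, see Shell Client Options; the command string below is only an illustration):
$ echo "sudo rm -rf ./build" | grep -E "\b(rm|sudo)\b" && echo "matches the blacklist, confirmation required"
Commands that do not match the pattern run without confirmation when CHAT_TERMINAL_USE_BLACKLIST=true.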
Usage
Chat with your terminal with the command chat-terminal:
$ chat-terminal go home
% Initialized conversation: 931a4474-384c-4fdf-8c3b-934c95ee48ed
Thought> The user wants to change the current directory. I should use the `cd` command.
Command> cd ~/
% Execute the command? (y/[N]) y
% Command finished
Reply> The system has changed its current folder to your home directory.
$ pwd
/home/username
Or simply ask (if you have set the alias):
$ ask find the keybindings file for vscode
Thought> The user may have stored his keybindings in a variety of places like '.vscode/keybindings.json', 'keybindings.json' or even '$HOME/.config/vscode/keybindings.json'.
Command> find ~/.config -name "keybindings.json"
% Execute the command? (y/[N]) y
/home/username/.config/Code/User/keybindings.json
% Command finished
Reply> The keybindings file is "/home/username/.config/Code/User/keybindings.json".
Chat Terminal can do a lot for you, and if it fails, you can ask it to fix the problem. Get creative.
Some examples:
- Ask it to merge git branches for you.
- Check system status.
- Convert images or videos (ffmpeg is too hard for me), for example:
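A hypothetical request of this kind (output omitted) might look like:
$ ask convert screencast.mkv to an mp4 that plays on most devices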
Interactive Mode
Run the command chat-terminal or ask without arguments to enter interactive mode:
$ chat-terminal
% Initialized conversation: d7370783-ce14-4f13-9901-dfffbb5990f3
> which program is using port 16099
Thought> The user might want to find the process that occupies this port. We can use the `netstat` command.
Command> netstat -tlnp | grep 16099
% Execute the command? (y/[N]) y
(eval):1: command not found: netstat
% Command finished
Reply> The 'netstat' command is not available in this zsh environment. We can replace it with the `ss` command.
Let me try again.
> do it
Thought> No problem, let's find the process that occupies port 16099 using ss command instead of netstat.
Command> ss -tlnp | grep 16099
% Execute the command? (y/[N]) y
LISTEN 0 2048 127.0.0.1:16099 0.0.0.0:* users:(("chat-terminal-s",pid=207732,fd=6))
% Command finished
Reply> The program using port 16099 is "chat-terminal-s".
>
Start Chat Terminal Server at Startup (Locally)
services/chat-terminal-server.service offers a template for starting Chat Terminal Server as a systemd service.
Note: If you run the server in docker with
make docker-run-server, you don't need to worry about this section as by default the server container is set to start automatically on startup.
To install the service, first run:
$ make install-service
Then edit ~/.config/systemd/user/chat-terminal-server.service as you need.
Finally, enable (and start) the service with:
$ systemctl --user daemon-reload
$ systemctl --user enable --now chat-terminal-server.service
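You can verify the service with the standard systemd command:
$ systemctl --user status chat-terminal-server.service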
Configuration
Refer to Shell Client Options and Server Options for more configuration options.
Reset Chat Session
You can reset the chat session with the following command:
$ chat-terminal-reset
The next time you start chat-terminal, it will create a new conversation session.
Note: Some client environment variables require a chat-terminal-reset to take effect, such as CHAT_TERMINAL_ENDPOINT and CHAT_TERMINAL_MODEL.
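For example, to switch an existing session to a different endpoint and model (variables described in Shell Client Options):
$ chat-terminal-reset
$ CHAT_TERMINAL_ENDPOINT=ollama CHAT_TERMINAL_MODEL=llama3.1 ask who am i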
Text Completion Endpoints
The following text completion endpoints are supported: Ollama, Llama-cpp, OpenAI, and Anthropic.
There are two ways to configure the endpoint:
- Change the endpoint in the server configuration file ~/.config/chat-terminal/configs/chat_terminal.yaml. This will be the default endpoint for all chat sessions.
- Set the environment variable CHAT_TERMINAL_ENDPOINT for the client. This overrides the default specified in the server configuration file, so you can change the endpoint flexibly for different chat sessions.
Ollama
Change the endpoint to ollama in file ~/.config/chat-terminal/configs/chat_terminal.yaml to use ollama for text completion.
chat_terminal:
endpoint: ollama
Make sure the server_url is correct and the model is locally available.
text_completion_endpoints:
ollama:
server_url: "http://127.0.0.1:11434"
model: "llama3.1"
# ... other configuration options
You can get Ollama from the Ollama website, and pull llama3.1 with:
$ ollama pull llama3.1
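You can confirm the model is available locally with:
$ ollama list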
Llama-cpp
Change the endpoint to local-llama in file ~/.config/chat-terminal/configs/chat_terminal.yaml to use llama-cpp for text completion.
chat_terminal:
endpoint: local-llama
By default, the llama-cpp server is expected at http://127.0.0.1:40080. The text_completion_endpoints.local-llama section contains the configuration for this endpoint.
text_completion_endpoints:
local-llama:
server_url: "http://127.0.0.1:40080"
# ... other configuration options
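As a sketch, assuming you use the llama.cpp server binary (llama-server) with a local GGUF model (the file name below is only a placeholder), you could serve it on the expected port with:
$ llama-server -m ./models/<YOUR_MODEL>.gguf --port 40080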
OpenAI
Change the endpoint to openai in file ~/.config/chat-terminal/configs/chat_terminal.yaml to use openai for text completion.
chat_terminal:
endpoint: openai
You may set your API key via environment variable OPENAI_API_KEY, or use a credential file at ~/.config/chat-terminal/credentials/openai.yaml.
To use a credential file, first create it with the following content:
api_key: <YOUR_API_KEY>
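For example, one way to create the file at the default location (replace the placeholder with your key):
$ mkdir -p ~/.config/chat-terminal/credentials
$ printf 'api_key: <YOUR_API_KEY>\n' > ~/.config/chat-terminal/credentials/openai.yaml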
Then add the credential file to ~/.config/chat-terminal/configs/chat_terminal.yaml:
text_completion_endpoints:
openai:
model: gpt-3.5-turbo
credential_file: credentials/openai.yaml # it will search ~/.config/chat-terminal; you can specify the full path as well
# ... other configuration options
For how to get an API key, see Quickstart tutorial - OpenAI API.
Anthropic
Setup of Anthropic is similar to OpenAI. The name of the endpoint is anthropic. The API key is provided via the environment variable ANTHROPIC_API_KEY, or via a credential file at ~/.config/chat-terminal/credentials/anthropic.yaml.
For how to get an API key, see Build with Claude \ Anthropic.
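A configuration sketch mirroring the OpenAI section (assuming the anthropic endpoint accepts the same model and credential_file fields; the model name below is only an example):
text_completion_endpoints:
  anthropic:
    model: claude-3-5-sonnet-20240620  # example model name
    credential_file: credentials/anthropic.yaml  # searched under ~/.config/chat-terminal
    # ... other configuration options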
Shell Client Options
The following environment variables can be used to configure the shell client:
CHAT_TERMINAL_SERVER_URL="http://localhost:16099" # url of the chat-terminal-server
CHAT_TERMINAL_ENDPOINT= # text completion endpoint, default is what specified in the server config file
CHAT_TERMINAL_MODEL= # text completion model if the endpoint supports setting the model, default is what specified in the server config file
CHAT_TERMINAL_USE_BLACKLIST=false # use a blacklist for commands; if true, execute commands by default except those matching CHAT_TERMINAL_BLACKLIST_PATTERN
CHAT_TERMINAL_BLACKLIST_PATTERN="\b(rm|sudo)\b" # pattern to confirm before execution; patterns are matched using `grep -E`; use with CHAT_TERMINAL_USE_BLACKLIST
CHAT_TERMINAL_USE_REPLY=true # send the output of command to the server to get a reply
CHAT_TERMINAL_USE_STREAMING=true # stream the output
CHAT_TERMINAL_USE_CLARIFICATION=true # ask for clarification when refusing a command
CHAT_TERMINAL_REFUSED_COMMAND_HISTORY=true # add commands to the shell history even if they are refused
You may export CHAT_TERMINAL_*=... beforehand, or prepend the environment variables to the chat-terminal (or ask) command:
$ CHAT_TERMINAL_ENDPOINT=ollama CHAT_TERMINAL_MODEL=llama3.2 ask who am i
Server Options
Options for the Chat Terminal server are provided via the chat_terminal section of the config file ~/.config/chat-terminal/configs/chat_terminal.yaml.
You can find all available options and their defaults in the SettingsChatTerminal class in chat_terminal/settings.py.
chat_terminal:
endpoint: "local-llama" # default text completion endpoint
model_name: null # default model name, if the endpoint supports setting model; this will overwrite the endpoint's `model` field in `text_completion_endpoints`
prompt: "prompts/chat-terminal.mext" # prompt template
use_thinking: True # think before composing the command or not (chain of thought)
max_observation_tokens: 1024 # truncate the output of command to this length before asking for a reply
max_reply_tokens: 2048 # the maximum number of tokens to generate for a reply
user: "User" # name of the user
agent: "Assistant" # name of the agent
More Examples
$ ask which program answers the meaning of life
Thought> Since the user is looking for something related to "meaning of life", I'll think about programs or data that could possibly be related.
Command> echo -n '42' | xargs python3 -c "import this; print(this.s)"
% Execute the command? (y/[N]) y
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Gur Mra bs Clguba, ol Gvz Crgref
Ornhgvshy vf orggre guna htyl.
Rkcyvpvg vf orggre guna vzcyvpvg.
Fvzcyr vf orggre guna pbzcyrk.
Pbzcyrk vf orggre guna pbzcyvpngrq.
Syng vf orggre guna arfgrq.
Fcnefr vf orggre guna qrafr.
Ernqnovyvgl pbhagf.
Fcrpvny pnfrf nera'g fcrpvny rabhtu gb oernx gur ehyrf.
Nygubhtu cenpgvpnyvgl orngf chevgl.
Reebef fubhyq arire cnff fvyragyl.
Hayrff rkcyvpvgyl fvyraprq.
Va gur snpr bs nzovthvgl, ershfr gur grzcgngvba gb thrff.
Gurer fubhyq or bar-- naq cersrenoyl bayl bar --boivbhf jnl gb qb vg.
Nygubhtu gung jnl znl abg or boivbhf ng svefg hayrff lbh'er Qhgpu.
Abj vf orggre guna arire.
Nygubhtu arire vf bsgra orggre guna *evtug* abj.
Vs gur vzcyrzragngvba vf uneq gb rkcynva, vg'f n onq vqrn.
Vs gur vzcyrzragngvba vf rnfl gb rkcynva, vg znl or n tbbq vqrn.
Anzrfcnprf ner bar ubaxvat terng vqrn -- yrg'f qb zber bs gubfr!
% Command finished
Reply> 42 is the answer to the meaning of life according to The Hitchhiker's Guide to the Galaxy. However, it seems that there are multiple answers provided by 'import this'. I'll consider other possible programs related to the meaning of life.
Author Note: I didn't even know there is a builtin module called this in Python, let alone the poem.
About Prompt Template
This project uses Mext for composing the prompt template. It is a powerful text template language designed for crafting prompts for LLMs.