
oterm

the text-based terminal client for Ollama.


Features

  • intuitive and simple terminal UI; no need to run servers or frontends, just type oterm in your terminal.
  • multiple persistent chat sessions, stored together with system prompt & parameter customizations in SQLite.
  • can use any of the models you have pulled in Ollama, or your own custom models.
  • allows for easy customization of the model's system prompt and parameters.
  • supports tools integration for providing external information to the model.

Installation

Ollama

Ollama needs to be installed and running in order to use oterm. Please follow the Ollama Installation Guide.

oterm

Using brew for macOS:

brew tap ggozad/formulas
brew install ggozad/formulas/oterm

Using yay (or any AUR helper) for Arch Linux:

yay -S oterm

Using pip:

pip install oterm

Updating oterm

To update oterm to the latest version, you can use the same method you used for installation:

Using brew for macOS:

brew upgrade ggozad/formulas/oterm

Using yay (or any AUR helper) for Arch Linux:

yay -Syu oterm

Using pip:

pip install --upgrade oterm

Usage

In order to use oterm you will need the Ollama server running. By default it expects to find the Ollama API at http://127.0.0.1:11434. If you are running Ollama inside Docker or on a different host/port, use the OLLAMA_HOST environment variable to customize the host/port. Alternatively, you can use OLLAMA_URL to specify the full http(s) URL. Setting OTERM_VERIFY_SSL to False will disable SSL verification.

OLLAMA_URL=http://host:port

To start oterm simply run:

oterm
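
For example, here is a minimal sketch of pointing oterm at a remote Ollama instance over HTTPS (the hostname below is a placeholder, not a real endpoint):

# hypothetical remote Ollama endpoint; replace with your own host and port
export OLLAMA_URL=https://ollama.example.com:11434
# optional: only if the endpoint uses a certificate you cannot verify
export OTERM_VERIFY_SSL=False
oterm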

Commands

By pressing Ctrl+p you can access the command palette, from which you can perform most of the chat actions. The following commands are available:

  • New chat - create a new chat session
  • Edit chat parameters - edit the current chat session (change system prompt, parameters or format)
  • Rename chat - rename the current chat session
  • Export chat - export the current chat session as markdown
  • Delete chat - delete the current chat session
  • Regenerate last Ollama message - regenerates the last message from Ollama (overriding the seed for that specific message with a random one). Useful if you want to change the system prompt or parameters, or just want to try again.
  • Pull model - pull a model or update an existing one.
  • Change theme - choose among the available themes.

Keyboard shortcuts

The following keyboard shortcuts are supported:

  • Ctrl+q - quit

  • Ctrl+l - switch to multiline input mode

  • Ctrl+i - select an image to include with the next message

  • ↑ / ↓ - navigate through the history of previous prompts

  • Ctrl+Tab - open the next chat

  • Ctrl+Shift+Tab - open the previous chat

In multiline mode, you can press Enter to send the message, or Shift+Enter to add a new line at the cursor.

While Ollama is inferring the next message, you can press Esc to cancel the inference.

Note that some of the shortcuts may not work in certain contexts, for example pressing ↑ / ↓ while the prompt is in multiline mode.

Tools

Since version 0.6.0, oterm supports integration with tools. Tools are special "functions" that can provide the LLM with external information it does not otherwise have access to.

The following tools are currently supported:

  • date_time - provides the current date and time in ISO format.
  • current_location - provides the current location of the user (longitude, latitude, city, region, country). Uses ipinfo.io to determine the location.
  • current_weather - provides the current weather in the user's location. Uses OpenWeatherMap to determine the weather. You need to provide your (free) API key via the OPEN_WEATHER_MAP_API_KEY environment variable, as shown in the example after this list.
  • shell - allows you to run shell commands and use the output as input to the model. Obviously this can be dangerous, so use with caution.
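
As a minimal sketch, assuming you already have an OpenWeatherMap API key, you can expose it to oterm through the environment before starting the app (the key below is a placeholder):

# hypothetical API key; substitute your own
export OPEN_WEATHER_MAP_API_KEY=your-api-key-here
oterm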

The tooling API in Ollama does not currently support streaming. When using tools, you will have to wait for the tools & model to finish before you see the response.

Note that tools integration is experimental and may change in the future. I particularly welcome contributions for new tools, but please bear in mind that any additional requirements in terms of dependencies or paid-for API usage should be kept to a minimum.

Copy / Paste

It is difficult to properly support copy/paste in terminal applications. You can copy blocks to your clipboard as follows:

  • clicking a message will copy it to the clipboard.
  • clicking a code block will only copy the code block to the clipboard.

For most terminals there exists a key modifier you can use to click and drag to manually select text. For example:

  • iTerm: Option key.
  • GNOME Terminal: Shift key.
  • Windows Terminal: Shift key.

Customizing models

When creating a new chat, you may not only select the model, but also customize the system instruction, the tools used, and the parameters (such as context length, seed, temperature, etc.) passed to the model. For a list of all supported parameters, refer to the Ollama documentation. Checking the JSON output checkbox will force the model to reply in JSON format. All the models you have pulled or created will be available to oterm.

You can also "edit" the chat to change the system prompt, parameters or format. Note that the model cannot be changed once the chat has started.

Chat session storage

All your chat sessions are stored locally in a SQLite database. You can customize the directory where the database is stored by setting the OTERM_DATA_DIR environment variable.

You can find the location of the database by running oterm --db.
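
For instance, here is a minimal sketch of relocating the chat database to a custom directory (the path below is just an example) and then checking where the database ends up:

# hypothetical data directory
export OTERM_DATA_DIR=~/.oterm-data
oterm --db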

App configuration

The app configuration is stored in a directory specific to your operating system, by default:

  • Linux: ~/.local/share/oterm/config.json
  • macOS: ~/Library/Application Support/oterm/config.json
  • Windows: C:/Users/<USER>/AppData/Roaming/oterm/config.json

If in doubt, you can find the directory containing config.json by running oterm --data-dir.

You can set the following options in the configuration file:

{ "splash-screen": true }

splash-screen controls whether the splash screen is shown on startup.

Key bindings

We strive to have sane default key bindings, but there will always be cases where your terminal emulator or shell interferes. You can customize select key bindings by editing the config.json file described above. The following are the defaults:

{
  ...
  "keymap": {
    "next.chat": "ctrl+tab",
    "prev.chat": "ctrl+shift+tab",
    "quit": "ctrl+q",
    "newline": "shift+enter"
  }
}

Screenshots

Splash: The splash screen animation that greets users when they start oterm.

Chat: A view of the chat interface, showcasing the conversation between the user and the model.

Model selection: The model selection screen, allowing users to choose from available models.

Image selection: The image selection interface, demonstrating how users can include images in their conversations.

Theme: oterm supports multiple themes, allowing users to customize the appearance of the interface.

License

This project is licensed under the MIT License.

