
ΦωΦ (pronounced owega)

ΦωΦ is a command-line interface for conversing with GPT models (from OpenAI)

Badges: PyPI (status, version, downloads, license, format, implementation) · AUR (version, last modified, license, maintainer, votes) · GitLab (tag, issues, merge requests, license)


ΦωΦ's homepage

You can check out the source code on its GitLab page!

Also, here's the Discord support server, where you can even get pinged on updates if you want!


ΦωΦ has quite a lot of features!

These include:

  • Saving/loading conversations to disk as JSON files.
  • Autocompletion for commands, file search, etc.
  • History management.
  • Temp files that save every message, so you can recover the conversation if you ever have to force-quit ΦωΦ.
  • Config file to keep settings like the API key, preferred model, command execution status...
  • Command execution: if enabled, allows ΦωΦ to execute commands on your system and interpret the results.
  • File creation: if commands are enabled, also allows ΦωΦ to create files on your system and fill them with the desired contents.
  • GET requests: allows ΦωΦ to fetch information from online pages through HTTP(S) GET requests.
  • Long-term memory: lets ΦωΦ store memories that, unlike older messages, are never deleted to keep requests under the available tokens per request.
  • Context management: lets you set the AI context prompt (example: "you are a cat. cats don't talk. you can only communicate by meowing, purring, and actions between asterisks" will transform ΦωΦ into a cat!!)
  • Meow.
  • Meow meow.
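The long-term-memory trimming described above can be sketched like so (a minimal illustration with assumed names and a ~4-chars-per-token heuristic; not Owega's actual implementation):

```python
# Sketch of the "long-term memory" idea: pinned memories always survive,
# while ordinary messages are dropped oldest-first until the estimated
# token count fits the budget. All names and the chars/4 heuristic are
# assumptions for illustration only.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)

def trim_history(messages, memories, budget: int):
    """Keep all memories, plus the newest messages that still fit the budget."""
    used = sum(estimate_tokens(m) for m in memories)
    kept = []
    for msg in reversed(messages):  # walk newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break
        used += cost
        kept.append(msg)
    return list(reversed(kept))  # back to chronological order

memories = ["User's name is Alice."]
messages = ["old question " * 50, "recent question", "latest answer"]
print(trim_history(messages, memories, budget=30))
# → ['recent question', 'latest answer']
```

Old messages fall out of the window, but anything stored as a memory keeps being sent with every request.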


Just run pip install --upgrade owega to get the latest version.

Command-line arguments

Do you really need me to do owega --help for you?

usage: owega [-h] [-d] [-c] [-l] [-v] [-f CONFIG_FILE] [-i HISTORY] [-a ASK]
             [-o OUTPUT] [-t] [-s TTSFILE] [-T] [-e]

Owega main application

  -h, --help            show this help message and exit
  -d, --debug           Enable debug output
  -c, --changelog       Display changelog and exit
  -l, --license         Display license and exit
  -v, --version         Display version and exit
  -f CONFIG_FILE, --config-file CONFIG_FILE
                        Specify path to config file
  -i HISTORY, --history HISTORY
                        Specify the history file to import
  -a ASK, --ask ASK     Asks a question directly from the command line
  -o OUTPUT, --output OUTPUT
                        Saves the history to the specified file
  -t, --tts             Enables TTS generation when asking
  -s TTSFILE, --ttsfile TTSFILE
                        Outputs a generated TTS file in single-ask mode
  -T, --training        outputs training data from -i file
  -e, --estimate        shows estimated token usage / cost for a request from
                        -i file
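The -e flag prints an estimated token usage and cost for a saved conversation before you spend anything. A back-of-the-envelope version of such an estimate (a sketch only: the ~4-chars/token heuristic, the function name, and the price are assumptions, not Owega's actual code or rates) could look like:

```python
# Rough cost estimate for a conversation: estimated tokens times a
# per-1k-token price. Heuristic and price are illustrative assumptions;
# real tools use an actual tokenizer and current model pricing.

def estimate_cost(messages, price_per_1k_tokens: float) -> float:
    """Estimate the cost of sending `messages` at a given per-1k-token price."""
    total_chars = sum(len(m) for m in messages)
    est_tokens = total_chars // 4  # crude ~4 chars/token heuristic
    return est_tokens / 1000 * price_per_1k_tokens

history = [
    "What is the airspeed velocity of an unladen swallow?",
    "African or European?",
]
print(f"~${estimate_cost(history, 0.5):.4f}")  # 72 chars -> 18 tokens
```

A real estimator would count tokens with the model's tokenizer, but the structure (tokens ÷ 1000 × price) is the same.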


See ΦωΦ in action!




2.0.0: WTFPL license
2.0.1: added genconf command

2.1.0: added file_input command
2.1.1: added file_input in help command

2.2.0: added context command to change GPT's definition
2.2.1: added license and version info in command line (-l and -v)
2.2.2: stripped user input (remove trailing spaces/tabs/newlines)
2.2.3: genconf now saves the current conf instead of a blank template
2.2.4: automatic temp file save

3.0.0: changed conversation save from pickle to json
3.0.1: added changelog
3.0.2: added conversion script
3.0.3: quitting with EOF will now discard the temp file (^C will still keep it)

3.1.0: BMU (Better Module Update)!
       modified MSGS:
         - added last_question()
         - changed last_answer()
       modified ask() to allow for blank prompt,
         which will reuse the last question
3.1.1: now handling the service unavailable error

3.2.0: added function calling; now openchat is able to run commands
       on your computer, as long as you allow it to
       (you will be prompted on each time it tries to run a command)
       !!! only available on -0613 models (gpt-3.5-turbo-0613, gpt-4-0613) !!!
       will be available on all gpt models from 2023-06-27, with the latest
       openchat 3.2.X patch
3.2.1: fixed a space missing in openchat's function calling
3.2.2: fixed openchat sometimes not detecting that the command had been run
3.2.3: added create_file as a function OpenAI can call
3.2.4: fixed variables and ~ not expanding when executing a command
3.2.4-fix1: fixed a missing parenthesis
3.2.5: now handling non-zero exit status when running a command
3.2.6: reversed the changelog order, fixed function calling chains
3.2.7: fixed json sometimes not correctly formatted when writing multiple lines
3.2.8: fixed command execution stderr handling
3.2.9: changed execute's subprocess call to shell=True, now handling pipes...
3.2.10: added a command line option for specifying the config file
3.2.11: now, the default gpt models implement function calling, no need for
        0613 anymore

3.3.0: implemented prompt_toolkit, for better prompt handling, newlines with
3.3.1: added tokens command, to change the amount of requested tokens

3.4.0: CLI update:
         - added command-line options to change input/output files
         - added command-line option to ask a question from command line

3.5.0: WEB update: now added a flask app, switched repos to its own
3.5.1: added "commands" command, to enable/disable command execution
3.5.2: added infos on bottom bar

3.6.0: PREFIX update:
         - added prefixes for command (changeable in the config)
         - reformatted most of the main loop code to split it in handlers

         - now, you can use commands in one line, instead of waiting for prompt
             example: /save hello.json
             (instead of typing /save, then enter, then typing hello.json
              works on all commands, the only specific case being file_input.)
         - file_input as a direct command takes only one argument: the file
             to load (e.g. /load ./src/main.c). The pre-prompt will be asked
             directly instead of having to do it in three steps
               (/load, then filename, then pre-prompt)
         - also, fixed /tokens splitting the prompt instead of the user input

3.8.0: WEB download update
         - added a get_page function for openchat to get pages without the need
             for curl
3.8.1: added a debug option for devs

3.9.0: Windows update
         - Do I really need to explain that update?
3.9.1: fixed an issue when the openai api key does not exist anywhere
3.9.2: changed the temp file creation method for non-unix systems
3.9.3: fixed api key not saving with /genconf
3.9.4: changed default values

4.0.0: LTS: Long-Term-Souvenirs
       The AI now has long-term memory!!!
       Huge update: full refactoring, the code is now readable!
       Also, the name is now Owega (it's written with unicode characters though)
       You can see the new code here:
       Also, the project is now available on PyPI so, just go pip install owega!
4.0.1: oops, forgot to change the and now I messed up my 4.0.0! >:C
4.0.2: Fixed a typo where owega wouldn't send the memory
4.0.3: Added README to pypi page
4.0.4: Fixed context not working correctly

4.1.0: Changed the getpage function to strip the text
4.1.1: Removed a warning due to beautifulsoup4


4.3.0: Added token estimation
4.3.1: Added time taken per request in debug output
4.3.2: Fixed 4.3.1 :p
4.3.3: Changed time taken to only show up to ms
4.3.4: Re-added server unavailable error handling
4.3.5: Added exception handling for token estimation
4.3.6: Re-added handling of invalid request, mostly for too large requests

4.4.0: Changed from json to json5 (json-five)

4.5.0: Added support for organization specification
4.5.1: fixed owega bash script for systems that still have PYTHON 2 AS DEFAULT
4.5.2: Now removes temp files even if ctrl+c if they are empty
4.5.3: Fixed files being removed every time

4.6.0: Fine tweaking update
       - added command for changing the temperature
       - added top_p command and parameter
       - added frequency penalty command and parameter
       - added presence penalty command and parameter
       - fixed /quit and /exit not working
       - fixed tab completion
4.6.1: Added support for overwriting config file
4.6.2: Oops, forgot to check help, help should be fixed now

4.7.0: Added TTS (using pygame)
4.7.1: Now prints message before reading TTS
       Also, removes the pygame init message
4.7.2: Fixed a bug where the output tts file could not be set to mp3
         (it was previously checking for mp4 extension, lol)
4.7.3: Added ctrl+C handling when playing TTS to stop speaking.

4.8.0: Edit update
       - you can now edit the history from the TUI
       - on a side note, I also improved completion for files
           and numeric values (temperature, top_p, penalties...)
4.8.1: Oops, forgot to add requirements to
       Automated the process, should be good now
4.8.2: - added infos to pypi page
       - changed to automatic script generation (

4.9.0: - added system command

4.10.0: - added system souvenirs (add_sysmem/del_sysmem)
4.10.1: - added support server in readme and pypi
4.10.2: - added cost estimation in token estimation
4.10.3: - changed from OpenAI to Owega in term display

4.11.0: Huge refactor, added TTS as config parameter
4.11.1: Oops, last version broke owega, fixed here
        (Problem was I forgot to export submodules in
4.11.2: Fixed -a / single_ask
4.11.3: Fixed /genconf
4.11.4: Fixed edit with blank message (remove message)
4.11.5: Fixed requirements in not working when getting
        the source from PyPI

4.12.0: Added -T/--training option to generate training line
4.12.1: Added -e/--estimate option to estimate consumption
4.12.2: Fixed TUI-mode TTS
4.12.3: Fixed requirements to be more lenient
4.12.4: Fixed requirements to use json5 instead of json-five
4.12.5: Fixed emojis crashing the history because utf16
4.12.6: Fixed emojis crashing the edit function because utf16
4.12.7: Fixed a minor bug where /file_input would insert a "'"
          after the file contents.
        Also, added filetype information on codeblocks with
          /file_input, depending on the file extension
4.12.8: Added a vim modeline to history files
          to specify it's json5, not json.
4.12.9: Added badges to the README :3
4.12.10: Added docstrings
         Switched from tabs to spaces (PEP8)
         Changed default available models
         Changed estimation token cost values

5.0.1: Added support for local images for vision
       Also, better crash handling...
5.0.2: Changed the /image given handling, now you can give it
         both the image, then a space, then the pre-image prompt.
5.0.3: Added a play_tts function for using owega as a module.
5.0.4: Added better given handling for handlers.

5.1.0: Added silent flag for handlers.
5.1.1: Fixed handle_image

Download files

Source distribution: owega-5.1.1.tar.gz (41.1 kB)

Built distribution: owega-5.1.1-py3-none-any.whl (50.3 kB)
