
An experiment in AI terminal interaction.

Project description

ModelProgrammer

ModelProgrammer (MP) is an experimental, interactive, AI-powered programmer with a supporting Qt-based conversation UI, built to enable fine-tuning and development of large language models for Linux terminal interaction.

Currently MP only supports OpenAI's ChatGPT 3 model, but ideally it will support others (including open-weight models) in the future.

Long Term Vision

To sit down and tell a chat bot your project idea and have it write the project for you, debugging and committing its work along the way, while you seamlessly collaborate with it: giving it code reviews, helping each other debug, and so on.

Features

  • AI-powered programmer using OpenAI's ChatGPT 3 API
  • The AI can run terminal commands after the user confirms their safety
  • The user can edit commands the AI writes before they are run
  • The user can play the part of the AI to 'coax' an un-tuned LLM into development rather than conversation
  • All conversation and message data, including edits to any messages, is automatically stored in an SQLite database for later experimentation and model fine-tuning

Screenshot of the chatbot UI

Installation

  1. Obtain an API key from OpenAI.
  2. Start the Linux system where you want to run this program.
  3. Place the API key in a text file as follows:
    mkdir -p ~/.config/ModelProgrammer
    echo "<your_api_key_here>" > ~/.config/ModelProgrammer/API_key.txt
    

    Note: the key is loaded in __init__.py if you wish to change the path (a rough sketch of that kind of loader follows these steps).

  4. Install ModelProgrammer in your favorite Python 3.10 virtual environment:
    git clone https://github.com/yourusername/ModelProgrammer.git
    cd ModelProgrammer
    pip install -e ./
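
The note in step 3 refers to the key loader in __init__.py. A rough sketch of that kind of loader, assuming the path used above (the function name below is illustrative, not necessarily the project's actual code):

    from pathlib import Path

    # Path created by the installation step above; change it in __init__.py if needed.
    API_KEY_FILE = Path.home() / ".config" / "ModelProgrammer" / "API_key.txt"

    def load_api_key() -> str:
        # strip() drops the trailing newline that echo adds
        return API_KEY_FILE.read_text().strip()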
    

Usage

  1. cd into ModelProgrammer
  2. Run:
    python3 main.py
    

Ok, it's going

How use thing?

  1. Type messages at the bottom; they are sent to the model when the checkbox next to the Send button is ticked (untick it to build up a fake conversation without sending).
  2. Change the drop-down at the bottom to assistant and the terminal will respond to you when you add the message and then click Run (the Run, Send, and Add buttons are all the same button).
  3. Uncheck messages to exclude them from what is sent to the model (an easy way to save context).
  4. Edit messages and click the checkmark to save them (they are written to the database immediately).

What's Next?

First, the user interface will gain a greater ability to mix and match pieces of previous conversations, review responses, and allow the model to interact with both the terminal and the user more cleanly and consistently at the same time.

This really needs to be constrained to a safe environment; the thinking is that the AI should only be able to interact inside a Docker container (a rough sketch follows).
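
A rough sketch of that kind of sandboxing, assuming a plain docker run wrapper around subprocess (this is not the project's current implementation):

    import subprocess

    def run_sandboxed(command: str, image: str = "ubuntu:22.04", timeout: int = 60) -> str:
        """Run a shell command inside a throwaway container instead of on the host."""
        result = subprocess.run(
            ["docker", "run", "--rm", "--network=none",  # no network access, container removed afterwards
             image, "bash", "-lc", command],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout + result.stderr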

Then, and during development, training data will be collected to improve model interaction with the terminal.

Three key immediate areas of focus are:

  1. Correctly making small edits using heredoc (EOF) inserts, patch files, or other means (see the sketch after this list)
  2. The model tends to prefer editing the test rather than the code
  3. It doesn't explore its environment enough (tree, git log, docs, ls, etc.) and needs to be able to summarize what it has so far rather than relying entirely on context
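
For item 1, the goal is for the model to emit a small unified diff and apply it with patch, rather than rewriting whole files. A rough sketch of what that looks like in Python (the helper names here are made up for illustration):

    import difflib
    import subprocess

    def make_patch(path: str, old: str, new: str) -> str:
        """Build a unified diff describing a small edit to one file."""
        return "".join(difflib.unified_diff(
            old.splitlines(keepends=True), new.splitlines(keepends=True),
            fromfile=path, tofile=path,
        ))

    def apply_patch(diff: str, workdir: str = ".") -> None:
        # Apply the diff with the standard patch tool instead of re-emitting the file.
        subprocess.run(["patch", "-p0"], input=diff, text=True, cwd=workdir, check=True)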

As the UI becomes quicker to work with, with less back and forth and less cleanup of the model's mistakes, it essentially becomes a data labeling and dataset curation tool for a human-supervised, bootstrapped training method for AI coding.

One unique thing that could be done, for instance: any time a change by the AI causes a test to fail, the final version that eventually gets the test to pass could be paired with the original request to teach it better programming. You essentially filter out all the code that caused a failed test, leaving a fine-tuning dataset built from the code it iterated on for you (rough sketch below).
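
A rough sketch of that filtering step, assuming a hypothetical table layout in which each recorded change stores the originating request, the final accepted code, and whether its tests passed (the project's actual SQLite schema will differ):

    import sqlite3

    def build_finetune_pairs(db_path: str) -> list[dict]:
        """Pair each original request with the final, test-passing version of the code."""
        con = sqlite3.connect(db_path)
        rows = con.execute(
            # Hypothetical columns: request_text, final_code, tests_passed, is_final_version
            "SELECT request_text, final_code FROM changes "
            "WHERE tests_passed = 1 AND is_final_version = 1"
        ).fetchall()
        con.close()
        return [{"prompt": req, "completion": code} for req, code in rows]

    # Usage: pairs = build_finetune_pairs("conversations.db")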

License

This project is licensed under the GNU General Public License v3.0 or any later version - see the LICENSE file for details.

Disclaimer

I make no warranty for this software; it can execute fully arbitrary code, so use at your own risk! Always check the commands this thing is going to run before it runs them!



Download files

Download the file for your platform.

Source Distribution

ModelProgrammer-0.0.3.tar.gz (25.4 kB)

Built Distribution

ModelProgrammer-0.0.3-py3-none-any.whl (27.7 kB)

File details

Details for the file ModelProgrammer-0.0.3.tar.gz.

File metadata

  • Size: 25.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.9

File hashes

Hashes for ModelProgrammer-0.0.3.tar.gz
  • SHA256: 9dae65c2a005569d543d00bf5347b417ac27b42f5c155f64608e4ac38f6b9590
  • MD5: f93cd68da4a427f15b51b7f03d15ddfe
  • BLAKE2b-256: 598bf5ca004e92664ab396a7a57eaab0e86e6bdb19e7f96e0508b23f79a83834


File details

Details for the file ModelProgrammer-0.0.3-py3-none-any.whl.


File hashes

Hashes for ModelProgrammer-0.0.3-py3-none-any.whl
  • SHA256: adc9b23e35745f829a1141ea5d1ca3b552c07d8c26965ea5d326e9cbde33c9c0
  • MD5: 6c2e4fe9d1c7c7543f1a6db2d9de15a7
  • BLAKE2b-256: b672b55c1d3d0e3908bad3877aaae3a1a3f1f14cc8fdc1f4ab42b058313602a0

