Self-Operating Computer Framework
A framework to enable multimodal models to operate a computer.
Using the same inputs and outputs as a human operator, the model views the screen and decides on a series of mouse and keyboard actions to reach an objective.
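To make that loop concrete, below is a minimal sketch of the view-the-screen, decide, act cycle, assuming pyautogui for input simulation and the OpenAI Python client; the prompt, the action format, and the parsing are illustrative and not the framework's actual implementation.

```python
# Minimal sketch of the screenshot -> model -> action loop (illustrative only).
import base64
import io

import pyautogui               # simulates mouse and keyboard input
from openai import OpenAI      # assumes the openai Python client is installed

client = OpenAI()              # reads OPENAI_API_KEY from the environment


def screenshot_as_data_url() -> str:
    """Capture the screen and encode it as a base64 data URL for the model."""
    buffer = io.BytesIO()
    pyautogui.screenshot().save(buffer, format="PNG")
    encoded = base64.b64encode(buffer.getvalue()).decode()
    return f"data:image/png;base64,{encoded}"


def next_action(objective: str) -> str:
    """Ask a vision model for the next step toward the objective."""
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Objective: {objective}. Reply with one action, "
                         "e.g. CLICK 120 340, TYPE hello, or DONE."},
                {"type": "image_url",
                 "image_url": {"url": screenshot_as_data_url()}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()


objective = "Open a text editor and write a haiku"
while True:
    action = next_action(objective)
    if action.startswith("CLICK"):
        _, x, y = action.split()
        pyautogui.click(int(x), int(y))       # act on the model's XY estimate
    elif action.startswith("TYPE"):
        pyautogui.write(action[5:], interval=0.05)
    else:
        break                                 # DONE or unrecognized reply
```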
Key Features
- Compatibility: Designed for various multimodal models.
- Integration: Currently integrated with GPT-4V as the default model, with extended support for Gemini Pro Vision.
- Future Plans: Support for additional models.
Current Challenges
Note: GPT-4V's error rate in estimating XY mouse click locations is currently quite high. This framework aims to track the progress of multimodal models over time, aspiring to achieve human-level performance in computer operation.
Ongoing Development
At HyperwriteAI, we are developing Agent-1-Vision, a multimodal model with more accurate click location predictions.
Agent-1-Vision Model API Access
We will soon be offering API access to our Agent-1-Vision model.
If you're interested in gaining access to this API, sign up here.
Additional Thoughts
We recognize that some operating system functions may be executed more efficiently with hotkeys (for example, focusing the browser address bar with command + L) rather than by simulating a mouse click at the correct XY location. We plan to make these improvements over time. However, it's important to note that many actions require the accurate selection of visual elements on the screen, necessitating precise XY mouse click locations. A primary focus of this project is to refine the accuracy of determining these click locations. We believe this is essential for achieving a fully self-operating computer in the current technological landscape.
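To illustrate the difference between the two action styles, here is a hedged sketch using pyautogui; the coordinates are hypothetical and this is not the framework's exact code.

```python
import platform
import pyautogui  # cross-platform mouse/keyboard simulation

# Hotkey approach: focus the browser address bar without knowing where it is.
modifier = "command" if platform.system() == "Darwin" else "ctrl"
pyautogui.hotkey(modifier, "l")

# Click approach: the model must first estimate the address bar's XY location.
estimated_x, estimated_y = 512, 74   # hypothetical model output, not a real prediction
pyautogui.click(estimated_x, estimated_y)
```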
Demo
Quick Start Instructions
Below are instructions to set up the Self-Operating Computer Framework locally on your computer.
Option 1: Traditional Installation
- Clone the repo to a directory on your computer:
git clone https://github.com/OthersideAI/self-operating-computer.git
- Cd into directory:
cd self-operating-computer
- Create a Python virtual environment. Learn more about Python virtual environments.
python3 -m venv venv
- Activate the virtual environment:
source venv/bin/activate
- Install Project Requirements and Command-Line Interface: Instead of using pip install ., you can now install the project directly from PyPI with:
pip install self-operating-computer
- Then rename the .example.env file to .env so that you can save your OpenAI key in it:
mv .example.env .env
- Add your OpenAI key to your new .env file. If you don't have one, you can obtain an OpenAI key here:
OPENAI_API_KEY='your-key-here'
- Run it!
operate
- Final Step: The Terminal app will ask for permission for "Screen Recording" and "Accessibility" in the "Security & Privacy" page of Mac's "System Preferences".
Option 2: Installation using .sh script
- Clone the repo to a directory on your computer:
git clone https://github.com/OthersideAI/self-operating-computer.git
- Cd into directory:
cd self-operating-computer
- Run the installation script:
./run.sh
Using operate
Modes
Multimodal Models -m
An additional model is now compatible with the Self-Operating Computer Framework. Try Google's gemini-pro-vision by following the instructions below.
Add your Google AI Studio API key to your .env file. If you don't have one, you can obtain a key here after setting up your Google AI Studio account. You may also need to authorize credentials for a desktop application. It took me a bit of time to get it working; if anyone knows a simpler way, please make a PR:
GOOGLE_API_KEY='your-key-here'
Start operate with the Gemini model:
operate -m gemini-pro-vision
Set-of-Mark Prompting -m gpt-4-with-som
The Self-Operating Computer Framework now supports Set-of-Mark (SoM) Prompting with the gpt-4-with-som command. This new visual prompting method enhances the visual grounding capabilities of large multimodal models.
Learn more about SoM Prompting in the detailed arXiv paper: here.
For this initial version, a simple YOLOv8 model is trained for button detection, and the best.pt file is included under model/weights/. Users are encouraged to swap in their own best.pt file to evaluate performance improvements. If your model outperforms the existing one, please contribute by creating a pull request (PR).
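As a rough illustration of what the bundled weights do, here is a minimal sketch of running the button detector with the ultralytics package; the screenshot filename and the printed click targets are assumptions for the example, not the framework's exact code.

```python
from ultralytics import YOLO   # assumes the ultralytics package is installed

model = YOLO("model/weights/best.pt")        # bundled button-detection weights
results = model("screenshot.png")            # detect buttons in a captured screenshot

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()    # bounding box in pixel coordinates
    center = ((x1 + x2) / 2, (y1 + y2) / 2)  # a candidate click target to mark
    print(f"button at {center}, confidence {float(box.conf[0]):.2f}")
```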
Start operate with the SoM model:
operate -m gpt-4-with-som
Voice Mode --voice
The framework supports voice inputs for the objective. Try voice by following the instructions below.
Install the additional requirements-audio.txt:
pip install -r requirements-audio.txt
Install device requirements. For Mac users:
brew install portaudio
For Linux users:
sudo apt install portaudio19-dev python3-pyaudio
Run with voice mode
operate --voice
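For reference, below is a hedged sketch of capturing a spoken objective with the SpeechRecognition package (which relies on PyAudio and PortAudio, hence the device requirements above); it illustrates the idea and is not necessarily how operate --voice is implemented internally.

```python
import speech_recognition as sr  # assumes SpeechRecognition + PyAudio are installed

recognizer = sr.Recognizer()
with sr.Microphone() as source:                  # uses the PortAudio backend
    print("Say your objective...")
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    audio = recognizer.listen(source)

objective = recognizer.recognize_google(audio)   # any speech-to-text backend works here
print(f"Objective: {objective}")
```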
Contributions are Welcomed!
If you want to contribute yourself, see CONTRIBUTING.md.
Feedback
For any input on improving this project, feel free to reach out to Josh on Twitter.
Join Our Discord Community
For real-time discussions and community support, join our Discord server.
- If you're already a member, join the discussion in #self-operating-computer.
- If you're new, first join our Discord Server and then navigate to the #self-operating-computer channel.
Follow HyperWriteAI for More Updates
Stay updated with the latest developments:
Compatibility
- This project is compatible with Mac OS, Windows, and Linux (with X server installed).
OpenAI Rate Limiting Note
The gpt-4-vision-preview model is required. To unlock access to this model, your account needs to spend at least $5 in API credits. Pre-paying for these credits will unlock access if you haven't already spent the minimum $5.
Learn more here
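One way to check whether your account already has access, assuming the openai Python client is installed: list the models visible to your API key; the vision model may simply not appear until the spend threshold is met.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
available = {m.id for m in client.models.list()}
print("gpt-4-vision-preview available:", "gpt-4-vision-preview" in available)
```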
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Hashes for self-operating-computer-1.2.0.tar.gz

Algorithm | Hash digest
---|---
SHA256 | fd223714b7b4065c12912badbcc49aed1527af923846a6652da93f8ec0d64a0e
MD5 | abdc81228107f34bbf1ce5467101f9d2
BLAKE2b-256 | bb00f8ab19e2675d7eedf550523b094e26af27fafe9e250c6e46fb5ebd95473a
Hashes for self_operating_computer-1.2.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 85a494fbb5866c9a27f3a15dff4d2c74efc5e6272b8a4817382aa8ecc8beeb6e
MD5 | edd10376e0f0faa45a920b1c6b30975b
BLAKE2b-256 | d333a7cd02ccafef4c4e6e05552eeda585e4cf7c7da8327dcc1d2ca52409df68