
ProteusAI is a python package designed for AI driven protein engineering.

ProteusAI

ProteusAI is a library for machine learning-guided protein design and engineering. It supports workflows ranging from protein structure prediction to supervised and zero-shot prediction of mutational effects. The goal is to provide state-of-the-art machine learning for protein engineering in a single, central library.

ProteusAI is primarily powered by PyTorch, scikit-learn, and ESM protein language models.

Getting started


The commands below have been tested on Ubuntu 20.04 and macOS; other operating systems may require some tweaks. We recommend installing ProteusAI in a conda environment.

conda create -n proteusAI python=3.8
conda activate proteusAI

Install using pip locally

Clone the repository and cd to ProteusAI.

cd ProteusAI
pip install . --find-links https://data.pyg.org/whl/torch-2.4.0+cpu.html

Setting up the Python environment

Alternatively, you can create the environment from the provided .yml file with the following commands:

cd ProteusAI
conda env create -f proteusEnvironment.yml -y

Setting up the Shiny server

Install Shiny Server on Ubuntu 18.04+ (instructions for other systems are available at posit.co; please skip the section about installing R Shiny packages) with the following commands:

sudo apt-get install gdebi-core
wget https://download3.rstudio.org/ubuntu-18.04/x86_64/shiny-server-1.5.22.1017-amd64.deb
sudo gdebi shiny-server-1.5.22.1017-amd64.deb

Edit the default Shiny Server config file /etc/shiny-server/shiny-server.conf (the sudo command or root privileges are required):

# Use python from the virtual environment to run Shiny apps
python /home/proteus_developer/miniforge3/envs/proteusAI_depl/bin/python;

# Instruct Shiny Server to run applications as the user "shiny"
run_as shiny;

# Never delete logs, regardless of the app's exit code
preserve_logs true;

# Do not replace errors with the generic error message, show them as they are
sanitize_errors false;

# Define a server that listens on port 80
server {
  listen 80;

  # Define a location at the base URL
  location / {

    # Host the directory of Shiny Apps stored in this directory
    site_dir /srv/shiny-server;

    # Log all Shiny output to files in this directory
    log_dir /var/log/shiny-server;

    # When a user visits the base URL rather than a particular application,
    # an index of the applications available in this directory will be shown.
    directory_index on;
  }
}

Restart Shiny Server with the following command to apply the configuration changes:

sudo systemctl restart shiny-server

If you deploy the app on your local machine, make sure that port 80 is open and not blocked by a firewall. You can check it with nc:

nc -zv <your-ip-address> 80
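If you prefer checking from Python, here is a minimal sketch using only the standard library (the host and port below are placeholders for your own values):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with your server's address:
# print(port_open("127.0.0.1", 80))
```

A refused or timed-out connection means the port is closed or filtered by a firewall.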

If you deploy the app on your Azure Virtual Machine (VM), please add an Inbound Port rule in the Networking - Network Settings section on Azure Portal. Set the following properties:

Source: Any
Source port ranges: *
Destination: Any
Service: HTTP
Destination port ranges: 80
Protocol: TCP
Action: Allow

Other fields can be left at their defaults.

Finally, create symlinks to your app files in the default Shiny Server folder /srv/shiny-server/:

sudo ln -s /home/proteus_developer/ProteusAI/app/app.py /srv/shiny-server/app.py
sudo ln -s /home/proteus_developer/ProteusAI/app/logo.png /srv/shiny-server/logo.png
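If the index page later shows a missing app, a common culprit is a symlink whose target was moved or renamed. Here is a small sketch (the directory path is just an example) that lists broken links:

```python
from pathlib import Path

def broken_links(directory):
    """Return symlinks in `directory` whose targets no longer exist."""
    return [p for p in Path(directory).iterdir()
            if p.is_symlink() and not p.resolve().exists()]

# Example:
# print(broken_links("/srv/shiny-server"))
```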

If everything has been done correctly, you should see the application index page at http://127.0.0.1 (if you deploy your app locally) or at http://<insert-your-public-VM-IP-address-here> (if you deploy your app on an Azure VM). Additionally, the remote app can be opened in your local browser (with the Shiny extension for Visual Studio Code enabled) if you run the following terminal command on the VM:

/home/proteus_developer/miniforge3/envs/proteusAI_depl/bin/python -m shiny run --port 33015 --reload --autoreload-port 43613 /home/proteus_developer/ProteusAI/app/app.py

If you get warnings, debug messages, or "Disconnected from the server" errors, the likely causes are:

  • missing Python modules,
  • version mismatches in the installed Python modules,
  • relative paths used instead of absolute paths (Shiny Server resolves relative paths from the /srv/shiny-server/ folder), or
  • logical errors in the code.

To debug the application, check the server logs under /var/log/shiny-server (the log_dir parameter can be changed in the Shiny Server config file /etc/shiny-server/shiny-server.conf).
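Since Shiny Server resolves relative paths from /srv/shiny-server/ rather than from your app's source directory, anchoring file access to the app file itself avoids the relative-path issue above. A sketch (the file names are illustrative):

```python
from pathlib import Path

# Resolve resources relative to this file, not the process working directory.
# .resolve() also follows the /srv/shiny-server symlink back to the real app folder.
APP_DIR = Path(__file__).resolve().parent

logo_path = APP_DIR / "logo.png"   # found regardless of where the server starts the app
data_dir = APP_DIR / "data"        # illustrative subdirectory
```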

Note on permissions:

The app may fail when it lacks permission to create directories or write files in certain locations. When this happens, one workaround is:

chmod 777 directory_name

Note that 777 grants read, write, and execute permissions to all users; a safer alternative is to give ownership of the directory to the user running the apps, e.g. sudo chown -R shiny:shiny directory_name.
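Before loosening permissions, it can help to pin down exactly which path the server process cannot write to. A quick standard-library check (the listed paths are examples):

```python
import os

def writable(path):
    """Report whether the current process may write to `path`."""
    return os.access(path, os.W_OK)

# Example paths to check from within the app:
# for p in ("/srv/shiny-server", "/var/log/shiny-server"):
#     print(p, "writable" if writable(p) else "NOT writable")
```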

LLM


Meta's protein language models are already installed in the proteusAI environment. However, if you also want to use ESMFold (which requires a capable GPU), you can install it as well:

pip install 'openfold @ git+https://github.com/aqlaboratory/openfold.git@4b41059694619831a7db195b7e0988fc4ff3a307'

Optionally, you can work in Jupyter notebooks if you prefer. To visualize protein structures in Jupyter notebooks, run the following command:

jupyter nbextension enable --py widgetsnbextension

Download files

Source distribution: proteusai-0.0.2.tar.gz (131.0 kB)
Built distribution: proteusAI-0.0.2-py3-none-any.whl (155.2 kB)
