Predict Backend

To run this project locally, you will need a dev.env file in the repository root directory. Reasonably current dev.env files can be found on the shared drive at Z:\Predict_Local_Credentials\backend-env

Running the application

Be sure Docker is installed.

Before running the script, you must have an active AWS session (MFA). Instructions for creating an AWS CLI session with MFA are here: https://virtualitics.atlassian.net/wiki/spaces/CLOUD/pages/593920290/Setting+up+AWS+Access
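In rough outline, creating an MFA session with the AWS CLI looks something like the sketch below; the MFA device ARN and token code are placeholders, and the Confluence page above is the authoritative procedure.

    # Placeholder ARN and token code; follow the Confluence page above for specifics.
    aws sts get-session-token \
        --serial-number arn:aws-us-gov:iam::123456789012:mfa/your.username \
        --token-code 123456
    # Export the returned credentials as AWS_ACCESS_KEY_ID,
    # AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN (or write them to a profile).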

For local development, retrieve the dev.env file from Keeper in the shared folder "Predict ENV Files". You will need to ask someone on the dev team (who already has access) to be added to this folder.

You can start the application by running the start.sh shell script. This starts up the application via the docker-compose.yaml file. Read more about compose here: https://docs.docker.com/compose/compose-application-model/

This start.sh script will (see the sketch after this list):

  • Log in to our container repo on AWS
  • Pull the latest images (disable this line if you do not wish to pull latest images)
  • Launch the containers using docker-compose.yaml
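For reference, the script boils down to something like this sketch (not the actual script; the registry URL is the same one used in the image build command below):

    # Sketch of what start.sh does, assuming the ECR registry shown below.
    aws ecr get-login-password --region us-gov-west-1 \
        | docker login --username AWS --password-stdin 476459316612.dkr.ecr.us-gov-west-1.amazonaws.com
    docker-compose pull   # disable this step if you do not want the latest images
    docker-compose up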

If you run docker-compose with the -d flag, it will run in the background and you will not automatically see logs.

If running with the -d flag, you can run docker logs -f <container name> to check the logs for a specific container.

The backend and worker containers also copy over your local machine's AWS credentials to access data for specific apps, which is stored on AWS servers. However, if you have set S3_URL=http://s3:9000 in your dev.env, "S3" will be Minio, which is defined in docker-compose.yaml. Minio functions almost the same as AWS S3. You may need to import data (e.g., fixtures/) into Minio for certain apps to run locally.
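One way to seed Minio is to point the AWS CLI at it via --endpoint-url. A sketch, assuming Minio is exposed on localhost:9000 and you are targeting the default predict-dev bucket (see "Creating local s3 buckets" below for others):

    # Use the Minio credentials from dev.env, not your real AWS keys.
    aws --endpoint-url http://localhost:9000 s3 cp fixtures/ s3://predict-dev/ --recursive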

Adding Apps locally for fast app development

  1. Decide what you want to call your module.
  2. Add your module to the PROJECTS_LIST env var, e.g.: PROJECTS_LIST='["projects", "predict_demos", "your_projects"]'
  3. Make sure your module's volume is mounted in the predict-backend and predict-worker containers in docker-compose.yaml:
      - .:/opt/app-root/src/
      - ../predict-demos/predict_demos/:/opt/app-root/src/predict_demos/
      - ../your_projects/:/opt/app-root/src/your_projects
      - ~/.aws/:/opt/app-root/src/.aws/
  4. Run ./start.sh, go to localhost:3000, and log in (the login will be whatever your ACCOUNTS_LOGIN env var points at, probably staging accounts).
  5. Make and save your custom apps in ../your_projects/.
  6. You should see your custom apps.
  7. Debugging: If your apps aren’t loading or aren’t visible, check the backend container logs for errors. If your apps error while running, check the worker container logs for errors. If you get errors running apps that connect to Explore, check those two containers and the websocket container for errors (handy log commands follow this list). Read more about Docker volume mounts here: https://docs.docker.com/storage/volumes/
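Handy log commands while debugging; predict-backend is the container name used elsewhere in this README, but the worker and websocket names are assumptions, so confirm them with docker ps:

    docker ps                          # confirm the actual container names first
    docker logs -f predict-backend     # apps not loading or not visible
    docker logs -f predict-worker      # apps erroring while running (assumed name)
    docker logs -f predict-websocket   # Explore connection issues (assumed name)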

Using the CLI

You can also use the CLI to upload apps locally; it is available here: https://pypi.org/project/virtualitics-cli/ Ensure you download a virtualitics-cli version compatible with the Predict version you are using.
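For example (the version pin is illustrative, assuming CLI releases track Predict versions; check PyPI for the right one):

    pip install "virtualitics-cli==1.23.*"   # illustrative pin; match your Predict version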

Using virtualitics_sdk

If you want, you can install https://pypi.org/project/virtualitics-sdk/ for intelligent autocompletion while writing apps in Predict.
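For example, in the virtualenv your editor uses:

    pip install virtualitics-sdk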

Build Predict Backend image - NOT REQUIRED TO RUN LOCALLY

For the most part, since we have self-service deployments (https://github.virtualitics.com/virtualitics/self-service) and GitHub Actions on every PR will build an image, run unit tests, and can deploy to dev (and then run integration tests), there is not much of a reason to build images locally.

But if you insist: create a file named PREDICT_KEY in the root of your project. The value to include in the file is located on the Z: drive under /Predict_Local_Credentials/predict-whl-decryption-keys.txt. Do not include the variable name in the file.
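A sketch of creating the file (printf is used on the assumption that a trailing newline is unwanted; echo would add one):

    # Paste only the key value: no variable name, no trailing newline.
    printf '%s' '<key value from predict-whl-decryption-keys.txt>' > PREDICT_KEY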

To build a backend image run:

    docker build . -t 476459316612.dkr.ecr.us-gov-west-1.amazonaws.com/predict-backend:latest-dev \
        --target release --secret id=PREDICT_KEY,src=PREDICT_KEY

Running Unit Tests

Unit tests run automatically via GitHub Actions when you open a PR in GitHub.

To run locally: you can run the unit tests for the application in Docker. First, make sure that you have taken down the project if it is running by executing docker-compose down. Then run the helper script run-tests.sh. This will run the backend docker image, mount your local code into it, and run the unit tests as the entrypoint. The environment in this container is unconfigured and not connected to any other services, so this is a more representative way to run the tests.
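In short:

    docker-compose down   # take the project down first if it is running
    ./run-tests.sh        # runs the backend image with your code mounted; unit tests are the entrypoint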

Running Integration Tests

Integration tests run via GitHub Actions when you make a PR in GitHub, once an image is built, unit tests have passed, and you have deployed to dev. Please coordinate with other devs during the workday if a lot of people are pushing to dev at once.

To run integration tests locally (the steps are collected into one block after this list):

  • Update and launch the necessary containers: ./start-integration-tests.sh
  • Create an interactive shell in the backend container: docker exec -it predict-backend bash
  • Install pytest: pip install pytest
  • Run the integration tests: python -m pytest test/integration/
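Putting those steps together:

    ./start-integration-tests.sh
    docker exec -it predict-backend bash
    # then, inside the container:
    pip install pytest
    python -m pytest test/integration/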

Running E2E Tests

First, run pip3 install -r test/e2e/requirements.txt to install the necessary requirements. Then, make sure the project is running by running ./start.sh. Finally, run python -m pytest test/e2e to run the e2e test suite locally.
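In short:

    pip3 install -r test/e2e/requirements.txt
    ./start.sh
    python -m pytest test/e2e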

If there are issues, ensure there is enough memory allocated in Docker.

Documentation

Documentation is handled by a separate repo: https://github.virtualitics.com/virtualitics/predict-docs

Running Telemetry Alongside

  • Uncomment the telemetry and Telemetry pg sections in docker-compose.yaml, along with the relevant "depends_on" entries
  • Set an env var called TELEMETRY_DIR to your Telemetry root directory (example below)
  • Uncomment the telemetry and postgres-telemetry services in start.sh
  • Be sure to switch Telemetry to run on port 6000, and Telemetry pg to run on 5532 (or whatever you prefer)
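For example (the path is a placeholder for wherever your Telemetry checkout lives):

    # In your shell (or wherever docker-compose picks up env vars):
    export TELEMETRY_DIR=/path/to/telemetry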

Setting up and using Poetry

This section only applies if you need to make changes to dependencies.

Poetry is a Python dependency management system that we use in Predict.

  • You will need to install Poetry locally (https://python-poetry.org/docs/) and then set up a virtualenv with the Python version specified in the pyproject.toml file, which as of this writing is Python 3.11.
  • Please try to keep the pyproject.toml as top-level as possible: specify the libraries you need, not all of their child dependencies.
  • Poetry generates a lockfile, which is normally used across all development to install those specific library versions. Do not edit the lockfile by hand. You can edit the pyproject.toml by hand if you need to, but most Poetry commands should take care of what you need to do.
  • The lockfile is listed in .gitignore, and is not used. We are using Poetry just for the dependency resolution piece.
  • Since we use images created by GitHub Actions, don't worry about the lockfile installing anything; all the packages are already installed.
  • If you would like to add a package, run poetry add {package}
  • If you would like to remove a package, run poetry remove {package}
  • If you would like to update all packages to the latest compatible versions, run poetry update
  • If you would like to update the Python version used, change the python = "3.11.*" line under [tool.poetry.dependencies] in the pyproject.toml file to whatever Python version you need, and then set up your virtual env to also use that same version. Then run poetry update.
  • Poetry has excellent documentation for more details; a typical session is sketched below.
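A typical session (the package name is just an example):

    poetry add requests      # add a dependency
    poetry remove requests   # remove it again
    poetry update            # update everything to the latest compatible versions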

How to install updated requirements

  • If you want to update all top-level requirements to latest, grab the pyproject.toml file, strip the versions out into a file, and run this command: cat /your/directory/top_level_reqs_no_version.txt | while read line; do poetry add "${line}"@latest; done
  • When you are done updating, run this command: poetry export -f requirements.txt --output requirements.txt --without-hashes
  • This will overwrite requirements.txt, which is used in the Dockerfile to install dependencies (the whole flow is collected into one block after this list).
  • Run the GitHub Actions pipeline or build locally.
  • The reason for this is that we need an accurate and up-to-date requirements.txt file for Dependabot and Ironbank.
  • Note: on macOS, Poetry installation can fail if Xcode is not updated to match the current OS version (i.e., you've installed OS updates but haven't updated Xcode). Update Xcode using the App Store, and accept the license using sudo xcodebuild -license before installing Poetry. Additionally, the Poetry install script can be troublesome in some other macOS scenarios; there is a discussion (with some suggestions) here: https://github.com/python-poetry/install.python-poetry.org/issues/24#issuecomment-1692427804
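Putting the update-and-export flow together (top_level_reqs_no_version.txt is the stripped package list described above, one name per line):

    # Bump every top-level requirement to its latest version.
    cat /your/directory/top_level_reqs_no_version.txt | while read line; do
        poetry add "${line}"@latest
    done
    # Regenerate the requirements.txt the Dockerfile installs from.
    poetry export -f requirements.txt --output requirements.txt --without-hashes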

Creating local s3 buckets

If you wish to automatically create s3 buckets in Minio on startup (other than predict-dev), add them to the BUCKETS_LIST var in dev.env as a JSON list, similar to PROJECTS_LIST. Otherwise, DSO takes care of s3 buckets in deployments.
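For example, in dev.env:

    # Both vars are JSON lists.
    PROJECTS_LIST='["projects", "predict_demos", "your_projects"]'
    BUCKETS_LIST='["predict-utilities"]'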

Predict config files

These are managed by DevSecOps via Terraform. Consult with them if you need updates.

Releasing

Generally, these steps require coordination with DevSecOps. GitHub Actions now takes care of tagging the official versions (v1.x.x). Once QA has signed off on a staging build, go to Actions and run the Deploy to Production workflow: https://github.virtualitics.com/virtualitics/predict-backend/actions/workflows/deploy-to-prod.yml

Be sure to promote the correct container that QA has signed off on. This will also build and deploy the demo apps to production.

Be sure the config file specifying which projects to download for the deployment is up to date. Consult with DevSecOps. Be sure that the PROJECTS_LIST environment variable is correct in the production deployment and matches what should be installed from the project config file.

Updating 3rd Party Licenses

https://docs.google.com/spreadsheets/d/1fAxzyF9F75VvQ6LtFdut4vQzNhAee3H_f6Jh5HwtQww/edit#gid=1647524638

When a release is set, update the 3rd party license Google sheet for potential auditing purposes. Create a new sheet with the version number, download and run the release backend image, and pip install pip-licenses. Then run pip-licenses --summary --format=csv and pip-licenses --from=mixed --format=csv --with-maintainers --with-authors --with-urls --with-description. Combine the two to make a new sheet of 3rd party licenses.
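In short, inside the release backend container (redirecting to CSV files is just one convenient way to capture the output):

    pip install pip-licenses
    pip-licenses --summary --format=csv > summary.csv
    pip-licenses --from=mixed --format=csv --with-maintainers --with-authors \
        --with-urls --with-description > detailed.csv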

Running Alternate Version Containers (Instead of release-1.x.x)

Replace release-1.x.x on each container image line in docker-compose.yaml with whatever image tag you wish to use. Valid image tags tend to be latest-stg, latest-dev-a, v1.22.0 (official release tag), etc. Be aware that the frontend and backend containers should use the same tag, as we do not test differing frontend/backend version compatibility. The worker, websocket, and backend containers must all be on the same tag as well.
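One quick way to swap every tag at once, assuming the compose file currently pins release-1.22.0 (substitute whatever tag is actually there; on macOS use sed -i ''):

    # Swap the image tag on every line of docker-compose.yaml (GNU sed shown).
    sed -i 's/release-1\.22\.0/latest-stg/g' docker-compose.yaml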

Common errors running locally:
