AutoMLOps
AutoMLOps is a service that generates a production-ready MLOps pipeline from Jupyter Notebooks, bridging the gap between Data Science and DevOps and accelerating the adoption and use of Vertex AI. The service generates an MLOps codebase for users to customize and provides a way to build and manage a CI/CD-integrated MLOps pipeline from the notebook. AutoMLOps automatically builds a source repository for versioning, Cloud Build configs and triggers, an Artifact Registry for storing custom components, GCS buckets, service accounts with updated IAM privileges for running pipelines, enables the required APIs (Cloud Run, Cloud Build, Artifact Registry, etc.), creates a runner service API in Cloud Run for submitting PipelineJobs to Vertex AI, and a Cloud Scheduler job for submitting PipelineJobs on a recurring basis. These automatic integrations empower data scientists to take their experiments to production more quickly, allowing them to focus on what they do best: providing actionable insights through data.
Prerequisites
In order to use AutoMLOps, the following are required:
- Python 3.7 - 3.10
- Google Cloud SDK 407.0.0
- beta 2022.10.21
- git installed
- git logged-in:
  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"
- Application Default Credentials (ADC) are set up. This can be done through the following commands:
gcloud auth application-default login
gcloud config set account <account@example.com>
AutoMLOps generates code that is compatible with kfp<2.0.0.
Install
Install AutoMLOps from PyPI: pip install google-cloud-automlops
Or install locally by cloning the repo and running pip install .
Dependencies
- docopt==0.6.2
- docstring-parser==0.15
- pipreqs==0.4.11
- PyYAML==6.0.1
- yarg==0.1.9
GCP Services
AutoMLOps makes use of the following products by default:
- Vertex AI Pipelines
- Artifact Registry
- Google Cloud Storage
- Cloud Build
- Cloud Build Triggers
- Cloud Run
- Cloud Scheduler
- Cloud Tasks
APIs & IAM
AutoMLOps will enable the following APIs:
- cloudresourcemanager.googleapis.com
- aiplatform.googleapis.com
- artifactregistry.googleapis.com
- cloudbuild.googleapis.com
- cloudscheduler.googleapis.com
- cloudtasks.googleapis.com
- compute.googleapis.com
- iam.googleapis.com
- iamcredentials.googleapis.com
- ml.googleapis.com
- run.googleapis.com
- storage.googleapis.com
- sourcerepo.googleapis.com
AutoMLOps will update IAM privileges for the following accounts:
- Pipeline Runner Service Account (one is created if it does not already exist; defaults to: vertex-pipelines@PROJECT_ID.iam.gserviceaccount.com). Roles added:
- roles/aiplatform.user
- roles/artifactregistry.reader
- roles/bigquery.user
- roles/bigquery.dataEditor
- roles/iam.serviceAccountUser
- roles/storage.admin
- roles/run.admin
- Cloud Build Default Service Account (PROJECT_NUMBER@cloudbuild.gserviceaccount.com). Roles added:
- roles/run.admin
- roles/iam.serviceAccountUser
- roles/cloudtasks.enqueuer
- roles/cloudscheduler.admin
User Guide
For a user guide, please view these slides.
List of Examples
Training
- 00_introduction_training_example
- 00_introduction_training_example_no_notebook
- 01_clustering_example
- 02_tensorflow_transfer_learning_gpu_example
- 03_bqml_introduction_training_example
- 04_bqml_forecasting-retail-demand
Inferencing
Options
AutoMLOps CI/CD options:
run_local: bool that specifies whether to generate files and resources locally or to use the cloud CI/CD workflow. Defaults to True. See the CI/CD Workflow section below.
Required parameters:
project_id: str
pipeline_params: dict
Optional parameters (defaults shown):
af_registry_location: str = 'us-central1'
af_registry_name: str = 'vertex-mlops-af'
base_image: str = 'python:3.9-slim'
cb_trigger_location: str = 'us-central1'
cb_trigger_name: str = 'automlops-trigger'
cloud_run_location: str = 'us-central1'
cloud_run_name: str = 'run-pipeline'
cloud_tasks_queue_location: str = 'us-central1'
cloud_tasks_queue_name: str = 'queueing-svc'
csr_branch_name: str = 'automlops'
csr_name: str = 'AutoMLOps-repo'
custom_training_job_specs: list[dict] = None
gs_bucket_location: str = 'us-central1'
gs_bucket_name: str = None
pipeline_runner_sa: str = None
run_local: bool = True
schedule_location: str = 'us-central1'
schedule_name: str = 'AutoMLOps-schedule'
schedule_pattern: str = 'No Schedule Specified'
vpc_connector: str = None
AutoMLOps will generate the resources specified by these parameters (e.g. Artifact Registry, Cloud Source Repository, etc.). If run_local is set to False, AutoMLOps will turn the current working directory of the notebook into a Git repo and use it for the CSR. Additionally, if a cron-formatted string is given as an argument for schedule_pattern, it will set up a Cloud Scheduler job to run the pipeline accordingly.
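A minimal sketch of a call using a few of these parameters, assuming a hypothetical project ID, bucket name, and pipeline parameter values (the import path is also an assumption and may differ by release):

from google_cloud_automlops import AutoMLOps  # assumed import path for this release

# Hypothetical values for illustration only.
pipeline_params = {
    'bq_table': 'my-project.my_dataset.my_table',
    'data_path': 'gs://my-project-bucket/data.csv',
    'model_directory': 'gs://my-project-bucket/models',
}

AutoMLOps.go(
    project_id='my-project',          # required
    pipeline_params=pipeline_params,  # required
    run_local=False,                  # use the cloud CI/CD workflow described below
    gs_bucket_name='my-project-bucket')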
Customizations
Set scheduled run:
Use the schedule_pattern parameter to specify a cron job schedule to run the pipeline job on a recurring basis.
schedule_pattern = '0 */12 * * *'
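For example, this twice-daily schedule (every 12 hours) can be passed straight through to the generator; the project ID and parameters below are placeholders:

# Illustrative only: submit the PipelineJob every 12 hours.
AutoMLOps.go(
    project_id='my-project',
    pipeline_params=pipeline_params,
    schedule_pattern='0 */12 * * *')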
Set pipeline compute resources:
Use the base_image and custom_training_job_specs parameters to specify resources for any custom component in the pipeline.
base_image = 'us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-11.py310:latest',
custom_training_job_specs = [{
'component_spec': 'train_model',
'display_name': 'train-model-accelerated',
'machine_type': 'a2-highgpu-1g',
'accelerator_type': 'NVIDIA_TESLA_A100',
'accelerator_count': '1'
}]
Use a VPC connector:
Use the vpc_connector parameter to specify a VPC connector.
vpc_connector = 'example-vpc'
Specify package versions:
Use the packages_to_install parameter of @AutoMLOps.component to explicitly specify packages and versions.
@AutoMLOps.component(
packages_to_install=[
"google-cloud-bigquery==2.34.4",
"pandas",
"pyarrow",
"db_dtypes"
]
)
def create_dataset(
bq_table: str,
data_path: str,
project_id: str
):
...
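Components defined this way are then composed into a pipeline. Below is a minimal sketch of wiring the create_dataset component above into a pipeline, assuming the @AutoMLOps.pipeline decorator used in the example notebooks; the pipeline name and any downstream steps are placeholders:

@AutoMLOps.pipeline(name='training-pipeline',
                    description='Illustrative pipeline wiring the component above.')
def pipeline(bq_table: str,
             data_path: str,
             project_id: str):
    # Illustrative only: training and deployment steps would chain off this task.
    create_dataset_task = create_dataset(
        bq_table=bq_table,
        data_path=data_path,
        project_id=project_id)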
Layout
Included in the repository is an example notebook that demonstrates the usage of AutoMLOps. Upon running AutoMLOps.go(project_id='automlops-sandbox',pipeline_params=pipeline_params), a series of directories will be generated automatically, and a PipelineJob will be submitted using the setup below:
.
├── cloud_run                              : Cloud Runner service for submitting PipelineJobs.
│   ├── run_pipeline                       : Contains main.py file, Dockerfile and requirements.txt.
│   └── queueing_svc                       : Contains files for scheduling and queueing jobs to the runner service.
├── components                             : Custom Vertex AI pipeline components.
│   ├── component_base                     : Contains all the Python files, Dockerfile and requirements.txt.
│   ├── create_dataset                     : Pulls data from a BQ table and writes it as a CSV to GCS.
│   ├── train_model                        : Trains a basic decision tree classifier.
│   └── deploy_model                       : Deploys the model to an endpoint.
├── images                                 : Custom container images for training models.
├── pipelines                              : Vertex AI pipeline definitions.
│   ├── pipeline.py                        : Full pipeline definition.
│   ├── pipeline_runner.py                 : Sends a PipelineJob to Vertex AI.
│   └── runtime_parameters                 : Variables to be used in a PipelineJob.
│       └── pipeline_parameter_values.json : JSON containing pipeline parameters.
├── configs                                : Configurations for defining the Vertex AI pipeline.
│   └── defaults.yaml                      : PipelineJob configuration variables.
├── scripts                                : Scripts for manually triggering the Cloud Run service.
│   ├── build_components.sh                : Submits a Cloud Build job that builds and deploys the components.
│   ├── build_pipeline_spec.sh             : Builds the pipeline specs.
│   ├── create_resources.sh                : Creates an Artifact Registry and GCS bucket if they do not already exist.
│   ├── run_pipeline.sh                    : Submits the PipelineJob to Vertex AI.
│   └── run_all.sh                         : Builds components, pipeline specs, and submits the PipelineJob.
└── cloudbuild.yaml                        : Cloud Build configuration file for building custom components.
Cloud Continuous Integration and Continuous Deployment Workflow
If run_local=False, AutoMLOps will generate and use a fully featured CI/CD environment for the pipeline. Otherwise, it will use the local scripts to build and run the pipeline.
Pipeline Components
The example notebook comes with 3 components as part of the pipeline. Additional sample code for commonly used services can be found below; an illustrative component sketch follows the list:
- Feature Store
- BigQuery ML
- Model Registry
- Experiments
- ExplainableAI
- Vertex AI Pipelines
- Google Cloud Pipeline Components
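As an illustration of the kind of component these services appear in, a deployment step might register a trained model and deploy it to an endpoint with the Vertex AI SDK. The sketch below is not code from the example notebook; the display name, serving container image, and machine type are placeholders:

@AutoMLOps.component(packages_to_install=['google-cloud-aiplatform'])
def deploy_model(model_directory: str,
                 project_id: str,
                 region: str):
    # Illustrative sketch: upload the model to the Model Registry, then deploy it to an endpoint.
    from google.cloud import aiplatform

    aiplatform.init(project=project_id, location=region)
    model = aiplatform.Model.upload(
        display_name='example-model',
        artifact_uri=model_directory,
        serving_container_image_uri='us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest')
    model.deploy(machine_type='n1-standard-4')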
Next Steps / Backlog
- Use Terraform for the creation of resources.
- Allow multiple AutoMLOps pipelines within the same directory.
- Alternatives to Pipreqs.
Contributors
Sean Rastatter: Tech Lead
Tony DiLoreto: Project Manager
Allegra Noto: Senior Project Engineer
Ahmad Khan: Engineer
Jesus Orozco: Cloud Engineer
Erin Horning: Infrastructure Engineer
Alex Ho: Engineer
Kyle Sorensen: Cloud Engineer
Disclaimer
This is not an officially supported Google product.
Copyright 2023 Google LLC. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.