pod-autoscaling plugin for Tutor
This plugin enables pod-autoscaling strategies for Open edX instances deployed on Kubernetes with Tutor. It is inspired by the HPA implementation in https://gitlab.com/opencraft/dev/tutor-contrib-grove (thanks @gabor-boros). The strategies offered by the plugin are:
HPA (Horizontal Pod Autoscaler): this mechanism adds or removes pods based on a defined metric threshold, for instance CPU or memory consumption (a sketch of the scaling rule follows this list).
VPA (Vertical Pod Autoscaler): this strategy aims to stabilize the resource consumption of every pod, keeping it between the requests and limits specified in the initial pod configuration.
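To make the HPA mechanism concrete, here is a minimal sketch of the scaling rule the Kubernetes HPA controller applies (it mirrors the formula documented upstream and is only an illustration, not this plugin's code):

import math

def desired_replicas(current_replicas, current_metric, target_metric):
    # Kubernetes HPA rule: desiredReplicas =
    #   ceil(currentReplicas * currentMetricValue / desiredMetricValue)
    return math.ceil(current_replicas * current_metric / target_metric)

# Example: 4 pods averaging 90% CPU against a 60% target scale out to 6 pods.
print(desired_replicas(4, 90, 60))  # -> 6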
Requirements
To use HPA, the installation of metrics-server is required (a quick sanity check is sketched below).
To use VPA, the installation of the Vertical Pod Autoscaler is required.
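As a quick sanity check (a sketch, assuming the official kubernetes Python client and a valid kubeconfig), you can verify that metrics-server is serving the resource metrics API before enabling HPA:

from kubernetes import client, config

config.load_kube_config()  # assumes a valid kubeconfig for the target cluster
api = client.CustomObjectsApi()

# metrics-server serves the metrics.k8s.io/v1beta1 API; this call raises an
# ApiException (404) if it is not installed.
node_metrics = api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")
print(f"metrics-server reports {len(node_metrics['items'])} node(s)")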
Installation
pip install tutor-contrib-pod-autoscaling
Configuration
This plugin implements a filter called AUTOSCALING_CONFIG (tutorpod_autoscaling.hooks.AUTOSCALING_CONFIG), which allows adding or modifying the pod autoscaling configuration for different Open edX services. The plugin itself uses the AUTOSCALING_CONFIG filter to add a default autoscaling configuration (HPA and VPA) for the LMS, CMS, LMS_WORKER and CMS_WORKER deployments based on CPU and memory metrics (check the CORE_AUTOSCALING_CONFIG variable in the plugin.py file).
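A quick way to inspect the resulting configuration (a sketch, assuming AUTOSCALING_CONFIG exposes Tutor's standard Filter.apply API) is:

from tutorpod_autoscaling.hooks import AUTOSCALING_CONFIG

# Run the filter chain over an empty dict to collect the defaults contributed
# by this plugin plus anything added by other plugins.
effective = AUTOSCALING_CONFIG.apply({})
print(sorted(effective))   # the service names configured by default
print(effective["lms"])    # the HPA/VPA settings for the LMS deployment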
Adding/changing HPA/VPA configuration for Open edX services
Operators can take advantage of this plugin to configure HPA/VPA settings for different services. There are two mechanisms to do so:
Create a Tutor plugin and add your HPA/VPA configuration to the tutorpod_autoscaling.hooks.AUTOSCALING_CONFIG filter. For instance, to add HPA support to the forum deployment:
from tutorpod_autoscaling.hooks import AUTOSCALING_CONFIG

@AUTOSCALING_CONFIG.add()
def _add_my_autoscaling(autoscaling_config):
    autoscaling_config["forum"] = {
        "enable_hpa": True,
        "memory_request": "300Mi",
        "cpu_request": 0.25,
        "memory_limit": "1200Mi",
        "cpu_limit": 1,
        "min_replicas": 1,
        "max_replicas": 10,
        "avg_cpu": 300,
        "avg_memory": "",
        "enable_vpa": False,
    }
    return autoscaling_config
You can also override the HPA/VPA configuration for any of the services supported by default, for instance, LMS:
from tutorpod_autoscaling.hooks import AUTOSCALING_CONFIG

@AUTOSCALING_CONFIG.add()
def _add_my_autoscaling(autoscaling_config):
    autoscaling_config["lms"] = {
        "enable_hpa": True,
        "memory_request": "1Gi",
        "cpu_request": 0.4,
        "memory_limit": "2Gi",
        "cpu_limit": 1,
        "min_replicas": 5,
        "max_replicas": 20,
        "avg_cpu": 70,
        "avg_memory": "",
        "enable_vpa": False,
    }
    return autoscaling_config
Set the POD_AUTOSCALING_EXTRA_SERVICES variable to extend HPA/VPA support to other services or modify the default ones:
POD_AUTOSCALING_EXTRA_SERVICES:
  forum:
    enable_hpa: true
    memory_request: 300Mi
    cpu_request: 0.25
    memory_limit: 1200Mi
    cpu_limit: 1
    min_replicas: 1
    max_replicas: 10
    avg_cpu: 300
    avg_memory: ''
    enable_vpa: true
  lms:
    enable_hpa: true
    memory_request: 1Gi
    cpu_request: 0.4
    memory_limit: 2Gi
    cpu_limit: 1
    min_replicas: 5
    max_replicas: 20
    avg_cpu: 70
    avg_memory: ''
    enable_vpa: true
Migrating to Redwood version (18.x.x)
In versions prior to Redwood, the plugin used multiple configuration settings and a couple of patches to provide HPA/VPA support. Suppose you want to migrate to version 18.x.x and you have the following configuration in your config.yml for LMS HPA/VPA support:
POD_AUTOSCALING_LMS_HPA: true
POD_AUTOSCALING_LMS_MEMORY_REQUEST: "350Mi"
POD_AUTOSCALING_LMS_CPU_REQUEST: 0.25
POD_AUTOSCALING_LMS_MEMORY_LIMIT: "1400Mi"
POD_AUTOSCALING_LMS_CPU_LIMIT: 1
POD_AUTOSCALING_LMS_MIN_REPLICAS: 1
POD_AUTOSCALING_LMS_MAX_REPLICAS: 4
POD_AUTOSCALING_LMS_AVG_CPU: 300
POD_AUTOSCALING_LMS_AVG_MEMORY: ""
POD_AUTOSCALING_LMS_VPA: false
The equivalent configuration for the 18.x.x version, using the AUTOSCALING_CONFIG filter, would look like this:
from tutorpod_autoscaling.hooks import AUTOSCALING_CONFIG

@AUTOSCALING_CONFIG.add()
def _add_my_autoscaling(autoscaling_config):
    autoscaling_config["lms"] = {
        "enable_hpa": True,
        "memory_request": "350Mi",
        "cpu_request": 0.25,
        "memory_limit": "1400Mi",
        "cpu_limit": 1,
        "min_replicas": 1,
        "max_replicas": 4,
        "avg_cpu": 300,
        "avg_memory": "",
        "enable_vpa": False,
    }
    return autoscaling_config
The migration of other services follows the same logic.
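As an illustration of that logic, here is a hypothetical helper (the key mapping is taken from the example above; the function itself is not part of the plugin) that converts the old flat settings for one service into the new dict shape:

# Hypothetical migration helper, not part of the plugin.
# Maps the old flat POD_AUTOSCALING_<SERVICE>_* suffixes to the new dict keys.
OLD_TO_NEW = {
    "HPA": "enable_hpa",
    "MEMORY_REQUEST": "memory_request",
    "CPU_REQUEST": "cpu_request",
    "MEMORY_LIMIT": "memory_limit",
    "CPU_LIMIT": "cpu_limit",
    "MIN_REPLICAS": "min_replicas",
    "MAX_REPLICAS": "max_replicas",
    "AVG_CPU": "avg_cpu",
    "AVG_MEMORY": "avg_memory",
    "VPA": "enable_vpa",
}

def migrate_service(old_config, service):
    prefix = f"POD_AUTOSCALING_{service.upper()}_"
    return {
        new_key: old_config[prefix + old_suffix]
        for old_suffix, new_key in OLD_TO_NEW.items()
        if prefix + old_suffix in old_config
    }

# migrate_service(old_config, "lms") yields the dict shown in the example above.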
It is important to mention that the pod-autoscaling-hpa and pod-autoscaling-vpa patches were removed in the Redwood release, since they are no longer required in the new HPA/VPA configuration model.
Notes to keep in mind when using this plugin:
The default HPA values in this plugin can work well for small installations. However, depending on your use case, you will need to tune them to get the best performance.
The VPA entities are configured to only display recommendations on the right amount of resources to allocate for every workload, not to directly modify the resources allocated to a workload. This is because using HPA and VPA together in automatic UpdateMode is not recommended. The best practice is to take the recommendations from the VPA and, based on them, adjust the HPA values for the workloads in order to get the most value out of these autoscaling tools.
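To read those VPA recommendations, one option (a sketch using the official kubernetes Python client; the openedx namespace is Tutor's default K8S_NAMESPACE and the lms object name is an assumption, so check your cluster for the real names) is:

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# VPA objects are custom resources under the autoscaling.k8s.io API group.
vpa = api.get_namespaced_custom_object(
    group="autoscaling.k8s.io",
    version="v1",
    namespace="openedx",  # Tutor's default; adjust if you customized it
    plural="verticalpodautoscalers",
    name="lms",  # assumed object name
)

# Print the recommended resource targets per container.
for rec in vpa["status"]["recommendation"]["containerRecommendations"]:
    print(rec["containerName"], rec["target"])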
Usage
tutor plugins enable pod-autoscaling
License
This software is licensed under the terms of the AGPLv3.