Official Python package for the Judoscale autoscaler
Judoscale
This is the official Python adapter for Judoscale. You can use Judoscale without it, but the adapter gives you request queue time metrics and, for supported job processors, job queue time metrics.
We recommend installing the web framework and/or background job library support as "extras" on the judoscale PyPI package. This way, the check that your web framework or background task processing library is supported happens at dependency resolution time.
Supported web frameworks
Supported job processors
Using Judoscale with Django
Install Judoscale for Django with:
$ pip install 'judoscale[django]'
Add the Judoscale app to settings.py:
INSTALLED_APPS = [
    "judoscale.django",
    # ... other apps
]
This sets up the Judoscale middleware to capture request queue times.
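Under the hood, request queue time is derived from a timestamp that the router or load balancer stamps on each incoming request, typically in an X-Request-Start header. The sketch below illustrates the idea only; it is not the adapter's actual implementation, and the "t=" prefix and millisecond format are assumptions based on common router behaviour:

```python
def request_queue_time_ms(x_request_start: str, now_ms: float) -> float:
    # Routers commonly send X-Request-Start as milliseconds since the
    # epoch, optionally prefixed with "t=". Other routers may use other
    # units; this sketch assumes milliseconds.
    start_ms = float(x_request_start.removeprefix("t="))
    # Clamp at zero to guard against clock skew between router and app server.
    return max(now_ms - start_ms, 0.0)
```

A request stamped at t=1700000000000 that reaches the app 250 ms later has a queue time of 250 ms.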
Optionally, you can customize Judoscale in settings.py:
JUDOSCALE = {
    # Log level defaults to ENV["LOG_LEVEL"] or "INFO".
    "LOG_LEVEL": "DEBUG",
}
Once deployed, you will see your request queue time metrics available in the Judoscale UI.
Using Judoscale with Flask
Install Judoscale for Flask with:
$ pip install 'judoscale[flask]'
Judoscale's Flask support is packaged as a Flask extension. Import the extension class and use it as you normally would in a Flask application:
# app.py
from judoscale.flask import Judoscale

# If your app is a top-level global
app = Flask("MyFlaskApp")
app.config.from_object('...')  # or however you configure your app
judoscale = Judoscale(app)

# If your app uses the application factory pattern
judoscale = Judoscale()

def create_app():
    app = Flask("MyFlaskApp")
    app.config.from_object('...')  # or however you configure your app
    judoscale.init_app(app)
    return app
This sets up the Judoscale extension to capture request queue times.
Optionally, you can override Judoscale's own configuration via your application's configuration dictionary. The Judoscale Flask extension looks for a top-level "JUDOSCALE" key in app.config, which should be a dictionary, and which the extension uses to configure itself as soon as judoscale.init_app() is called.
JUDOSCALE = {
    # Log level defaults to ENV["LOG_LEVEL"] or "INFO".
    "LOG_LEVEL": "DEBUG",
}
Note the official recommendations for configuring Flask.
Using Judoscale with FastAPI
Install Judoscale for FastAPI with:
$ pip install 'judoscale[asgi]'
Since FastAPI uses Starlette, an ASGI framework, the integration is packaged into ASGI middleware. Import the middleware class and register it with your FastAPI app:
# app.py
from judoscale.asgi.middleware import FastAPIRequestQueueTimeMiddleware

# If your app is a top-level global
app = FastAPI()
app.add_middleware(FastAPIRequestQueueTimeMiddleware)

# If your app uses the application factory pattern
def create_app():
    app = FastAPI()
    app.add_middleware(FastAPIRequestQueueTimeMiddleware)
    return app
This sets up the Judoscale extension to capture request queue times.
Optionally, you can override Judoscale's configuration by passing in extra configuration to the add_middleware method:
app.add_middleware(FastAPIRequestQueueTimeMiddleware, extra_config={"LOG_LEVEL": "DEBUG"})
Other ASGI frameworks
Judoscale also provides middleware classes for Starlette and Quart. Import them as follows:
# For Starlette, if you're using Starlette directly, without FastAPI
from judoscale.asgi.middleware import StarletteRequestQueueTimeMiddleware
# For Quart
from judoscale.asgi.middleware import QuartRequestQueueTimeMiddleware
If your app uses a framework for which we have not provided a middleware class, but one that implements the ASGI spec, you can easily create your own version of the request queue time middleware.
from judoscale.asgi.middleware import RequestQueueTimeMiddleware

class YourFrameworkRequestQueueTimeMiddleware(RequestQueueTimeMiddleware):
    # NOTE: The `platform` class variable value should be the package name
    # of the web framework you're using. It is used to look up package
    # metadata for reporting back to the Judoscale API.
    platform = "your_framework"
Then register YourFrameworkRequestQueueTimeMiddleware with your application like you normally would.
Using Judoscale with Celery and Redis
Install Judoscale for Celery with:
$ pip install 'judoscale[celery-redis]'
:warning: NOTE 1: The Judoscale Celery integration currently only works with the Redis broker. The minimum supported Redis server version is 6.0.
:warning: NOTE 2: Using task priorities is currently not supported by judoscale. You can still use task priorities, but judoscale won't see or report metrics on any queues other than the default, unprioritised queue.
Judoscale can automatically scale the number of Celery workers based on the queue latency (the age of the oldest pending task in the queue).
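As a sketch of what "queue latency" means here (illustrative only, not the adapter's actual code): given the enqueue timestamps of the tasks currently pending in a queue, latency is the age of the oldest one.

```python
def queue_latency_s(enqueued_at: list[float], now: float) -> float:
    # Queue latency is the age of the oldest pending task;
    # an empty queue has zero latency.
    if not enqueued_at:
        return 0.0
    return max(now - min(enqueued_at), 0.0)
```

For example, tasks enqueued at t=100s, 105s, and 110s, observed at t=112s, give a queue latency of 12 seconds.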
Setting up the integration
To use the Celery integration, import judoscale_celery and call it with the Celery app instance. judoscale_celery should be called after you have set up and configured the Celery instance.
from celery import Celery
from judoscale.celery import judoscale_celery
celery_app = Celery(broker="redis://localhost:6379/0")
# Further setup
# celery_app.conf.update(...)
# ...
judoscale_celery(celery_app)
This sets up Judoscale to periodically calculate and report queue latency for each Celery queue.
If you need to change the Judoscale integration configuration, you can pass a dictionary of Judoscale configuration options to judoscale_celery to override the default Judoscale config variables:
judoscale_celery(celery_app, extra_config={"LOG_LEVEL": "DEBUG"})
An example configuration dictionary accepted by extra_config:
{
    "LOG_LEVEL": "INFO",
    # In addition to the global configuration options for the Judoscale
    # integration above, you can also specify the following configuration
    # options for the Celery integration.
    "CELERY": {
        # Enable (default) or disable the Celery integration
        "ENABLED": True,
        # Report metrics on up to MAX_QUEUES queues.
        # The list of discovered queues is sorted by the length
        # of the queue name (shortest first) and metrics are
        # reported for the first MAX_QUEUES queues.
        # Defaults to 20.
        "MAX_QUEUES": 20,
        # Specify a list of known queues to report metrics for.
        # MAX_QUEUES is still honoured.
        # Defaults to an empty list (report metrics for discovered queues).
        "QUEUES": [],
        # Enable or disable (default) tracking how many jobs are currently being
        # processed in each queue.
        # This allows Judoscale to avoid downscaling workers that are executing jobs.
        # See documentation: https://judoscale.com/docs/long-running-jobs
        # NOTE: This option requires workers to have unique names. If you are running
        # multiple Celery workers on the same machine, make sure to give each
        # worker a distinct name.
        # More information: https://docs.celeryq.dev/en/stable/userguide/workers.html#starting-the-worker
        "TRACK_BUSY_JOBS": False,
    },
}
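The interplay between QUEUES and MAX_QUEUES described in the comments above can be sketched roughly like this (illustrative only; whether an explicit QUEUES list is also sorted by name length is an assumption):

```python
def queues_to_report(discovered, known_queues, max_queues=20):
    # If QUEUES is non-empty, report only on those; otherwise fall back
    # to the discovered queues. Sort by queue-name length (shortest
    # first) and cap the result at MAX_QUEUES.
    candidates = known_queues if known_queues else discovered
    return sorted(candidates, key=len)[:max_queues]
```

For example, with discovered queues "low", "emails", and "default" and MAX_QUEUES set to 2, metrics would be reported for "low" and "emails" only.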
:warning: NOTE: Calling judoscale_celery turns on sending task-sent events. This is required for the Celery integration with Judoscale to work.
Judoscale with Celery and Flask
Depending on how you've structured your Flask app, you should call judoscale_celery after your application has finished configuring the Celery app instance. If you have followed the Flask guide in the Flask documentation, the easiest place to initialize the Judoscale integration is in the application factory:
def create_app():
    app = Flask(__name__)
    app.config.from_object(...)  # or however you configure your app
    celery_app = celery_init_app(app)

    # Initialize the Judoscale integration
    judoscale_celery(celery_app, extra_config=app.config["JUDOSCALE"])
    return app
Judoscale with Celery and Django
If you have followed the Django guide in the Celery documentation, you should have a module where you initialize the Celery app instance, auto-discover tasks, etc. You should call judoscale_celery after you have configured the Celery app instance:
from celery import Celery
from django.conf import settings
from judoscale.celery import judoscale_celery
app = Celery()
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()
# Initialize the Judoscale integration
judoscale_celery(app, extra_config=settings.JUDOSCALE)
Using Judoscale with Dramatiq and Redis
Install Judoscale for Dramatiq with:
$ pip install 'judoscale[dramatiq-redis]'
:warning: NOTE: The Judoscale Dramatiq integration currently only works with the Redis broker.
Judoscale can automatically scale the number of Dramatiq workers based on the queue latency (the age of the oldest pending task in the queue).
Setting up the integration
To use the Dramatiq integration, import judoscale_dramatiq and call it with the Dramatiq broker instance. judoscale_dramatiq should be called after you have configured the broker.
import dramatiq
from dramatiq.brokers.redis import RedisBroker
from judoscale.dramatiq import judoscale_dramatiq
broker = RedisBroker(url="redis://localhost:6379/0")
dramatiq.set_broker(broker)
judoscale_dramatiq(broker)
This sets up Judoscale to periodically calculate and report queue latency for each Dramatiq queue.
If you need to change the Judoscale integration configuration, you can pass a dictionary of Judoscale configuration options to judoscale_dramatiq to override the default Judoscale config variables:
judoscale_dramatiq(broker, extra_config={"LOG_LEVEL": "DEBUG"})
An example configuration dictionary accepted by extra_config:
{
    "LOG_LEVEL": "INFO",
    # In addition to the global configuration options for the Judoscale
    # integration above, you can also specify the following configuration
    # options for the Dramatiq integration.
    "DRAMATIQ": {
        # Enable (default) or disable the Dramatiq integration
        "ENABLED": True,
        # Report metrics on up to MAX_QUEUES queues.
        # The list of discovered queues is sorted by the length
        # of the queue name (shortest first) and metrics are
        # reported for the first MAX_QUEUES queues.
        # Defaults to 20.
        "MAX_QUEUES": 20,
        # Specify a list of known queues to report metrics for.
        # MAX_QUEUES is still honoured.
        # Defaults to an empty list (report metrics for discovered queues).
        "QUEUES": [],
    },
}
:warning: NOTE: TRACK_BUSY_JOBS is not yet supported for Dramatiq. Dramatiq does not provide a built-in way to query currently-executing tasks.
Judoscale with Dramatiq and Django
It is usually recommended to use django-dramatiq to integrate Dramatiq with Django. It configures the broker via Django settings and provides manage.py rundramatiq to run workers with the full Django setup.
Configure the broker and Judoscale in your settings.py:
INSTALLED_APPS = [
    "judoscale.django",
    "django_dramatiq",
    # ... other apps
]

DRAMATIQ_BROKER = {
    "BROKER": "dramatiq.brokers.redis.RedisBroker",
    "OPTIONS": {"url": "redis://localhost:6379/0"},
}

JUDOSCALE = {
    "DRAMATIQ": {
        "ENABLED": True,
    },
}
Then initialize the Judoscale Dramatiq integration in an AppConfig.ready() method so it runs at startup for both web and worker processes:
# myapp/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = "myapp"

    def ready(self):
        import dramatiq
        from django.conf import settings
        from judoscale.dramatiq import judoscale_dramatiq

        broker = dramatiq.get_broker()
        judoscale_dramatiq(broker, extra_config=settings.JUDOSCALE)
:warning: NOTE: Run your workers with manage.py rundramatiq (provided by django-dramatiq) instead of the plain dramatiq CLI. This ensures Django is fully initialized and AppConfig.ready() is called.
Using Judoscale with RQ
Install Judoscale for RQ with:
$ pip install 'judoscale[rq]'
Judoscale can automatically scale the number of RQ workers based on the queue latency (the age of the oldest pending task in the queue).
Setting up the integration
To use the RQ integration, import judoscale_rq and call it with an instance of Redis pointing to the same Redis database that RQ uses.
from redis import Redis
from judoscale.rq import judoscale_rq
redis = Redis(...)
judoscale_rq(redis)
This sets up Judoscale to periodically calculate and report queue latency for each RQ queue.
If you need to change the Judoscale integration configuration, you can pass a dictionary of Judoscale configuration options to judoscale_rq to override the default Judoscale config variables:
judoscale_rq(redis, extra_config={"LOG_LEVEL": "DEBUG"})
An example configuration dictionary accepted by extra_config:
{
    "LOG_LEVEL": "INFO",
    # In addition to the global configuration options for the Judoscale
    # integration above, you can also specify the following configuration
    # options for the RQ integration.
    "RQ": {
        # Enable (default) or disable the RQ integration
        "ENABLED": True,
        # Report metrics on up to MAX_QUEUES queues.
        # The list of discovered queues is sorted by the length
        # of the queue name (shortest first) and metrics are
        # reported for the first MAX_QUEUES queues.
        # Defaults to 20.
        "MAX_QUEUES": 20,
        # Specify a list of known queues to report metrics for.
        # MAX_QUEUES is still honoured.
        # Defaults to an empty list (report metrics for discovered queues).
        "QUEUES": [],
        # Enable or disable (default) tracking how many jobs are currently being
        # processed in each queue.
        # This allows Judoscale to avoid downscaling workers that are executing jobs.
        # See documentation: https://judoscale.com/docs/long-running-jobs
        "TRACK_BUSY_JOBS": False,
    },
}
Judoscale with RQ and Flask
The recommended way to initialize Judoscale for RQ is in the application factory:
judoscale = Judoscale()

def create_app():
    app = Flask(__name__)
    app.config.from_object("...")  # or however you configure your application
    app.redis = Redis()

    # Initialize the Judoscale integration for Flask
    judoscale.init_app(app)
    # Initialize the Judoscale integration for RQ
    judoscale_rq(app.redis)
    return app
Then make sure your worker script creates an app, which initializes the Judoscale integration for RQ. Although not required, it's useful to run the worker within the Flask app context. If you have followed the RQ on Heroku pattern for setting up your RQ workers on Heroku, your worker script should look something like this:
from rq.worker import HerokuWorker as Worker

app = create_app()
worker = Worker(..., connection=app.redis)
with app.app_context():
    worker.work()
See the run-worker.py script for reference.
Judoscale with RQ and Django
The Judoscale integration for RQ is packaged into a separate Django app.
You should already have judoscale.django in your INSTALLED_APPS. Next, add the RQ integration app judoscale.rq:
INSTALLED_APPS = [
    "judoscale.django",
    "judoscale.rq",
    # ... other apps
]
By default, judoscale.rq will connect to the Redis instance as specified by the REDIS_URL environment variable. If that is not suitable, you can supply Redis connection information in the JUDOSCALE configuration dictionary under the "REDIS" key.
Accepted formats are:
- a dictionary containing a single key "URL" pointing to a Redis server URL, or;
- a dictionary of configuration options corresponding to the keyword arguments of the Redis constructor.
# Configuring with a Redis server URL
JUDOSCALE = {
    "REDIS": {
        "URL": os.getenv("REDISTOGO_URL"),
        # If you are running on Heroku and using Heroku Data for Redis Premium
        "SSL_CERT_REQS": None,
    }
}

# Configuring as kwargs to Redis(...)
JUDOSCALE = {
    "REDIS": {
        "HOST": "localhost",
        "PORT": 6379,
        "DB": 0,
        # If you are running on Heroku and using Heroku Data for Redis Premium
        "SSL_CERT_REQS": None,
    }
}
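For illustration, here is roughly how such a "REDIS" dictionary could be mapped onto redis-py keyword arguments. This is a sketch under the assumption that keys are simply lowercased; the adapter's actual mapping may differ:

```python
def redis_kwargs(config: dict) -> dict:
    # Lowercase the uppercased config keys so they line up with the
    # keyword arguments of redis.Redis (host, port, db, ssl_cert_reqs, ...).
    return {key.lower(): value for key, value in config.items()}
```

A dictionary with a "URL" key would instead go through Redis.from_url(...); the kwargs form maps directly onto Redis(**redis_kwargs(config)).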
:warning: NOTE: If you are running on Heroku and using any of the Premium plans for Heroku Data for Redis, you will have to turn off SSL certificate verification as per https://help.heroku.com/HC0F8CUS/redis-connection-issues.
If you are using Django-RQ, you can also pull configuration from RQ_QUEUES directly:
RQ_QUEUES = {
    "high_priority": {
        "HOST": "...",
        "PORT": 6379,
        "DB": 0,
    },
}

JUDOSCALE = {
    # ... other configuration options
    "REDIS": RQ_QUEUES["high_priority"],
}
:warning: NOTE: Django-RQ enables configuring RQ such that different queues and workers use different Redis instances. Judoscale currently only supports connecting to and monitoring queue latency on a single Redis instance.
Debugging & troubleshooting
If Judoscale is not recognizing your adapter installation, or if you're not seeing the metrics you expect in Judoscale, check the logging output. Here's how to do that on Heroku.
First, enable debug logging:
heroku config:set JUDOSCALE_LOG_LEVEL=debug
Then, tail your logs while your app initializes:
heroku logs --tail | grep Judoscale
You should see Judoscale collecting and reporting metrics every 10 seconds from every running process. If the issue is not clear from the logs, email help@judoscale.com for support. Please include any logging you've collected and any other behavior you've observed.
Development
This repo includes a sample-apps directory containing apps you can run locally. These apps use the judoscale adapter, but they override API_BASE_URL so they're not connected to the real Judoscale API. Instead, they post API requests to https://requestinspector.com so you can observe the API behavior.
See the README in a sample app for details on how to set it up and run locally.
Contributing
judoscale uses Poetry for managing dependencies and packaging the project. Head over to the installation instructions and install Poetry, if needed.
Clone the repo with
$ git clone git@github.com:judoscale/judoscale-python.git
$ cd judoscale-python
Verify that you are on a recent version of Poetry:
$ poetry --version
Poetry (version 1.8.4)
Install dependencies with Poetry and activate the virtualenv
$ poetry install --all-extras
$ poetry shell
Run tests with
$ ./bin/test
# or
$ poetry run pytest
Note: when debugging with breakpoint(), pass this extra argument:
$ poetry run pytest --capture=no
Some sample apps (notably django, fastapi, and flask) have framework-specific tests. Run them from within the sample app folder with:
$ ./bin/test
Run both the package and sample-app tests with:
$ ./bin/test_all