Proxy batch job requests to Kubernetes.
kbatch-proxy
A simple Kubernetes proxy, allowing JupyterHub users to make requests to the Kubernetes API without having direct access to it.
Motivation
We want kbatch users to be able to create Kubernetes Jobs, access logs, etc., but we
- don't want to grant them direct access to the Kubernetes API
- don't want to maintain a separate web application, with any state that's independent of Kubernetes
Enter kbatch-proxy
Design
A simple FastAPI application that sits between kbatch users and the Kubernetes API. It's expected that the kbatch-proxy application has access to the Kubernetes API, with permission to create namespaces, jobs, etc. This will often be run as a JupyterHub service.
Users will make requests to kbatch-proxy. Upon each request we will
- Validate that the user is authenticated with JupyterHub (by checking the Bearer token)
- Validate that the data the user is submitting or requesting meets our security model
- Make the request to the Kubernetes API on behalf of the user
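A minimal sketch of that flow, assuming FastAPI and httpx (this is illustrative, not the actual kbatch-proxy code: the /jobs/ route and the token check against the Hub's /user endpoint are assumptions, while JUPYTERHUB_API_URL is the value configured in the deployment section below):

# Illustrative only -- not the actual kbatch-proxy implementation.
import os

import httpx
from fastapi import Depends, FastAPI, HTTPException, Request

app = FastAPI()

async def current_user(request: Request) -> dict:
    """Reject the request unless the Bearer token belongs to a JupyterHub user."""
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    if not token:
        raise HTTPException(status_code=401, detail="Missing token")
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            os.environ["JUPYTERHUB_API_URL"].rstrip("/") + "/user",
            headers={"Authorization": f"token {token}"},
        )
    if resp.status_code != 200:
        raise HTTPException(status_code=401, detail="Invalid token")
    return resp.json()

@app.post("/jobs/")
async def submit_job(job: dict, user: dict = Depends(current_user)):
    # 1. Check the submitted job against the security model (next section).
    # 2. Forward it to the Kubernetes API on behalf of the user.
    ...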
Security model
This remains to be proven effective, but the hope is to let users do whatever they want in their own namespace and nothing outside of it.
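For example, job submissions could be rejected whenever they target any namespace other than the one derived from the user's JupyterHub name. The check below is a hypothetical illustration of that rule (the kbatch-<username> naming scheme is an assumption, not necessarily what kbatch-proxy does):

from fastapi import HTTPException

def validate_namespace(requested_namespace: str, username: str) -> None:
    # Hypothetical rule: each user may only act in the namespace named after them.
    allowed = f"kbatch-{username.lower()}"
    if requested_namespace != allowed:
        raise HTTPException(
            status_code=403,
            detail=f"{requested_namespace!r} is outside your namespace {allowed!r}",
        )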
Container images
We provide container images at https://github.com/kbatch-dev/kbatch/pkgs/container/kbatch-proxy.
$ docker pull ghcr.io/kbatch-dev/kbatch-proxy:latest
Deployment
kbatch-proxy is most easily deployed as a JupyterHub service using Helm. A few values need to be configured:
# file: config.yaml
app:
  jupyterhub_api_token: "<jupyterhub-api-token>"
  jupyterhub_api_url: "https://<jupyterhub-url>/hub/api/"
  extra_env:
    KBATCH_PREFIX: "/services/kbatch"

# image:
#   tag: "0.1.4"  # you likely want to pin the latest here.
Note: we don't currently publish a Helm chart, so you have to git clone the kbatch repository.
From the kbatch/kbatch-proxy directory, use helm to install the chart:
$ helm upgrade --install kbatch-proxy ../helm/kbatch-proxy/ \
    -n "<namespace>" \
    -f config.yaml
You'll need to configure kbatch as a JupyterHub service. This example makes it available at /services/kbatch (this should match KBATCH_PREFIX above):
jupyterhub:
  hub:
    services:
      kbatch:
        admin: true
        api_token: "<jupyterhub-api-token>"  # match the api token above
        url: "http://kbatch-proxy.<kbatch-namespace>.svc.cluster.local"
That example relies on kbatch being deployed to the same Kubernetes cluster as JupyterHub, so JupyterHub can proxy requests to kbatch-proxy using Kubernetes' internal DNS. The namespace in that URL should match the namespace where kbatch was deployed.
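Once both sides are configured, you can sanity-check the wiring from inside a user's notebook server by calling the service through the JupyterHub proxy with the user's API token. This is just a connectivity probe, not an official client; the exact paths served under /services/kbatch are defined by the kbatch-proxy API:

import os
import requests

# JUPYTERHUB_API_TOKEN is injected into single-user servers by JupyterHub.
# Replace <jupyterhub-url> with your hub's address; the /services/kbatch
# prefix matches KBATCH_PREFIX configured above.
resp = requests.get(
    "https://<jupyterhub-url>/services/kbatch/",
    headers={"Authorization": f"Bearer {os.environ['JUPYTERHUB_API_TOKEN']}"},
)
print(resp.status_code)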
Dask Gateway Integration
If your JupyterHub is deployed with Dask Gateway, you might want to set a few additional environment variables in the jobs so that they behave similarly to the singleuser notebook pod.
app:
  extra_env:
    KBATCH_JOB_EXTRA_ENV: |
      {
        "DASK_GATEWAY__AUTH__TYPE": "jupyterhub",
        "DASK_GATEWAY__CLUSTER__OPTIONS__IMAGE": "{JUPYTER_IMAGE_SPEC}",
        "DASK_GATEWAY__ADDRESS": "https://<JUPYTERHUB_URL>/services/dask-gateway",
        "DASK_GATEWAY__PROXY_ADDRESS": "gateway://<DASK_GATEWAY_ADDRESS>:80"
      }
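With those variables present in the job's environment, dask-gateway reads the gateway address, proxy address, JupyterHub auth, and default image from its configuration system, so code running inside a kbatch job can connect without passing any arguments (assuming dask-gateway is installed in the job image):

from dask_gateway import Gateway

# Address, proxy address, auth type, and image all come from the
# DASK_GATEWAY__* environment variables set above.
gateway = Gateway()
cluster = gateway.new_cluster()
client = cluster.get_client()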