pytest-kubernetes
pytest-kubernetes is a lightweight pytest plugin that makes managing (local) Kubernetes clusters a breeze. You can easily spin up a Kubernetes cluster with one pytest fixture and remove it again.
The fixture comes with some simple functions to interact with the cluster, for example kubectl(...), which lets you run typical kubectl commands against this cluster without worrying
about the kubeconfig on the test machine.
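The dict output that kubectl(...) returns can be pictured as plain JSON parsing; the sketch below is illustrative only (the RAW payload and to_dict helper are not part of the plugin's API), assuming the plugin effectively runs kubectl with JSON output and parses the result:

```python
import json

# A canned payload of the kind `kubectl get ns default -o json` would emit.
RAW = '{"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": "default"}}'

def to_dict(raw_output: str) -> dict:
    """Parse kubectl JSON output into a Python dict."""
    return json.loads(raw_output)

result = to_dict(RAW)
print(result["metadata"]["name"])  # → default
```

With the real fixture you simply call k8s.kubectl([...]) and receive the dict directly.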
Features:
- Set up and tear down (local) Kubernetes clusters with minikube, k3d and kind
- Configure the cluster to recreate for each test case (default), or keep it across multiple test cases
- Automatic management of the kubeconfig
- Simple functions to run kubectl commands (with dict output), reading logs and load custom container images
- Wait for certain conditions in the cluster
- Port forward Kubernetes-based services (using kubectl port-forward) easily during a test case
- Management utils for custom pytest-fixtures (for example pre-provisioned clusters)
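The port-forwarding feature wraps kubectl port-forward; the plugin's actual port_forwarding(...) signature may differ, but a context-manager sketch of the underlying idea looks like this (all names here are illustrative, not the plugin's API):

```python
import contextlib
import subprocess

def forward_cmd(target: str, local_port: int, remote_port: int, kubeconfig: str) -> list:
    # Build the kubectl port-forward invocation for the managed cluster's kubeconfig.
    return [
        "kubectl", "--kubeconfig", kubeconfig,
        "port-forward", target, f"{local_port}:{remote_port}",
    ]

@contextlib.contextmanager
def port_forward(target: str, local_port: int, remote_port: int, kubeconfig: str):
    # Start the forwarder for the duration of the with-block, then tear it down.
    proc = subprocess.Popen(forward_cmd(target, local_port, remote_port, kubeconfig))
    try:
        yield local_port
    finally:
        proc.terminate()
        proc.wait()
```

With the real fixture you would use k8s.port_forwarding(...) instead; this sketch only shows the pattern.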
Installation
This plugin can be installed from PyPI:
pip install pytest-kubernetes
poetry add -D pytest-kubernetes
Note that this package provides entrypoint hooks to be automatically loaded with pytest.
Requirements
pytest-kubernetes expects the following components to be available on the test machine:
- kubectl
- minikube (optional, for minikube-based clusters)
- k3d (optional, for k3d-based clusters)
- kind (optional, for kind-based clusters)
- Docker (optional, for Docker-based Kubernetes clusters)
Please make sure they are installed to run pytest-kubernetes properly.
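If you want your test session to fail fast when required binaries are missing, a small stdlib helper (not part of pytest-kubernetes; the function name is illustrative) could look like this:

```python
import shutil

def missing_tools(tools) -> list:
    """Return the subset of tool names not found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

# Only kubectl is always required; the provider CLIs are optional.
required = ["kubectl"]
optional = ["minikube", "k3d", "kind", "docker"]
print(missing_tools(required + optional))
```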
Reference
Fixture
k8s
The k8s fixture provides access to an automatically selected Kubernetes provider (depending on the availability on the host). The priority is: k3d, kind, minikube-docker and minikube-kvm2.
The fixture passes a manager object of type AClusterManager.
It provides the following interface:
- kubectl(...): Execute kubectl command against this cluster (defaults to dict as return format)
- apply(...): Apply resources to this cluster, either from a YAML file or a Python dict
- load_image(...): Load a container image into this cluster
- wait(...): Wait for a target and a condition
- port_forwarding(...): Port forward a target
- logs(...): Get the logs of a pod
- version(): Get the Kubernetes version of this cluster
- create(...): Create this cluster (pass special cluster arguments to the CLI command with options: List[str])
- delete(): Delete this cluster
- reset(): Delete this cluster (if it exists) and create it again
The interface provides proper typing and should be easy to work with.
Example
def test_a_feature_with_k3d(k8s: AClusterManager):
    k8s.create()
    k8s.apply(
        {
            "apiVersion": "v1",
            "kind": "ConfigMap",
            "data": {"key": "value"},
            "metadata": {"name": "myconfigmap"},
        },
    )
    k8s.apply("./dependencies.yaml")
    k8s.load_image("my-container-image:latest")
    k8s.kubectl(
        [
            "run",
            "test",
            "--image",
            "my-container-image:latest",
            "--restart=Never",
            "--image-pull-policy=Never",
        ]
    )
This cluster will be deleted once the test case is over.
Please note that you need to set "--image-pull-policy=Never" for images that you loaded into the cluster via the
k8s.load_image(name: str) function (see the example above).
k8s_manager
The k8s_manager fixture provides a convenient factory method, similar to the select_provider_manager util (see below), to construct prepared Kubernetes clusters.
k8s_manager(name: Optional[str] = None) -> Type[AClusterManager]
In contrast to select_provider_manager, k8s_manager is sensitive to pytest arguments from the command line or
configuration file. It allows overriding the standard configuration via the --k8s-kubeconfig-override argument
to use an external cluster for the test run, which makes development a breeze.
Example
The following recipe:
- Checks if the cluster is already running (created externally, for example via k3d cluster create --config k3d_cluster.yaml)
- Creates a k3d cluster if it is not running
- Prepares a namespace, purging existing objects if present
- Yields the fixture to the test case, or a subrequest fixture
- Purges objects if the cluster was not created during this test run; deletes the cluster in case it was created
This is used in Gefyra.
@pytest.fixture(scope="module")
def k3d(k8s_manager):
    k8s: AClusterManager = k8s_manager("k3d")("gefyra")
    # ClusterOptions() forces pytest-kubernetes to always write a new kubeconfig file to disk
    cluster_exists = k8s.ready(timeout=1)
    if not cluster_exists:
        k8s.create(
            ClusterOptions(api_version="1.29.5"),
            options=[
                "--agents",
                "1",
                "-p",
                "8080:80@agent:0",
                "-p",
                "31820:31820/UDP@agent:0",
                "--agents-memory",
                "8G",
            ],
        )
    if "gefyra" not in k8s.kubectl(["get", "ns"], as_dict=False):
        k8s.kubectl(["create", "ns", "gefyra"])
        k8s.wait("ns/gefyra", "jsonpath='{.status.phase}'=Active")
    else:
        purge_gefyra_objects(k8s)
    os.environ["KUBECONFIG"] = str(k8s.kubeconfig)
    yield k8s
    if cluster_exists:
        # delete existing bridges
        purge_gefyra_objects(k8s)
        k8s.kubectl(["delete", "ns", "gefyra"], as_dict=False)
    else:
        # we delete this cluster only when created during this run
        k8s.delete()
This example allows running test cases against either an automatic ephemeral cluster or a "long-living" cluster.
To run local tests without losing time on the set up and tear down of the cluster, you can follow these steps:
- Create a local k3d cluster, for example from a config file: k3d cluster create --config k3d_cluster.yaml
- Write the kubeconfig to a file: k3d kubeconfig get gefyra > mycluster.yaml
- Run the tests with an override: pytest --k8s-kubeconfig-override mycluster.yaml --k8s-cluster-name gefyra --k8s-provider k3d -s -x tests/
Marks
pytest-kubernetes uses pytest marks to specify the cluster configuration for a test case.
Currently the following settings are supported:
- provider (str): request a specific Kubernetes provider for the test case
- cluster_name (str): request a specific cluster name
- keep (bool): keep the cluster across multiple test cases
Example
@pytest.mark.k8s(provider="minikube", cluster_name="test1", keep=True)
def test_a_feature_in_minikube(k8s: AClusterManager):
...
Utils
To write custom Kubernetes-based fixtures in your project you can make use of the following util functions.
select_provider_manager
This function returns a subclass of AClusterManager that has not yet been instantiated or wrapped in a fixture.
Remark: Do not use this if you can use the k8s_manager fixture instead (see above).
select_provider_manager(name: Optional[str] = None) -> Type[AClusterManager]
The returned class gets called with the init parameters of AClusterManager, i.e. cluster_name: str.
Example
@pytest.fixture(scope="session")
def k8s_with_workload(request):
    cluster = select_provider_manager("k3d")("my-cluster")
    # if minikube should be used
    # cluster = select_provider_manager("minikube")("my-cluster")
    cluster.create()
    # init the cluster with a workload
    cluster.apply("./fixtures/hello.yaml")
    cluster.wait("deployments/hello-nginxdemo", "condition=Available=True")
    yield cluster
    cluster.delete()
In this example, the cluster remains active for the entire session and is only deleted once pytest is done.
Note the yield notation, which is preferred by pytest for expressing clean-up tasks in a fixture.
Cluster configs
You can pass a cluster config file to the create method of a cluster:
cluster = select_provider_manager("k3d")("my-cluster")
# pass a custom cluster config file to this k3d cluster
cluster.create(
    cluster_options=ClusterOptions(
        cluster_config=Path("my_cluster_config.yaml")
    )
)
For the different providers you have to submit different kinds of configuration files.
- kind: https://kind.sigs.k8s.io/docs/user/configuration/#getting-started
- k3d: https://k3d.io/v5.1.0/usage/configfile/
- minikube: Has to be a custom YAML file that corresponds to the minikube config command. An example can be found in the fixtures directory of this repository.
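As a rough illustration, such a file might map minikube config keys to values; the keys below are real minikube config settings, but the exact file schema is an assumption here, so check the repository's fixtures directory for the authoritative example:

```yaml
# Hypothetical minikube config file — key names mirror `minikube config` settings
cpus: 2
memory: 4096
driver: docker
```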
Special cluster options
You can pass additional options via the options: List[str] keyword argument of the create(options=...) function when creating the cluster, like so:
cluster = select_provider_manager("k3d")("my-cluster")
# bind ports of this k3d cluster
cluster.create(options=["--agents", "1", "-p", "8080:80@agent:0", "-p", "31820:31820/UDP@agent:0"])
Examples
Please find more examples in tests/vendor.py in this repository. These test cases are written as users of pytest-kubernetes would write test cases in their projects.