Set up local and production hybrid clusters, using Ansible and Vagrant
General
k8s-setup provides instant access to a configured Kubernetes cluster, based on kubeadm. It allows you to provision the cluster to a local machine as well as to production. It is built with Ansible, using Vagrant for the local virtual setup in VirtualBox.
Note: Currently only Linux is supported for local VM deployment, but the design allows for Windows support; it just needs to be implemented.
How to use
Install by pip
Every released version has a wheel package pushed to PyPI. You can install it with
pip install k8s-setup
Please note that, depending on your distribution, you may need to extend your PATH environment
variable to include $HOME/.local/bin. This is not only required to start k8s-setup,
but also for k8s-setup to run successfully.
See the pip issue "pip install --user should check that the user's $PATH is correct".
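A minimal sketch of that PATH change for a bash shell (the exact profile file depends on your distribution):
# Add pip's user install directory to PATH (e.g. in ~/.bashrc)
export PATH="$HOME/.local/bin:$PATH"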
Install by git
- Clone the repository
- Install Python (>=2.7) and pip
- Install the package in editable mode: pip install GitPython && pip install --editable .
- Install Vagrant (tested against Vagrant 2.2.4)
- Install VirtualBox (tested against VirtualBox 6.0.12)
- Ensure 'curl' is installed
Steps 4 and 5 are only required if you want to set up a local virtual cluster.
You need a working VirtualBox installation. There may be issues with EFI Secure Boot
or with missing kernel modules. A 'hostonlyif create' error reported by vagrant can be
resolved with modprobe vboxnetadp. For the error "Failed to open/create the internal network
'HostInterfaceNetworking-vboxnet0'", running modprobe vboxnetflt helped.
As this was a fresh installation of VirtualBox, a reboot may also fix this. It would be nice to
test this further.
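The module loads from above as one-liners (assuming root privileges via sudo):
# Load the VirtualBox host-only network kernel modules mentioned above
sudo modprobe vboxnetadp
sudo modprobe vboxnetflt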
Provide the configuration
Local Deployment
For local VM deployments, the default 'vagrant.yml' file should work. You don't need to provide a custom configuration if you just want a VM cluster with a single control plane node and a single worker node.
To access the cluster from your machine, you should have a hosts record for the
apiserver. The IP is configured by the k8s_apiserver_vip setting; the hostname is
constructed from the k8s_apiserver_hostname and k8s_cluster_dnsname settings.
To generate the correct /etc/hosts file, you can run k8s-setup generate hostsfile --merge.
The --merge flag instructs the generator to merge the current /etc/hosts file with
the generated records.
NOTE: Because write access to /etc/hosts needs root permissions, you can't simply
redirect the output to /etc/hosts. I used a temporary file with a move
operation: k8s-setup generate hostsfile --merge > /tmp/hosts && sudo mv /tmp/hosts /etc/hosts
Run the generator once before running the provisioner, because the provisioner needs
an 'apiserver' host record. After provisioning is done, run it again so that the ingress hosts are included.
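A sketch of the resulting workflow, using the merge command from above:
# 1. Generate hosts records before provisioning (provides the 'apiserver' host)
k8s-setup generate hostsfile --merge > /tmp/hosts && sudo mv /tmp/hosts /etc/hosts
# 2. Provision the cluster
k8s-setup provision
# 3. Regenerate so that the ingress hosts are included
k8s-setup generate hostsfile --merge > /tmp/hosts && sudo mv /tmp/hosts /etc/hosts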
Production Deployment
For production deployments, you need to:
1. Create an Ansible inventory file with your machines in it (see the sketch after this list). You need to assign the hosts to these groups:
   - lnxclp: All Linux control plane nodes
   - lnxclp_setup: One of the Linux control plane nodes, which will be the first control plane instance
   - lnxwrk: All Linux worker nodes
   - winwrk: All Windows worker nodes
2. Create a .yml file representing the variables of your environment. You can check the provided files in '/conf'. The 'defaults.yml' file contains the system default settings; you can override them in your custom configuration file.
3. Register the custom configuration by executing k8s-setup config set --file <path>. The path can be absolute, or relative to the repository root. By default, ./conf/vagrant.yml is selected. This information is stored in ~/k8s-setup/current-config and is persistent, so normally you only have to execute this once.
4. You may verify that everything is OK by running k8s-setup info
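A minimal inventory sketch for step 1, in Ansible's YAML inventory format (all hostnames and addresses are hypothetical):
# inventory.yml - hypothetical three-node production layout
all:
  children:
    lnxclp:
      hosts:
        clp1.example.com:
        clp2.example.com:
    lnxclp_setup:
      hosts:
        clp1.example.com:   # first control plane instance, must also be in lnxclp
    lnxwrk:
      hosts:
        wrk1.example.com:
    winwrk:
      hosts: {}             # no Windows workers in this example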
Provide the configuration in its own repository
k8s-setup doesn't care where the config file comes from. Just clone the
repository containing your configuration file, and register it with k8s-setup config set --file <path>.
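A sketch of that workflow (the repository URL and file name are hypothetical):
# Clone the repository holding your environment configuration
git clone https://example.com/my-cluster-config.git
# Register the configuration file it contains
k8s-setup config set --file my-cluster-config/production.yml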
Running the provisioners
By executing k8s-setup provision you start the provisioning process.
Because provisioning is idempotent, you can always provision the 'all' scope. The ability to select a scope explicitly is just a time-saver when you know what has changed; this is mainly to cut wait time when developing and testing k8s-setup. If you don't know exactly what has changed, always provision the 'all' scope.
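A sketch of scope selection (the exact argument syntax for a scope is an assumption based on the scopes described below):
# Provision everything (the default; always safe because provisioning is idempotent)
k8s-setup provision
# Provision a single scope when you know what has changed (assumed syntax)
k8s-setup provision hosts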
The following steps are performed in 'vagrant' mode:
- The configuration is validated.
- Vagrant only: The relevant configuration settings are reflected into environment variables.
- Vagrant only: The ./lib/vagrant/Vagrantfile is used to start the VMs, depending on the reflected environment variables.
- Vagrant only: The Vagrantfile declares the following provisioners:
  - Host: Updates the /etc/hosts files on each machine, because we have no DNS server in the network.
  - Ansible: Runs the ./lib/ansible/vagrant.yml playbook. This playbook only performs connectivity tests; the provisioning playbooks are launched later. The Ansible provisioner also generates an inventory file, which is used in the next step.
- Depending on the scope, the following playbooks are executed:
  - all (default): Runs the hosts, cluster and incluster playbooks.
  - hosts: Provisions the machines so that everything is installed and OS-level configuration is applied, but no kubeadm operation to deploy the cluster is performed.
  - cluster: Provisions by kubeadm operations like kubeadm init or kubeadm join to initialize the cluster or add new nodes.
  - incluster: Provisions all Kubernetes objects in an existing cluster.
Get Information
By using the k8s-setup info command, you get metadata about the k8s-setup
state and configuration.
$ k8s-setup info
config-files:
- conf/defaults.yml
- localpath/k8s-setup/conf/vagrant.yml
configuration:
ansible_inventory_filepath: lib/vagrant/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
cluster-dns-name: k8stest.local
k8s-version: 1.16.3
lnxclp-nodes: 1
lnxwrk-nodes: 1
mode: vagrant
winwrk-nodes: 0
version: v0.1-alpha-base-7-gd978c740a5c526+dirty
When running the 'cluster' provisioning scope, a ConfigMap named 'k8s-setup-info' is created in the 'default' namespace. It contains this information in the 'setup-info' field. There is an additional field named 'sys-info', containing information like the date, the user, and the name of the provisioning host.
Please note that the provisioning logic doesn't read this data; it only serves informational purposes.
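To inspect that data in a running cluster (assuming kubectl access to it):
# Show the setup-info and sys-info fields stored by the 'cluster' scope
kubectl get configmap k8s-setup-info -n default -o yaml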
Enable Shell Completion
With the help of the wonderful click library, k8s-setup has built-in completion for bash and zsh.
You need to activate it like this:
# bash (you may put this in .bashrc)
eval "$(_K8S_SETUP_COMPLETE=source k8s-setup)"
# zsh (you may put this in .zshrc)
eval "$(_K8S_SETUP_COMPLETE=source_zsh k8s-setup)"
Output debug Messages
You can enable debug-level logging by setting the K8S_SETUP_DEBUG environment
variable to '1'.
You can also use the --debug command line option.
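For example (the placement of --debug relative to the subcommand is an assumption):
# Via the environment variable
K8S_SETUP_DEBUG=1 k8s-setup provision
# Via the command line option (assumed placement)
k8s-setup --debug provision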
Vagrant Development Environment
Networking
k8s-setup uses a default IP plan to set up the cluster network: there is a configurable /24 network which is used for the Vagrant boxes.
# conf/defaults.yml
global_vagrant_hosts_network: 10.0.0.*
You only need to change this setting if you have conflicting IP addresses in your LAN.
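To change it, override the value in your custom configuration file (the network below is a hypothetical example):
# my-config.yml - overrides the value from conf/defaults.yml
global_vagrant_hosts_network: 192.168.56.*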
The following addresses are used:
- 10.0.0.1: Reserved (gateway)
- 10.0.0.2: Virtual IP (keepalived) for the apiserver
- 10.0.0.10-19: Control plane nodes
- 10.0.0.20-29: Linux worker nodes
- 10.0.0.30-39: Windows worker nodes
- 10.0.0.40-49: Non-cluster nodes (like test clients or an AD server)
After the hosts are provisioned, VirtualBox creates a route on your host, like:
$ ip route | grep 10.0.0.0
10.0.0.0/24 dev vboxnet3 proto kernel scope link src 10.0.0.1
So you can access the network directly from your host. You may add an entry to your
/etc/hosts file, like 10.0.0.2 apiserver.k8stest.local