The Kubeinit CLI
The KUBErnetes INITiator
What is KubeInit?
KubeInit provides Ansible playbooks and roles for deploying and configuring multiple Kubernetes distributions. KubeInit's mission is to provide a fully automated way to deploy, with a single command, a curated list of prescribed architectures.
Documentation
KubeInit's documentation is hosted in this same repository.
Periodic jobs status
There is a set of predefined scenarios that are tested on a weekly basis; the results of those executions are presented in the periodic job execution page.
KubeInit supported scenarios
K8s distribution: OKD (testing K8S, RKE, EKS)
Driver: Libvirt
OS: CentOS/Fedora, Debian/Ubuntu
Requirements
- A freshly deployed server with enough RAM and disk space (120 GB of RAM and 300 GB of disk) running CentOS 8 (it should also work on Fedora/Debian/Ubuntu hosts).
- Adjust the inventory file to suit your needs, e.g. the number of worker nodes your cluster will have.
- By default the hypervisor node is called nyctea (defined in the inventory). Replace it with the hostname you specified if you changed it.
- Have passwordless root access to the hypervisor using SSH keys.
- Have podman installed on the machine where you run ansible-playbook.
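Before starting, a quick preflight check can save a failed run. This is a minimal sketch, not part of KubeInit itself; the thresholds come from the requirements above, and you should adjust them to your scenario:

```shell
# Rough preflight check against the suggested 120 GB RAM / 300 GB disk
mem_gb=$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "RAM: ${mem_gb} GB (suggested >= 120), disk: ${disk_gb} GB (suggested >= 300)"
# Confirm the tools this guide uses are on PATH
for cmd in ssh podman ansible-playbook; do
  command -v "$cmd" >/dev/null || echo "missing: $cmd"
done
```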
Check if nyctea is reachable
ping nyctea
nslookup nyctea
If you are not able to reach the hosts, edit your /etc/hosts, your DNS or your inventory files.
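If nyctea does not resolve, one option is a static entry in /etc/hosts on the machine running ansible-playbook. The address below is a placeholder; use your hypervisor's real IP:

```
192.168.0.10    nyctea
```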
Passwordless root access
ssh root@nyctea
If it fails, check whether you have an SSH key, and generate one if you don't:
if [ ! -f /root/.ssh/id_rsa ]; then
  ssh-keygen
fi
ssh-copy-id -i /root/.ssh/id_rsa root@nyctea
How to run
There are two ways of launching Kubeinit, directly using the ansible-playbook command, or by running it inside a container.
Directly executing the deployment playbook
The following example command will deploy a multi-master OKD 4.5 cluster with 1 worker node in a single command and in approximately 30 minutes.
# Install the requirements assuming python3/pip3 is installed
pip3 install \
--upgrade \
pip \
shyaml \
ansible \
netaddr
# Get the project's source code
git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit
# Install the Ansible collection requirements
ansible-galaxy collection install --force -r kubeinit/requirements.yml
# Build and install the collection
rm -rf ~/.ansible/collections/ansible_collections/kubeinit/kubeinit
ansible-galaxy collection build -v --force --output-path releases/
ansible-galaxy collection install --force --force-with-deps releases/kubeinit-kubeinit-$(shyaml get-value version < galaxy.yml).tar.gz
# Run the playbook
ansible-playbook \
--user root \
-v -i ./hosts/okd/inventory \
--become \
--become-user root \
./playbooks/okd.yml
After provisioning any of the scenarios, you should have your environment ready to go. To connect to the nodes from the hypervisor use the IP addresses from the inventory files.
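The inventory is an INI-style Ansible file, so the node addresses can be listed with a one-liner. The sample inventory below is hypothetical (group and host names vary per distribution); the real file for OKD ships in ./hosts/okd/inventory:

```shell
# Hypothetical sample inventory; the real one is ./hosts/okd/inventory
cat > /tmp/sample-inventory <<'EOF'
[controller_nodes]
controller-01 ansible_host=10.0.0.1
[compute_nodes]
compute-01 ansible_host=10.0.0.2
EOF
# Print every node's address so you know where to ssh from the hypervisor
awk -F'ansible_host=' '/ansible_host=/ {print $2}' /tmp/sample-inventory
```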
Running the deployment command from a container
The whole process is explained in the HowTos. The following commands build a container image with the project inside it, and then launch the container, executing the ansible-playbook command with all the standard ansible-playbook parameters. When deploying from a container there is no need to build and install the collection locally, as those steps are part of the Dockerfile.
Note: When running the deployment from a container, nyctea can not be 127.0.0.1; it needs to be the hypervisor's IP address. Also, when running the deployment as a user other than root, the SSH keys need to be updated accordingly.
Running from the GIT repository
git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit
podman build -t kubeinit/kubeinit .
run_as='root'
podman run --rm -it \
-v ~/.ssh/id_rsa:/${run_as}/.ssh/id_rsa:z \
-v ~/.ssh/id_rsa.pub:/${run_as}/.ssh/id_rsa.pub:z \
-v /etc/hosts:/etc/hosts \
kubeinit/kubeinit \
--user ${run_as} \
-v -i ./hosts/okd/inventory \
--become \
--become-user ${run_as} \
./playbooks/okd.yml
Running from a release
# Get latest release tag name
TAG=$(curl --silent "https://api.github.com/repos/kubeinit/kubeinit/releases/latest" | jq -r .tag_name)
podman run --rm -it \
-v ~/.ssh/id_rsa:/root/.ssh/id_rsa:z \
-v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub:z \
-v /etc/hosts:/etc/hosts \
quay.io/kubeinit/kubeinit:$TAG \
--user root \
-v -i ./hosts/okd/inventory \
--become \
--become-user root \
./playbooks/okd.yml
HowTo's and presentations
- The easiest and fastest way to deploy an OKD 4.5 cluster in a Libvirt/KVM Host.
- KubeInit external access for OpenShift/OKD deployments with Libvirt.
- Deploying KubeInit from a container.
- KubeInit: Bringing good practices from the OpenStack ecosystem to improve the way OKD/OpenShift deploys, slides.
- Persistent Volumes And Claims In KubeInit
- Deploying Multiple KubeInit Clusters In The Same Hypervisor
- KubeInit 4-In-1 - Deploying Multiple Kubernetes Distributions (K8S, OKD, RKE, And CDK) With The Same Platform