
The Kubeinit CLI

Project description

The KUBErnetes INITiator

What is KubeInit?

KubeInit provides Ansible playbooks and roles for the deployment and configuration of multiple Kubernetes distributions. KubeInit's mission is to provide a fully automated way to deploy a curated list of prescribed architectures in a single command.

Documentation

KubeInit's documentation is hosted in this same repository.

Periodic jobs status

There is a set of predefined scenarios that are tested on a weekly basis; the results of those executions are presented on the periodic job execution page.

KubeInit supported scenarios

K8s distribution: OKD (testing K8S, RKE, EKS)

Driver: Libvirt

OS: CentOS/Fedora, Debian/Ubuntu

Requirements

  • A freshly deployed server with enough RAM and disk space (120 GB of RAM and 300 GB of disk) running CentOS 8 (it should also work on Fedora/Debian/Ubuntu hosts).
  • Adjust the inventory file to suit your needs.
  • By default the first hypervisor node is called nyctea (defined in the inventory). Replace it with the hostname you specified if you changed it. You can also use the names in the inventory as aliases for your own hostnames using ~/.ssh/config (described in more detail below).
  • Have passwordless root access with certificates.
  • Have podman installed on the machine where you are running ansible-playbook.

Check if nyctea is reachable via passwordless root access

If you need to set up aliases in SSH for nyctea, tyto, strix, or any other hypervisor hosts that you have added or that are mentioned in the inventory, you can create a file named config in ~/.ssh with contents like this:

echo "Host nyctea" >> ~/.ssh/config
echo "  Hostname actual_hostname" >> ~/.ssh/config

For example, if you have a deployed server called server.mysite.local that you can already SSH into as root, you can create a ~/.ssh/config with these contents:

Host nyctea
  Hostname server.mysite.local
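
To confirm that the alias resolves as expected, ssh -G prints the effective configuration for a host without connecting (using the nyctea alias from the example above):

```shell
# Print the effective ssh configuration for the alias without connecting;
# the hostname line should show the real server name from ~/.ssh/config
ssh -G nyctea | grep -i '^hostname '
```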

Now you should be ready to access your Ansible host like this:

ssh root@nyctea

If it fails, check whether you have an SSH key, and generate one if you don't:

if [ ! -f ~/.ssh/id_rsa ]; then
  ssh-keygen
fi
ssh-copy-id -i ~/.ssh/id_rsa.pub root@nyctea

How to run

There are two ways of launching KubeInit: directly, using the ansible-playbook command, or by running it inside a container.

Directly executing the deployment playbook

The following example command will deploy an OKD 4.8 cluster with a 3 node control-plane and 1 worker node in a single command and in approximately 30 minutes.

# Install the requirements assuming python3/pip3 is installed
pip3 install \
        --upgrade \
        pip \
        shyaml \
        ansible \
        netaddr

# Get the project's source code
git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit

# Install the Ansible collection requirements
ansible-galaxy collection install --force --requirements-file kubeinit/requirements.yml

# Build and install the collection
rm -rf ~/.ansible/collections/ansible_collections/kubeinit/kubeinit
ansible-galaxy collection build kubeinit --verbose --force --output-path releases/
ansible-galaxy collection install --force --force-with-deps releases/kubeinit-kubeinit-$(shyaml get-value version < kubeinit/galaxy.yml).tar.gz

# Run the playbook
ansible-playbook \
    -v --user root \
    -e kubeinit_spec=okd-libvirt-3-1-1 \
    -e hypervisor_hosts_spec='[{"ansible_host":"nyctea"},{"ansible_host":"tyto"}]' \
    ./kubeinit/playbook.yml
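
The kubeinit_spec string appears to encode the distribution, driver, and node counts as distro-driver-controllers-workers-hypervisors (inferred from the okd-libvirt-3-1-1 example above, which matches the 3 control-plane / 1 worker deployment it describes; check the project documentation for the authoritative format). A quick sketch of how it breaks down:

```shell
# Split the spec on '-' (format inferred from the okd-libvirt-3-1-1 example:
# distro, driver, controller count, worker count, hypervisor count)
spec="okd-libvirt-3-1-1"
IFS=- read -r distro driver controllers workers hypervisors <<< "$spec"
echo "$distro on $driver: $controllers controllers, $workers workers, $hypervisors hypervisors"
# → okd on libvirt: 3 controllers, 1 workers, 1 hypervisors
```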

After provisioning any of the scenarios, your environment should be ready to go. To connect to the nodes from the hypervisor, use the IP addresses from the inventory files.

Running the deployment command from a container

The whole process is explained in the HowTo's. The following commands build a container image with the project inside it, then launch the container, executing the ansible-playbook command with all the standard ansible-playbook parameters.

KubeInit is built and installed when deploying from a container, as those steps are included in the Dockerfile; there is no need to build and install the collection locally if it is used through a container.

Note: When running the deployment from a container, nyctea cannot be 127.0.0.1; it needs to be the hypervisor's IP address. Also, when running the deployment as a user other than root, the keys need to be updated accordingly.

Running from the GIT repository

Note: Won't work with ARM.

git clone https://github.com/Kubeinit/kubeinit.git
cd kubeinit
podman build -t kubeinit/kubeinit .

podman run --rm -it \
    -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:z \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub:z \
    -v ~/.ssh/config:/root/.ssh/config:z \
    kubeinit/kubeinit \
        -v --user root \
        -e kubeinit_spec=okd-libvirt-3-1-1 \
        -i ./kubeinit/inventory.yml \
        ./kubeinit/playbook.yml

Running from a release

Install [jq](https://stedolan.github.io/jq/)

# Get latest release tag name
TAG=$(curl --silent "https://api.github.com/repos/kubeinit/kubeinit/releases/latest" | jq -r .tag_name)
podman run --rm -it \
    -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:z \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub:z \
    -v ~/.ssh/config:/root/.ssh/config:z \
    quay.io/kubeinit/kubeinit:$TAG \
        -v --user root \
        -e kubeinit_spec=okd-libvirt-3-1-1 \
        -i ./kubeinit/inventory.yml \
        ./kubeinit/playbook.yml
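
If jq is not available, the same tag lookup can be sketched with python3's standard library instead (same GitHub API endpoint as above):

```shell
# Extract tag_name from the GitHub API response using python3 instead of jq
TAG=$(curl --silent "https://api.github.com/repos/kubeinit/kubeinit/releases/latest" \
      | python3 -c 'import json, sys; print(json.load(sys.stdin)["tag_name"])')
```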

HowTo's and presentations

Supporters

Docker, Google Cloud Platform

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

kubeinit-2.2.2.tar.gz (11.6 kB view details)

Uploaded Source

File details

Details for the file kubeinit-2.2.2.tar.gz.

File metadata

  • Download URL: kubeinit-2.2.2.tar.gz
  • Upload date:
  • Size: 11.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.12

File hashes

Hashes for kubeinit-2.2.2.tar.gz
  • SHA256: a490ede2c15cb783ed23cee297521b1ee21d966033c0d047ced7db1cc30fd042
  • MD5: ac257518617cd34fd765450a18c32032
  • BLAKE2b-256: a3877667ad68d0aa508ac389f1d85c4e1302bd3326b8443d95fd0e53a7184d3c

