

Project description

Metalnetes

Tools for managing multiple Kubernetes 1.13.4 clusters on KVM (3 CentOS 7 VMs) running on a bare metal server (tested on Ubuntu 18.04).

https://i.imgur.com/8uvAcgF.png

This will install:

  • A Kubernetes cluster deployed on 3 CentOS 7 VMs (using 100 GB of storage) with static IPs, installed using KVM

  • Rook Ceph Storage Cluster for Persistent Volumes

  • Grafana + Prometheus

  • Optional - Stock Analysis Engine that includes:
    - Minio (on-premise S3)
    - Redis cluster
    - Jupyter

  • SSH access

Getting Started

git clone https://github.com/jay-johnson/metalnetes.git metalnetes
cd metalnetes

Start VMs and Kubernetes Cluster

./boot.sh

View Kubernetes Nodes

./tools/show-nodes.sh
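
If you prefer to query the cluster directly, show-nodes.sh can be approximated with kubectl; a minimal sketch, assuming your KUBECONFIG already points at the new cluster:

# hypothetical equivalent of ./tools/show-nodes.sh:
# list every node with its role, version and internal IP
kubectl get nodes -o wide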

Monitoring the Kubernetes Cluster

Log in to Grafana from a browser:

Username: trex
Password: 123321

https://grafana.example.com
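
If grafana.example.com does not resolve yet, you can reach Grafana over a temporary port-forward instead; a hedged sketch (the monitoring namespace, service name and service port below are assumptions and may differ in this deployment):

# forward local port 3000 to the in-cluster Grafana service, then browse to http://localhost:3000
kubectl -n monitoring port-forward svc/grafana 3000:80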

Grafana comes ready-to-go with these starting dashboards:

View Kubernetes Pods in Grafana

https://i.imgur.com/GHo7dbd.png

View Rook Ceph Cluster in Grafana

https://i.imgur.com/wptrQW2.png

View Redis Cluster in Grafana

https://i.imgur.com/kegYzXZ.png

Customize VMs and Manage Kubernetes Deployments

These are the steps that the automated ./boot-new-cluster.sh runs, in order, so you can customize and debug your Kubernetes deployment.

Create VMs Using KVM on Ubuntu

  1. Install KVM and arp-scan to find each VM’s IP address

    This guide was written using an Ubuntu bare metal server, but it is just KVM under the hood. Please feel free to open a PR if you know the commands for CentOS, Fedora or RHEL and I will add them.

    cd kvm
    sudo ./install-kvm.sh
  2. Start VMs

    This will create 3 VMs by default using an internal fork of the giovtorres/kvm-install-vm script. The tool provisions VM disks using qemu-img and will prompt for root access when needed.

    ./start-cluster-vms.sh
  3. Assign IPs to Router or DNS server

    This tool uses arp-scan to find all active IP addresses on the network bridge, then looks up each VM’s IP by its MAC address; it requires root privileges (a by-hand equivalent is sketched after this list).

    ./find-vms-on-bridge.sh

    Alternatively, you can set the entries in /etc/hosts:

    192.168.0.110   m10 m10.example.com master10.example.com
    192.168.0.111   m11 m11.example.com master11.example.com
    192.168.0.112   m12 m12.example.com master12.example.com
  4. Bootstrap Nodes

    Once the VMs are routable by their FQDNs (e.g. m10.example.com), you can use the bootstrap tool to start preparing the cluster nodes and confirm SSH access works with all nodes.

    ./bootstrap-new-vms.sh
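
As referenced in step 3, the bridge scan can also be reproduced by hand; a hedged sketch, where br0 and the MAC address are placeholders for your own K8_VM_BRIDGE device and VM MAC:

# list every active IP on the bridge (requires root), then match a VM by its MAC address
sudo arp-scan --interface=br0 --localnet | grep -i "52:54:00:aa:bb:cc"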

Install Kubernetes on CentOS 7

Configuration

Now that the VMs are ready, you can use the k8.env CLUSTER_CONFIG example file for managing Kubernetes clusters on your own VMs. This step becomes the starting point for starting, restarting and managing clusters.

cd ..
./install-centos-vms.sh
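
Since the tooling is built for managing more than one cluster, one hedged way to keep a second cluster's settings is to work from a copy of k8.env and point CLUSTER_CONFIG at whichever file you are managing (the k8-dev.env file name below is just an example):

# keep one env file per cluster and point CLUSTER_CONFIG at the one you are working on
cp k8.env k8-dev.env
export CLUSTER_CONFIG=$(pwd)/k8-dev.env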

VM and Kubernetes Node Configuration

Helm and Tiller Configuration

Cluster Storage Configuration

Private Docker Registry

Start Kubernetes Cluster

With the 3 VMs set up using install-centos-vms.sh, follow these steps to stand up and tear down a Kubernetes cluster.

Load the CLUSTER_CONFIG environment

# from within the repo's root dir:
export CLUSTER_CONFIG=$(pwd)/k8.env

Fully Clean and Reinitialize the Kubernetes Cluster

./clean.sh

Start Kubernetes Cluster with a Private Docker Registry + Rook Ceph

./start.sh

Check Kubernetes Nodes

./tools/show-labels.sh
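
show-labels.sh can be approximated with kubectl if you want to inspect the node labels directly; a minimal sketch:

# hypothetical equivalent of ./tools/show-labels.sh:
# print each node together with the labels assigned to it
kubectl get nodes --show-labels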

Cluster Join Tool

If you want to reboot VMs and have the nodes re-join and rebuild the Kubernetes cluster, use:

./join.sh
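
Under the hood, re-joining nodes after a reboot amounts to generating a fresh join command on the primary master and running it on the other nodes; a hedged sketch assuming the cluster was stood up with kubeadm (join.sh may automate this differently):

# on the primary master: print a join command with a fresh bootstrap token
kubeadm token create --print-join-command
# on each rebooted node: run the printed "kubeadm join ..." command as root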

(Optional Validation) - Deploy Stock Analysis Engine

This repository was created after trying to decouple my AI Kubernetes cluster for analyzing network traffic from my Stock Analysis Engine (ae), which uses many deep neural networks to predict future stock prices during live-trading hours, so the two no longer share the same Kubernetes cluster. Additionally, with the speed ae is moving, I am looking to keep exploring more high availability solutions and configurations to ensure the intraday data collection never dies (hopefully out of the box too!).

Deploy AE

./deploy-ae.sh

Deployment Tools

Rook-Ceph

Deploy rook-ceph using the Advanced Configuration

./deploy-rook-ceph.sh

Confirm Rook-Ceph Operator Started

./rook-ceph/describe-operator.sh
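
The same check can be done with kubectl directly; a hedged sketch (the operator pod name is generated and will differ in your cluster):

# confirm the rook-ceph-operator pod is Running, then inspect its events if it is not
kubectl -n rook-ceph-system get pods | grep operator
kubectl -n rook-ceph-system describe pod <rook-ceph-operator-pod-name>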

Private Docker Registry

Deploy a private docker registry for the cluster with:

./deploy-registry.sh
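
Once the registry pods are up, you can confirm the registry answers the Docker Registry v2 API; a hedged sketch (the host name and port are placeholders, not values taken from this repo's config):

# an empty repository list means the registry is reachable over TLS
curl -k https://registry.example.com:5000/v2/_catalog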

Deploy Helm

Deploy helm:

./deploy-helm.sh

Deploy Tiller

Deploy tiller:

./deploy-tiller.sh
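
After both scripts finish, a quick way to confirm helm can reach tiller (Helm 2's in-cluster component) is below; this assumes tiller was installed into kube-system, the Helm 2 default:

# the tiller pod normally runs in kube-system
kubectl -n kube-system get pods | grep tiller
# a healthy install prints both the client and the server (tiller) version
helm version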

Delete Cluster VMs

./kvm/_uninstall.sh

Background and Notes

Customize the VM install steps done during boot-up using cloud-init-script.sh.
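
For example, extra packages or host tweaks can be appended to cloud-init-script.sh so every VM picks them up on first boot; a hedged sketch (the package names and sysctl setting are only illustrations, not steps this repo performs):

# illustrative additions to cloud-init-script.sh - runs inside each CentOS 7 VM on first boot
yum install -y htop net-tools
echo "vm.swappiness=0" >> /etc/sysctl.conf && sysctl -p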

License

Apache 2.0 - Please refer to the LICENSE for more details.

FAQ

What IP did my VMs get?

Find VMs by their MAC addresses on the K8_VM_BRIDGE bridge device using:

./kvm/find-vms-on-bridge.sh

Find your MAC addresses with a tool that uses arp-scan to list all IP addresses on the configured bridge device (K8_VM_BRIDGE):

./kvm/list-bridge-ips.sh

Why Aren't All Rook Ceph Operators Starting?

Restart the cluster if you see an error like this when looking at the rook-ceph-operator:

# find pods: kubectl get pods -n rook-ceph-system | grep operator
kubectl -n rook-ceph-system describe po rook-ceph-operator-6765b594d7-j56mw
Warning  FailedCreatePodSandBox  7m56s                   kubelet, m12.example.com  Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ab1c663fc53f75fa4f0f79effbb244efa9842dd8257eb1c7dafe0c9bad1ee6c" network for pod "rook-ceph-operator-6765b594d7-j56mw": NetworkPlugin cni failed to set up pod "rook-ceph-operator-6765b594d7-j56mw_rook-ceph-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
./clean.sh
./deploy-rook-ceph.sh
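
If you would rather not rebuild the whole cluster, this error usually means the node kept a stale cni0 bridge from a previous cluster install; a hedged manual fix is to delete the bridge on the affected node (m12.example.com in the example above) so the CNI plugin recreates it with the new subnet:

# run on the node reporting the error
sudo ip link set cni0 down
sudo ip link delete cni0
sudo systemctl restart kubelet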

