Metalnetes
Tools for managing multiple Kubernetes 1.13.4 clusters on KVM (3 CentOS 7 VMs) running on a bare metal server (tested on Ubuntu 18.04).
This will install:
A Kubernetes cluster deployed on 3 CentOS 7 VMs (100 GB disks, static IPs) installed using KVM
Rook Ceph Storage Cluster for Persistent Volumes
Grafana + Prometheus
Optional - a Stock Analysis Engine that includes:
Minio (on-premise S3)
Redis cluster
Jupyter
SSH access
Getting Started
git clone https://github.com/jay-johnson/metalnetes.git metalnetes
cd metalnetes
Start VMs and Kubernetes Cluster
./boot.sh
View Kubernetes Nodes
./tools/show-nodes.sh
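You can also query the nodes directly with kubectl from any host that has the synced KUBECONFIG; a quick sanity check is:
kubectl get nodes -o wide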
Monitoring the Kubernetes Cluster
Log in to Grafana from a browser:
Username: trex
Password: 123321
Grafana comes ready-to-go with these starting dashboards:
View Kubernetes Pods in Grafana
View Rook Ceph Cluster in Grafana
View Redis Cluster in Grafana
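If Grafana is not reachable by an FQDN or ingress yet, a kubectl port-forward is a quick way to reach the login page; the namespace, service name, and port below are assumptions, so adjust them to whatever the grep reports:
# find where Grafana is running (namespace and service name may differ):
kubectl get svc --all-namespaces | grep -i grafana
# forward it to http://localhost:3000 (assumes a service named grafana in the monitoring namespace; adjust the port to match the service):
kubectl -n monitoring port-forward svc/grafana 3000:80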
Changing Between Kubernetes Clusters
If you create a new k8.env file for each cluster, like dev_k8.env and prod_k8.env, you can quickly toggle between clusters using:
Load dev Cluster Config file
source dev_k8.env
Use the metal bash function to sync the KUBECONFIG between the dev cluster and the local host
metal
Load prod Cluster Config file
source prod_k8.env
Use the metal bash function to sync the KUBECONFIG between the prod cluster and the local host
metal
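A per-cluster config only needs to differ where the clusters differ. The sketch below is hypothetical; other than KUBECONFIG (which kubectl itself reads), the variable names are placeholders, so copy the real names from the k8.env example in this repo:
# hypothetical dev_k8.env sketch - copy the real variable names from k8.env
export KUBECONFIG=$HOME/.kube/dev-config   # kubeconfig the metal function keeps in sync
export K8_DOMAIN=dev.example.com           # placeholder for a cluster-specific setting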
Customize VMs and Manage Kubernetes Deployments
These are the steps the automated ./boot.sh runs, in order, and they are useful for customizing and debugging your Kubernetes deployment.
Create VMs Using KVM on Ubuntu
Install KVM and arp-scan to find each VM's IP address
This guide was written using an Ubuntu bare metal server, but it is just KVM under the hood. Please feel free to open a PR if you know the commands for CentOS, Fedora or RHEL and I will add them.
cd kvm
sudo ./install-kvm.sh
Start VMs
This will create 3 VMs by default using an internal fork of the giovtorres/kvm-install-vm script. The tool provisions VM disks using qemu-img and will prompt for root access when needed.
./start-cluster-vms.sh
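For reference, the disk provisioning step boils down to the standard qemu-img workflow; a minimal sketch for one 100 GB VM disk looks like the following (the image path is only an example, adjust it to your storage pool):
# roughly what provisioning one VM disk looks like (path is an example)
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/m10.qcow2 100G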
Assign IPs to Router or DNS server
This tool uses arp-scan (which requires root privileges) to find all active IP addresses on the network bridge and then looks up each VM's IP by its MAC address.
./find-vms-on-bridge.sh
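Running arp-scan by hand is a quick sanity check as well; virbr0 is the default libvirt bridge and is only an assumption here, so substitute your K8_VM_BRIDGE value:
# list every live IP/MAC on the bridge (assumes the default libvirt bridge virbr0)
sudo arp-scan --interface=virbr0 --localnet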
Alternatively, you can set the entries in /etc/hosts:
192.168.0.110 m10 m10.example.com master10.example.com
192.168.0.111 m11 m11.example.com master11.example.com
192.168.0.112 m12 m12.example.com master12.example.com
Bootstrap Nodes
Once the VMs are routable by their FQDNs (e.g. m10.example.com), you can use the bootstrap tool to start preparing the cluster nodes and confirm SSH access works with all of them.
./bootstrap-new-vms.sh
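To spot-check that SSH reaches every node, a one-liner like this (using the hostnames from the /etc/hosts example above; adjust the user if you provisioned a different one) is enough:
# confirm ssh reaches each node (adjust user/hostnames to your setup)
for h in m10 m11 m12; do ssh root@${h}.example.com uptime; done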
Install Kubernetes on CentOS 7
Configuration
Now that the VMs are ready, you can use the k8.env CLUSTER_CONFIG example file for managing Kubernetes clusters on your own VMs. This file becomes the starting point for starting, restarting, and managing clusters.
cd ..
./install-centos-vms.sh
VM and Kubernetes Node Configuration
Helm and Tiller Configuration
Cluster Storage Configuration
Private Docker Registry
Start Kubernetes Cluster
With the 3 VMs set up using install-centos-vms.sh, follow these steps to stand up and tear down a Kubernetes cluster.
Load the CLUSTER_CONFIG environment
# from within the repo's root dir:
export CLUSTER_CONFIG=$(pwd)/k8.env
Fully Clean and Reinitialize the Kubernetes Cluster
./clean.sh
Start Kubernetes Cluster with a Private Docker Registry + Rook Ceph
./start.sh
Check Kubernetes Nodes
./tools/show-labels.sh
Cluster Join Tool
If you want to reboot the VMs and have the nodes re-join and rebuild the Kubernetes cluster, use:
./join.sh
(Optional Validation) - Deploy Stock Analysis Engine
This repository was created after trying to decouple my AI Kubernetes cluster for analyzing network traffic from my Stock Analysis Engine (ae), which uses many deep neural networks to predict future stock prices during live-trading hours, so the two no longer share the same Kubernetes cluster. Additionally, with the speed ae is moving, I am looking to keep exploring more high availability solutions and configurations to ensure the intraday data collection never dies (hopefully out of the box too!).
Deploy AE
./deploy-ae.sh
Deployment Tools
Rook-Ceph
Deploy rook-ceph using the Advanced Configuration
./deploy-rook-ceph.sh
Confirm Rook-Ceph Operator Started
./rook-ceph/describe-operator.sh
Private Docker Registry
Deploy a private docker registry for the cluster with:
./deploy-registry.sh
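Once the registry is up, pushing an image to it follows the normal docker tag/push flow; the registry address below is a placeholder, so use the host:port your k8.env configures (and make sure the docker daemons either trust its TLS cert or list it under insecure-registries):
# placeholder registry address - use the host:port from your k8.env
docker tag myapp:latest registry.example.com:5000/myapp:latest
docker push registry.example.com:5000/myapp:latest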
Deploy Helm
Deploy helm:
./deploy-helm.sh
Deploy Tiller
Deploy tiller:
./deploy-tiller.sh
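To verify helm and tiller are wired together (helm 2-era clusters run tiller as a deployment, typically in kube-system), a quick check is:
# confirm the tiller pod is running and the helm client can reach it
kubectl get pods --all-namespaces | grep tiller
helm version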
Delete Cluster VMs
./kvm/_uninstall.sh
Background and Notes
Customize the VM install steps run during boot using cloud-init-script.sh.
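For example, to bake extra packages into every node at first boot, you could append to the script; this assumes cloud-init-script.sh is an ordinary shell script executed on each CentOS 7 VM, so the yum call below is only an illustrative addition:
# illustrative addition to cloud-init-script.sh (runs on each CentOS 7 VM at boot)
yum install -y tmux git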
License
Apache 2.0 - Please refer to the LICENSE for more details.
FAQ
What IP did my VMs get?
Find VMs by MAC address on the K8_VM_BRIDGE bridge device using:
./kvm/find-vms-on-bridge.sh
Find your MAC addresses with a tool that uses arp-scan to list all IP addresses on the configured bridge device (K8_VM_BRIDGE):
./kvm/list-bridge-ips.sh
Why Aren't All Rook Ceph Operators Starting?
Restart the cluster if you see an error like this when looking at the rook-ceph-operator:
# find pods:
kubectl get pods -n rook-ceph-system | grep operator
kubectl -n rook-ceph-system describe po rook-ceph-operator-6765b594d7-j56mw
Warning FailedCreatePodSandBox 7m56s kubelet, m12.example.com Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9ab1c663fc53f75fa4f0f79effbb244efa9842dd8257eb1c7dafe0c9bad1ee6c" network for pod "rook-ceph-operator-6765b594d7-j56mw": NetworkPlugin cni failed to set up pod "rook-ceph-operator-6765b594d7-j56mw_rook-ceph-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.2.1/24
./clean.sh
./deploy-rook-ceph.sh
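If you would rather not rebuild the whole cluster, a commonly used manual workaround for this specific "cni0 already has an IP address" error is to delete the stale bridge on the affected node (m12.example.com in the log above) and let the CNI plugin recreate it; this is an assumption about your setup, so use it at your own risk:
# on the affected node only - remove the stale cni0 bridge so the CNI plugin can recreate it
sudo ip link set cni0 down
sudo ip link delete cni0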