
Container Database Backup - A tool to back up your database containers


Container Database Buddy - CoDaBuddy

We use the buddy system. No more flyin' solo!

You need somebody watching your back at all times!

           Rex Kwon Do - Napoleon Dynamite

(Placeholder logo. Source: https://logomakr.com/)

A container-native database setup, backup and restore solution

Maintainer: tim.bleimehl@dzd-ev.de

Status: Alpha (WIP - do not use in production yet)


What is this (short)

CoDaBuddy helps you automate the setup and backup of databases running in a container environment (Kubernetes, Docker).

It relies heavily on configuration by labels (Docker labels, Kubernetes labels).

This means you only have to attach the right labels to your database containers and they will be ready to use and included in your daily backup.
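
Label-based discovery can be illustrated with plain Docker tooling. The command below is only a sketch of the idea (listing containers that carry the enable label), not a CoDaBuddy command:

# list all containers that carry the CoDaBuddy enable label
docker ps --filter "label=backup.dzd-ev.de/enabled=true"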

Features

  • Supported Databases:
    • MySQL
    • PostgreSQL
  • Automatic backup retention management (Daily, Weekly, Monthly, Yearly)
  • "Backup now!" wizard
  • Compressed backups
  • Automatic creation of databases and users

Basic Example

Docker example

Setup

Let's create a MySQL/MariaDB database via docker-compose:

version: "3.7"
services:
  mysql:
    image: mariadb:10
    ports:
      - 3306:3306
    container_name: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=mysuperpw
    restart: unless-stopped
    labels:
      - "backup.dzd-ev.de/enabled=true"
      - "backup.dzd-ev.de/type=mysql"
      - "backup.dzd-ev.de/username=root"
      - "backup.dzd-ev.de/password=mysuperpw"

Note the labels; these will direct our CoDaBuddy instance.

Start the DB:

docker-compose up -d
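
Once the container is up, you can double-check that the labels are attached, using plain Docker (illustration only, not part of CoDaBuddy):

docker inspect --format '{{ json .Config.Labels }}' mysql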

Let's write some data into our new database:

docker exec mysql /usr/bin/mysql -N -h127.0.0.1 -uroot -pmysuperpw -e "\
  CREATE DATABASE IF NOT EXISTS coda_test; \
  CREATE TABLE IF NOT EXISTS coda_test.my_table(id INT AUTO_INCREMENT, firstname VARCHAR(32), PRIMARY KEY (id)); \
  INSERT INTO coda_test.my_table(firstname) VALUES ('Anna'); \
  INSERT INTO coda_test.my_table(firstname) VALUES ('Thomas'); \
  commit;"
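
To confirm the rows were written, you can read them back. This is purely a verification step with the same MySQL client, unrelated to CoDaBuddy itself:

docker exec mysql /usr/bin/mysql -N -h127.0.0.1 -uroot -pmysuperpw -e "SELECT * FROM coda_test.my_table;"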

Backup

Now we can install CoDaBuddy via

pip3 install git+https://git.connect.dzd-ev.de/dzdtools/CoDaBuddy -U

And let's back up our DB:

coda-backup docker

That's it. We now have a directory ./backups/ in front of us, containing backups of all labeled databases.
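
You can inspect the result right away. The exact file names and subfolder layout are not documented here and depend on the CoDaBuddy version and retention settings, so this is just a way to look around:

ls -lR ./backups/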

Restore

Kubernetes example

Setup

First we create a sample PostgreSQL database deployment:

postgres-deployment.yaml

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
  - port: 5432
  selector:
    workloadselector: postgres01
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    backup.dzd-ev.de/auto-create: "true"
    backup.dzd-ev.de/enabled: "true"
    backup.dzd-ev.de/password: supersavepw
    backup.dzd-ev.de/type: postgres
    backup.dzd-ev.de/username: postgres
  annotations:
    backup.dzd-ev.de/auto-create-databases: |-
      [{
          "database": "coda_ps_test",
          "user": "coda_test",
          "password": "super_save_pw"
        }]
spec:
  selector:
    matchLabels:
      workloadselector: postgres01
  template:
    metadata:
      labels:
        workloadselector: postgres01
    spec:
      containers:
      - env:
        - name: POSTGRES_PASSWORD
          value: supersavepw
        image: postgres:12
        name: postgres01
        ports:
        - containerPort: 5432
          hostPort: 5432
          name: psport5432

kubectl apply -f postgres-deployment.yaml
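
A quick sanity check that the pod and service came up (standard kubectl, nothing CoDaBuddy-specific):

kubectl get pods -l workloadselector=postgres01
kubectl get svc postgres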

Setup Databases

Just for fun, we now use the auto-create feature of the CoDaBuddy Docker container to create our user and database:

docker pull registry-gl.connect.dzd-ev.de:443/dzdtools/codabuddy

docker run --rm -it --network=host -v ~/.kube/config:/.kube/config registry-gl.connect.dzd-ev.de:443/dzdtools/codabuddy auto-create --debug kubernetes --all-namespaces

This creates a new PostgreSQL user/role named coda_test with access to a newly created database named coda_ps_test, as defined in the backup.dzd-ev.de/auto-create-databases annotation.
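
To verify the result you can connect with the new role from inside the pod. This assumes the auto-create run succeeded and uses the deployment name postgres from the manifest above:

kubectl exec deploy/postgres -- psql -U coda_test -d coda_ps_test -c "SELECT current_user, current_database();"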

ToDo note:

Running CoDaBuddy as a pod inside the cluster does not work yet because the default service account lacks the required role authorization. The following command

kubectl run codabuddy --restart=Never --rm -i --image=registry-gl.connect.dzd-ev.de:443/dzdtools/codabuddy -- auto-create --debug kubernetes --all-namespaces

fails with: Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" at the cluster scope
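
One way to grant the missing permission is a ClusterRole bound to the default service account. This is only a minimal sketch derived from the error message above, not the project's recommended setup; the exact resources CoDaBuddy needs (pods/exec, deployments, ...) are an assumption and may have to be extended:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: codabuddy-discovery
rules:
# reading pods is what the Forbidden error above complains about
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
# assumed: CoDaBuddy probably also needs to exec into database pods and read workload metadata
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: codabuddy-discovery-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: codabuddy-discovery
subjects:
- kind: ServiceAccount
  name: default
  namespace: default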

Backup

Now we create a CronJob for CoDaBuddy that creates a daily backup every night at 00:00:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: codabuddy-backupjob
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: codabuddy-backupjob
            image: registry-gl.connect.dzd-ev.de:443/dzdtools/codabuddy
            imagePullPolicy: Always
            volumeMounts:
            - mountPath: /backup
              name: backup-vol
            - mountPath: /.kube/config
              name: kubeconf-vol
            command:
            - backup
            - --all-namespaces
          restartPolicy: OnFailure
          volumes:
          - name: backup-vol
            hostPath:
              # path on the cluster host where backups are stored. This is obviously just a test setup; do not do this in production
              path: /tmp/backup
              # this field is optional
              type: Directory
          - name: kubeconf-vol
            hostPath:
              # path to the kube config on the host. This is obviously just a test setup; do not do this in production
              path: /home/myname/.kube/config
              # this field is optional
              type: File
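
Assuming the manifest above is saved as codabuddy-cronjob.yaml (the file name is just an example), you can apply it and trigger a run manually instead of waiting for midnight:

kubectl apply -f codabuddy-cronjob.yaml
kubectl create job --from=cronjob/codabuddy-backupjob codabuddy-backup-manual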

DISCLAIMER/HINT: This is a simple alpha-state POC. A future version will include a more secure/regulated example of accessing the Kubernetes API via roles and kubectl proxy: https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/

Planned features

Current ToDo / Known Issues

  • Write docs
  • Timestamps are not in the current timezone?
  • Write example with proper productive access to Kubernetes API

Limitations

Auto database creation

  • All databases in one instance/container must have the same encoding and collation. At the moment there is no way to configure this on a per-database level.
  • All databases in one instance/container must share one backup user. At the moment there is no way to have multiple users accessing different databases in one instance/container.

Docker Volume Paths

  • /var/run/docker.sock:/var/run/docker.sock is needed to access the Docker API to find database containers to back up

  • /.kube/config is needed to access the Kubernetes API to find database containers to back up (to be refined in a future version)
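
Putting this together, a run of the CoDaBuddy container against the local Docker daemon could look roughly like the following. The volume mounts are the ones listed above; the trailing arguments (backup --debug docker) only mirror the pip CLI and the Kubernetes invocation shown earlier and are an assumption about the container's entrypoint, not a documented command:

docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(pwd)/backups:/backup \
  registry-gl.connect.dzd-ev.de:443/dzdtools/codabuddy backup --debug docker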

Dev Notes

https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/

Start Test Rancher K8s

docker run -d \
--restart=unless-stopped \
-p 80:80 -p 443:443 \
--privileged \
--name rancher \
-v /var/run/docker.sock:/var/run/docker.sock \
rancher/rancher:stable

