ZFS Uploader
ZFS snapshot to S3 uploader.
ZFS Uploader is a simple program for backing up full and incremental ZFS
snapshots to Amazon S3. It supports cron-based scheduling and can
automatically remove old snapshots and backups. A helpful CLI (zfsup) lets
you run jobs, restore from backups, and list existing backups.
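The distinction between full and incremental backups comes straight from the `zfs send` command line: a full backup streams a snapshot from scratch, while an incremental backup streams only the delta since an earlier snapshot. The sketch below illustrates this with a hypothetical helper (not zfs_uploader's actual API).

```python
# Illustrative sketch of how full vs. incremental ZFS sends are formed.
# The function name and structure are assumptions, not zfs_uploader's code.

def send_command(filesystem, snapshot, base_snapshot=None):
    """Build a `zfs send` argument list.

    A full backup streams the snapshot from scratch; an incremental
    backup (-i) streams only the delta since `base_snapshot`.
    """
    cmd = ["zfs", "send"]
    if base_snapshot is not None:
        cmd += ["-i", f"{filesystem}@{base_snapshot}"]
    cmd.append(f"{filesystem}@{snapshot}")
    return cmd

# Full backup of pool/filesystem at snapshot "20240101":
full = send_command("pool/filesystem", "20240101")
# Incremental backup relative to the previous snapshot:
incr = send_command("pool/filesystem", "20240102", base_snapshot="20240101")
```

The resulting stream is what gets uploaded to S3; restoring pipes it back through `zfs receive`.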
Features
- Backup/restore ZFS file systems
- Create incremental and full backups
- Automatically remove old snapshots and backups
- Use any S3 storage class type
- Helpful CLI
Requirements
- Python 3.6 or higher
- ZFS 0.8.1 or higher (untested on earlier versions)
Install Instructions
Commands should be run as root.
- Create a directory and virtual environment
mkdir /etc/zfs_uploader
cd /etc/zfs_uploader
virtualenv --python python3 env
- Install ZFS Uploader
source env/bin/activate
pip install zfs_uploader
ln -sf /etc/zfs_uploader/env/bin/zfsup /usr/local/sbin/zfsup
- Write configuration file
Please see the Configuration File section below for helpful configuration examples.
vi config.cfg
chmod 600 config.cfg
- Start service
cp zfs_uploader.service /etc/systemd/system/zfs_uploader.service
systemctl enable --now zfs_uploader
- List backups
zfsup list
Configuration File
The program reads backup job parameters from a configuration file. Default parameters may be set which then apply to all backup jobs. Multiple backup jobs can be set in one file.
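The configuration file follows standard INI conventions, so the `[DEFAULT]` inheritance can be seen with Python's configparser: every job section inherits the defaults and may override them. A minimal sketch, assuming an INI-compatible config format (bucket and section names below are made up):

```python
import configparser

# INI-style config: [DEFAULT] values apply to every job section, and a
# job section can override them. Section names are ZFS filesystems.
CONFIG = """
[DEFAULT]
bucket_name = my-bucket
region = us-east-1

[pool/filesystem]
cron = 0 2 * * *
region = us-west-2
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG)

job = parser["pool/filesystem"]
print(job["bucket_name"])  # inherited from [DEFAULT] -> my-bucket
print(job["region"])       # overridden in the job section -> us-west-2
```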
Parameters
bucket_name : str
S3 bucket name.
access_key : str
S3 access key.
secret_key : str
S3 secret key.
filesystem : str
ZFS filesystem.
prefix : str, optional
Prefix to be prepended to the S3 key.
region : str, default: us-east-1
S3 region.
endpoint : str, optional
S3 endpoint for alternative S3-compatible services.
cron : str, optional
Cron schedule in standard five-field format (minute hour day month weekday). Example: 0 2 * * *
max_snapshots : int, optional
Maximum number of snapshots.
max_backups : int, optional
Maximum number of full and incremental backups.
max_incremental_backups_per_full : int, optional
Maximum number of incremental backups per full backup.
storage_class : str, default: STANDARD
S3 storage class.
max_multipart_parts : int, default: 10000
Maximum number of parts to use in a multipart S3 upload.
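The max_multipart_parts parameter matters because S3 caps a multipart upload at 10,000 parts, so the part size must grow with the size of the stream being uploaded. A sketch of one way to pick a part size under that constraint (not necessarily how zfs_uploader sizes parts), assuming S3's 5 MiB minimum part size:

```python
import math

MIN_PART_SIZE = 5 * 1024 * 1024  # S3's minimum part size (except the last part)

def choose_part_size(estimated_bytes, max_multipart_parts=10000):
    """Pick a part size so the upload fits within max_multipart_parts.

    Illustrative only: the real uploader may size parts differently.
    """
    needed = math.ceil(estimated_bytes / max_multipart_parts)
    return max(MIN_PART_SIZE, needed)

# A 100 GiB stream with the default 10,000-part limit needs ~10.3 MiB parts;
# a small stream just uses the 5 MiB minimum.
print(choose_part_size(100 * 1024**3))  # -> 10737419
print(choose_part_size(1024**3))        # -> 5242880
```

Lowering max_multipart_parts therefore forces larger parts, which can help with services that meter requests per part.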
Examples
Multiple full backups
[DEFAULT]
bucket_name = BUCKET_NAME
region = us-east-1
access_key = ACCESS_KEY
secret_key = SECRET_KEY
storage_class = STANDARD
[pool/filesystem]
cron = 0 2 * * *
max_snapshots = 7
max_incremental_backups_per_full = 6
max_backups = 7
Filesystem is backed up at 02:00 daily. Only the most recent 7 snapshots are kept. The oldest backup without dependents is removed once there are more than 7 backups.
Backblaze B2 S3-compatible endpoint, full backups
[DEFAULT]
bucket_name = BUCKET_NAME
region = eu-central-003
access_key = ACCESS_KEY
secret_key = SECRET_KEY
storage_class = STANDARD
endpoint = https://s3.eu-central-003.backblazeb2.com
[pool/filesystem]
cron = 0 2 * * *
max_snapshots = 7
max_incremental_backups_per_full = 6
max_backups = 7
Structure
full backup (f), incremental backup (i)
- f
- f i
- f i i
- f i i i
- f i i i i
- f i i i i i
- f i i i i i i
- f i i i i i f
- f i i i i f i
- f i i i f i i
- f i i f i i i
- f i f i i i i
- f f i i i i i
- f i i i i i i
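The rotation shown above can be modeled with a simple rule: an incremental depends on the backup immediately before it, so a backup is removable only when the next backup in line is a full (or nothing). The sketch below is a hypothetical model of that rule, not zfs_uploader's actual code, but it reproduces the diagram's transitions.

```python
# Hypothetical model of the backup rotation above (not the actual
# implementation). Backups are listed oldest-first: "f" = full,
# "i" = incremental.

def has_dependents(backups, i):
    """An incremental directly after backup i depends on it."""
    return i + 1 < len(backups) and backups[i + 1] == "i"

def prune(backups, max_backups):
    """Drop the oldest backup without dependents until under the limit."""
    backups = list(backups)
    while len(backups) > max_backups:
        idx = next(i for i in range(len(backups))
                   if not has_dependents(backups, i))
        backups.pop(idx)
    return backups

# A new full pushes the count past max_backups=7; the newest incremental
# before it is the oldest backup nothing depends on, so it goes.
print(prune(["f", "i", "i", "i", "i", "i", "i", "f"], 7))
# Once two fulls are adjacent, the older full has no dependents and goes.
print(prune(["f", "f", "i", "i", "i", "i", "i", "i"], 7))
```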
Single full backup
[DEFAULT]
bucket_name = BUCKET_NAME
region = us-east-1
access_key = ACCESS_KEY
secret_key = SECRET_KEY
storage_class = STANDARD
[pool/filesystem]
cron = 0 2 * * *
max_snapshots = 7
max_backups = 7
Filesystem is backed up at 02:00 daily. Only the most recent 7 snapshots are kept. The oldest incremental backup is removed once there are more than 7 backups. The full backup is never removed.
Structure
full backup (f), incremental backup (i)
- f
- f i
- f i i
- f i i i
- f i i i i
- f i i i i i
- f i i i i i i
Only full backups
[DEFAULT]
bucket_name = BUCKET_NAME
region = us-east-1
access_key = ACCESS_KEY
secret_key = SECRET_KEY
storage_class = STANDARD
[pool/filesystem]
cron = 0 2 * * *
max_snapshots = 7
max_incremental_backups_per_full = 0
max_backups = 7
Filesystem is backed up at 02:00 daily. Only the most recent 7 snapshots are kept. The oldest full backup is removed once there are more than 7 backups. No incremental backups are taken.
Structure
full backup (f)
- f
- f f
- f f f
- f f f f
- f f f f f
- f f f f f f
- f f f f f f f
Miscellaneous
Storage class codes
- STANDARD
- REDUCED_REDUNDANCY
- STANDARD_IA
- ONEZONE_IA
- INTELLIGENT_TIERING
- GLACIER
- DEEP_ARCHIVE
- OUTPOSTS
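Since storage_class is a free-form config string, a small validation step against the codes above catches typos before a job runs. A sketch with a hypothetical helper (zfs_uploader may or may not validate this itself):

```python
# Hypothetical validation helper: check a configured storage_class
# against the S3 storage class codes listed above.
S3_STORAGE_CLASSES = {
    "STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA",
    "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE", "OUTPOSTS",
}

def validate_storage_class(value):
    """Normalize and validate a storage class string from the config file."""
    normalized = value.strip().upper()
    if normalized not in S3_STORAGE_CLASSES:
        raise ValueError(f"Unknown S3 storage class: {value!r}")
    return normalized

print(validate_storage_class("standard"))  # -> STANDARD
```

Note that objects stored in the archive classes (GLACIER, DEEP_ARCHIVE) must be restored in S3 before a backup can be downloaded again.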
Release Instructions
- Increment version in __init__.py file
- Update CHANGELOG.md with new version
- Tag release in GitHub when ready. Add changelog items to the release description. The GitHub Actions workflow will automatically build and push the release to PyPI.
File details
Details for the file zfs_uploader-0.9.0.tar.gz.
File metadata
- Download URL: zfs_uploader-0.9.0.tar.gz
- Upload date:
- Size: 18.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d54d28b6a462591070b1a9361821558392885807007c5920e949d434c9b0046b |
| MD5 | ae7063913fec101f71e378a8e3c9c559 |
| BLAKE2b-256 | 55b4b371973c5b0dcf6333f18331f24c618c2450ac37bd90d128a6039f7ee0c6 |
File details
Details for the file zfs_uploader-0.9.0-py3-none-any.whl.
File metadata
- Download URL: zfs_uploader-0.9.0-py3-none-any.whl
- Upload date:
- Size: 16.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.4
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d73560764f9c12225ed2deef65fa9bcaf9353e10a45e4c298b283daa8228fca6 |
| MD5 | a5fb7750cb92b819f266ece575b99683 |
| BLAKE2b-256 | 35d89a859438c4f27b78fc2b208ecbfc8b3e5fd360e519f5dfeedfc602f5ced0 |