This recipe backs up Plone data and sends the backups to Amazon S3.
Project description
Requirements
Plone
- Plone 4.0 (tested with 4.0.4 on Mac OS X 10.6)
- SSL support in Python (if your backup destination is Amazon S3)
Amazon Web Services account (if your backup destination is Amazon S3)
- AWS credentials (AWS access key / AWS secret key)
- S3 root bucket name
Information
Code repository: https://bitbucket.org/cmscom/c2.recipe.bkups3
Questions and comments to terada@cmscom.jp
Report bugs at https://bitbucket.org/cmscom/c2.recipe.bkups3/issues
Based on http://pypi.python.org/pypi/collective.recipe.backup
Note
If you don’t have a repozo command in the bin folder, you need to add the following lines to buildout.cfg. They create the bin/repozo script which this recipe uses.
buildout.cfg
[repozo]
recipe = zc.recipe.egg
eggs = ZODB3
scripts = repozo
Simple usage
Modify buildout.cfg
parts =
    ...
    bkups3

[bkups3]
recipe = c2.recipe.bkups3
use_s3 = true
aws_id = xxxxxxxxxxxxx
aws_key = xxxxxxxxxxxxxxxxxxxxxxxxxx
bucket_name = xxxxxxxxxx
bucket_sub_folder = mysitename
sync_s3_filesfolder = true
blob_store_count = 7
Run the buildout
bin/buildout -N
Now you can use the backup script
bin/bkups3
You will see filestorage backups in var/backups, blobstorage backups in var/blobbackups, and both in the Amazon S3 bucket you specified.
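Conceptually, the S3 step is a one-way sync: every file in the local backup folders that is missing from the bucket gets uploaded under the configured sub folder. A minimal stand-alone sketch of that decision logic (the function name, the use of plain sets, and the "mysitename/" prefix are illustrative assumptions, not the recipe's actual internals):

```python
from pathlib import Path

def files_to_upload(backup_dir, remote_keys, prefix="mysitename/"):
    """Return local backup files not yet present in the S3 bucket.

    backup_dir  -- local folder such as var/backups or var/blobbackups
    remote_keys -- set of key names already stored under the bucket prefix
    """
    pending = []
    for path in sorted(Path(backup_dir).iterdir()):
        if not path.is_file():
            continue
        key = prefix + path.name
        if key not in remote_keys:
            # Not in the bucket yet, so it needs to be sent.
            pending.append(path)
    return pending
```

With sync_s3_filesfolder = true, presumably only the missing files are transferred on each run rather than the whole folder.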
Detailed Documentation
Supported options
The recipe supports the following options:
- blob_bk_dir_name
  The blob backup directory name. default: blobbackups
- use_s3
  default: false. Set to true to send backups to S3, false to keep backups local only.
- aws_id
  <aws access key>
- aws_key
  <aws secret key>
- bucket_name
  <S3 bucket name> Set a bucket name that is unique across Amazon S3.
- bucket_sub_folder
  Optional: sub folder in the S3 bucket
- sync_s3_filesfolder
  default: true
- blob_store_count
  default: 1. The number of blob backups to keep.
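The blob_store_count option caps how many blob backups survive a run: sort the backups newest-first and drop everything beyond the first N. A sketch of that pruning rule (the function name and layout are hypothetical, not the recipe's code):

```python
import os

def backups_to_prune(blob_bk_dir, keep=7):
    """Return blob backup paths older than the newest `keep` entries."""
    entries = [os.path.join(blob_bk_dir, name)
               for name in os.listdir(blob_bk_dir)]
    entries = [p for p in entries if os.path.isfile(p)]
    # Newest first, by modification time.
    entries.sort(key=os.path.getmtime, reverse=True)
    # Everything past the first `keep` entries is a candidate for deletion.
    return entries[keep:]
```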
We’ll use all options:
>>> write('buildout.cfg',
... """
... [buildout]
... parts = bkups3
...
... [bkups3]
... recipe = c2.recipe.bkups3
... blob_bk_dir_name = blobbackups
... use_s3 = true # Using S3 -- true, Not use S3 -- false
... aws_id = xxxxxxxxxxxx
... aws_key = xxxxxxxxxxxxxxxxxxxxxxxxx
... bucket_name = xxxxxxxx
... bucket_sub_folder = mysitename
... sync_s3_filesfolder = true
... blob_store_count = 7 # Stored 7 times
... """)
>>> print system(buildout) # doctest:+ELLIPSIS
Installing backuptos3.
backup: Created /sample-buildout/var/backups/blobstorage
Generated script '/sample-buildout/bin/bkups3'.
Example usage
Just to isolate some test differences, we run an empty buildout once:
>>> ignore = system(buildout)
We’ll start by creating a buildout that uses the recipe:
>>> write('buildout.cfg',
... """
... [buildout]
... parts = bkups3
...
... [bkups3]
... recipe = c2.recipe.bkups3
... use_s3 = true
... """)
Running the buildout adds a bkups3 script to the bin/ directory and, by default, creates the var/blobbackups directory:
>>> print system(buildout) # doctest:+ELLIPSIS
Installing backuptos3.
backup: Created /sample-buildout/var/backups/blobstorage
Generated script '/sample-buildout/bin/bkups3'.
<BLANKLINE>
>>> ls('var')
d blobbackups
>>> ls('bin')
- bkups3
- buildout
Backup
Calling bin/bkups3 results in a normal repozo backup and a blobstorage backup, and stores both to Amazon S3. We put in place a mock repozo script that prints the options it is passed (and make it executable). It is horridly unix-specific at the moment.
>>> import sys
>>> write('bin', 'repozo',
...     "#!%s\nimport sys\nprint ' '.join(sys.argv[1:])" % sys.executable)
>>> #write('bin', 'repozo', "#!/bin/sh\necho $*")
>>> dontcare = system('chmod u+x bin/repozo')
>>> write('bin', 'backup',
...     "#!%s\nimport sys\nprint ' '.join(sys.argv[1:])" % sys.executable)
>>> #write('bin', 'backup', "#!/bin/sh\necho $*")
>>> dontcare = system('chmod u+x bin/backup')
By default, filestorage backups are placed in var/backups:
>>> print system('bin/bkups3')
--backup -f /sample-buildout/var/filestorage/Data.fs -r /sample-buildout/var/backups --gzip
INFO: Backing up database file: ...
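The mock just echoes its arguments, so the output above shows the exact repozo invocation used for a full gzipped backup. Assembling that argument list from the buildout paths can be sketched as follows (an illustration only; in practice the real recipe delegates this to collective.recipe.backup):

```python
def repozo_backup_args(datafs, backup_dir, gzip=True):
    """Build the argument list for a full repozo backup, matching the
    doctest output: --backup -f <Data.fs> -r <backup dir> [--gzip]."""
    args = ["--backup", "-f", datafs, "-r", backup_dir]
    if gzip:
        # Compress each backup file to save space locally and on S3.
        args.append("--gzip")
    return args
```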
Contributors
Manabu TERADA (terapyon), Author
Change history
1.0 (2011-12-02)
No changes compared to RC2 [terapyon]
1.0RC2 (2011-05-06)
Bug fix for blob_files local database name [terapyon]
1.0RC1 (2011-04-24)
Checking update blobstorage [terapyon]
1.0b4 (2011-04-18)
Fixed packaging mistake [terapyon]
1.0b3 (2011-04-18)
Fixed packaging mistake [terapyon]
1.0b2 (2011-04-18)
Variable name changed from blob_store_len to blob_store_count [terapyon]
Supported running without blobstorage, for Plone 3.x (but it can’t run on Plone 3.x yet) [terapyon]
Setting sub folder in bucket [terapyon]
Added a non-sync mode for S3 files [terapyon]
1.0b1 (2011-04-14)
Backup filestorage (using bin/backup from collective.recipe.backup) [terapyon]
Backup blobstorage [terapyon]
Sending backups to an Amazon S3 bucket [terapyon]
1.0a1 (Unreleased)
Created recipe with ZopeSkel [Manabu TERADA(@terapyon)]