Push logs to S3
Get files from a directory by mask and push them to S3.
pip install s3logs
where config.conf has the following structure:
[S3]
access_key = <S3_KEY>
secret_key = <S3_SECRET_KEY>
host = <s3.example.com>
bucket = <bucket_name>
chunk_size = <bytes, default=52428800>

[logs]
suffix = .0.gz
key_suffix = .gz
directory = /var/log/nginx/
filename = <filename, default=(yesterday as yyyy-mm-dd)>

[map]
example.com-access.log = example/access
example.com-error.log = example/error
mysite.me.access.log = mysite/access
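As a sketch, such a config can be read with Python's standard configparser (this is an illustration of the format, not the package's actual parsing code; the key and bucket values are placeholders):

```python
import configparser

# Hypothetical sketch: parse the [S3], [logs] and [map] sections
# of a config.conf with the standard-library configparser.
config = configparser.ConfigParser()
config.read_string("""
[S3]
access_key = KEY
secret_key = SECRET
host = s3.example.com
bucket = my-logs
chunk_size = 52428800

[logs]
suffix = .0.gz
key_suffix = .gz
directory = /var/log/nginx/

[map]
example.com-access.log = example/access
example.com-error.log = example/error
mysite.me.access.log = mysite/access
""")

suffix = config["logs"]["suffix"]    # ".0.gz"
mapping = dict(config["map"])        # log name -> S3 key prefix
print(suffix, mapping["example.com-access.log"])
```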
When run with this config, the script takes all files in the directory /var/log/nginx/, keeps only those ending with .0.gz, and uploads them to S3 according to the map.
For example, suppose /var/log/nginx currently contains:
example.com-access.log
example.com-access.log.0.gz
example.com-access.log.1.gz
example.com-error.log
example.com-error.log.0.gz
example.com-error.log.1.gz
mysite.me.access.log
mysite.me.access.log.0.gz
mysite.me.error.log
mysite.me.error.log.0.gz
So, if today is 9 December 2015 and your hostname is node1, your S3 bucket <bucket_name> will contain these keys:
node1/example/access/2015-12-09.gz
node1/example/error/2015-12-09.gz
node1/mysite/access/2015-12-09.gz
Because we have not explained how to map mysite.me.error.log.0.gz, it is skipped.
If config.conf contained the option filename=newfile, the keys in S3 would look like node1/example/access/newfile.gz instead.
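The mapping behaviour described above can be sketched roughly like this (a hypothetical reimplementation for illustration, not the package's actual code; the suffixes and map come from the example config, and s3_key is an invented helper name):

```python
from datetime import date, timedelta

# Assumptions taken from the example config above.
suffix = ".0.gz"       # which rotated files to pick up
key_suffix = ".gz"     # extension of the resulting S3 key
mapping = {
    "example.com-access.log": "example/access",
    "example.com-error.log": "example/error",
    "mysite.me.access.log": "mysite/access",
}

def s3_key(local_file, hostname, filename=None):
    """Return the S3 key for a rotated log file, or None if it is skipped."""
    if not local_file.endswith(suffix):
        return None                            # wrong mask: ignored
    base = local_file[:-len(suffix)]           # strip ".0.gz" -> log name
    prefix = mapping.get(base)
    if prefix is None:
        return None                            # no [map] entry: skipped
    if filename is None:                       # default: yesterday's date
        filename = (date.today() - timedelta(days=1)).isoformat()
    return f"{hostname}/{prefix}/{filename}{key_suffix}"

print(s3_key("example.com-access.log.0.gz", "node1", filename="2015-12-09"))
# -> node1/example/access/2015-12-09.gz
print(s3_key("mysite.me.error.log.0.gz", "node1"))
# -> None (no map entry, so the file is skipped)
```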
|Filename, size|File type|Python version|
|---|---|---|
|s3logs-1.2.tar.gz (3.3 kB)|Source|None|