
LogScraper
==========

A generic library for gathering stats from log files by running regexes
on them. Things you can do:

- Create and run any number of regexes on any number of files in parallel
- Aggregate stats by creating named regex groups in your regexes
- Grab archived logs (so long as you tell it where your archives live)
- Grab files from remote boxes
- Print stats to console
- Print regex matches to console
- Search on gzipped files
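The named-group aggregation works roughly like this. The sketch below is stdlib-only and is not log_scraper's own code; the log lines and the ``level`` group name are invented for illustration:

```python
import re
from collections import Counter

# Hypothetical log lines; the pattern's named group ("level") is the
# key that stats get aggregated under.
lines = [
    "2015-07-02 12:00:01 ERROR timeout talking to db",
    "2015-07-02 12:00:02 INFO request served",
    "2015-07-02 12:00:03 ERROR timeout talking to db",
]
pattern = re.compile(r'(?P<level>ERROR|WARN|INFO)')

stats = Counter()
for line in lines:
    match = pattern.search(line)
    if match:
        # Each distinct value of the named group becomes a stat bucket.
        stats[match.group('level')] += 1

print(stats)  # one count per named-group value
```

Running many such patterns over many files in parallel, then merging the per-file counters, gives the aggregated stats the library reports.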

Installation
------------

The easiest manner of installation is to grab the package from the PyPI
repository.

::

    pip install log_scraper

Usage
-----

Base Usage
^^^^^^^^^^

For off-the-cuff usage, you can just create a LogScraper object and tell
it what regexes to run and where to look for files. For example:

::

    from log_scraper.base import LogScraper
    import log_scraper.consts as LSC

    filepath = '/path/to/file'
    filename = 'filename.ext'
    scraper = LogScraper(default_filepath={LSC.DEFAULT_PATH: filepath,
                                           LSC.DEFAULT_FILENAME: filename})
    scraper.add_regex(name='regex1', pattern=r'your_regex_here')

    # To get aggregated stats
    data = scraper.get_log_data()

    # To print all the stats
    scraper.print_total_stats(data)

    # To print each file's individual stats
    scraper.print_stats_per_file(data)

    # To view log lines matching the regex
    scraper.view_regex_matches(scraper.get_regex_matches())

The real power, though, is in creating your own class deriving from
LogScraper that presets the paths and the regexes to run so that anyone
can then use that anywhere to mine data from a process' logs.
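Such a subclass might look like the sketch below. The class name, path, and patterns are invented for illustration; the LogScraper calls mirror the base-usage example above:

```python
from log_scraper.base import LogScraper
import log_scraper.consts as LSC

class MyAppScraper(LogScraper):
    """Hypothetical scraper preconfigured for one app's logs."""

    def __init__(self):
        # Preset the default location so callers don't need to know it.
        super(MyAppScraper, self).__init__(
            default_filepath={LSC.DEFAULT_PATH: '/var/log/myapp',
                              LSC.DEFAULT_FILENAME: 'myapp.log'})
        # Preset the regexes so every caller mines the same stats.
        self.add_regex(name='errors', pattern=r'ERROR')
        self.add_regex(name='timeouts', pattern=r'timed out')

# Anyone can now pull stats without knowing paths or patterns:
# scraper = MyAppScraper()
# data = scraper.get_log_data()
# scraper.print_total_stats(data)
```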

Development
-----------

Dependencies
~~~~~~~~~~~~

- Python 2.7
- `paramiko <http://paramiko-www.readthedocs.org/en/latest/index.html>`_

Testing
~~~~~~~

To test successfully, you must set up a virtual environment. On Unix, in
the root folder for the package, do the following:

::

    python -m virtualenv .
    source ./bin/activate
    ./bin/python setup.py develop

Now you can make any changes you want and then run the unit-tests by
doing:

::

    ./bin/python setup.py test
