
Scrapy Save Statistics: Save statistics extension for Scrapy

Project description

Save spider statistics to MongoDB for analytics.

Install

The quick way:

pip install scrapy-save-statistics

Or install from GitHub:

pip install git+git://github.com/light4/scrapy-save-statistics.git@master

Or checkout the source and run:

python setup.py install

settings.py

MongoDB settings for saving statistics. A statistics database is required.

MONGO_HOST = "127.0.0.1"
MONGO_PORT = 27017
MONGO_DB = "myspider"
MONGO_STATISTICS = "statistics"

EXTENSIONS = {
    'scrapy_save_statistics.SaveStatistics': 100,
}
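
To confirm that statistics documents are actually being written, you can query the configured database with pymongo. This is only an illustrative sketch: it assumes the MONGO_* values shown above and that MONGO_STATISTICS names the collection the extension writes to.

from pymongo import MongoClient

# Connect using the same values as the MONGO_* settings above.
client = MongoClient("127.0.0.1", 27017)
db = client["myspider"]

# Inspect the most recent statistics documents.
for doc in db["statistics"].find().sort([("_id", -1)]).limit(5):
    print(doc)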

Spider

The spider must have a statistics attribute whose entries contain spider_url. That information is saved to MongoDB.

import scrapy


class TestSpider(scrapy.Spider):
    name = "test"

    def __init__(self, name=None, **kwargs):
        super(TestSpider, self).__init__(name=name, **kwargs)
        # The extension reads this list and writes its entries to MongoDB.
        self.statistics = []

    def parse(self, response):
        # Placeholder values; in a real spider these would be derived
        # from the page, e.g. an item count and the pagination size.
        expected_crawl_num = 100
        total_page = 10
        crawl_info = {'spider_url': response.url,
                      'expected_crawl_num': expected_crawl_num,
                      'pages': total_page}
        self.statistics.append(crawl_info)
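
For context, a parse callback on a listing page might fill in those fields from the page itself. The sketch below uses hypothetical selectors (div.item, ul.pagination li) purely to show where the numbers could come from.

    def parse(self, response):
        # Hypothetical listing page: count the items and the pagination links.
        items = response.css("div.item")
        total_page = len(response.css("ul.pagination li"))
        crawl_info = {
            'spider_url': response.url,
            'expected_crawl_num': len(items),
            'pages': total_page,
        }
        self.statistics.append(crawl_info)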

