
Project description

Small framework to crawl misc data sources and process records.

Installation guide

pip install monkey.crawler

User guide

Crawler attributes:
- source_name: the name that identifies the data source
- handler: the handler that will process every record
- offset: the number of records the crawler will skip before it starts processing
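To illustrate how these three attributes fit together, here is a minimal sketch of a crawler. The class and method names below are assumptions for illustration only, not the actual monkey.crawler API:

```python
# Sketch only -- names are hypothetical, not the monkey.crawler API.
class Crawler:
    def __init__(self, source_name, handler, offset=0):
        self.source_name = source_name  # identifies the data source
        self.handler = handler          # callable applied to every record
        self.offset = offset            # records skipped before processing starts

    def crawl(self, records):
        # Skip the first `offset` records, then hand each one to the handler.
        for record in records[self.offset:]:
            self.handler(record)

processed = []
crawler = Crawler("demo_source", processed.append, offset=2)
crawler.crawl([{"id": i} for i in range(5)])
# processed now holds the records with id 2, 3 and 4
```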

CSV Crawler configuration keys:
- _SOURCE_FILE_KEY = 'source_file'
- _SOURCE_FILE_ENCODING_KEY = 'source_encoding'
- _SOURCE_DELIMITER_KEY = 'source_delimiter'
- _SOURCE_QUOTE_CHAR_KEY = 'source_quote_char'
- _COLUMN_MAP_KEY = 'column_map'
- _COLUMN_MAP_DELIMITER_KEY = 'column_map_delimiter'
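The delimiter and quote-char settings map directly onto the standard library's csv dialect parameters. The snippet below is a hedged sketch of how such a configuration might be consumed; the config dict and in-memory CSV are illustrative assumptions, not code from the library:

```python
# Hypothetical configuration using the key names listed above.
import csv
import io

config = {
    'source_file': 'data.csv',    # _SOURCE_FILE_KEY (not opened here)
    'source_encoding': 'utf-8',   # _SOURCE_FILE_ENCODING_KEY
    'source_delimiter': ';',      # _SOURCE_DELIMITER_KEY
    'source_quote_char': '"',     # _SOURCE_QUOTE_CHAR_KEY
}

# A minimal reader honoring those settings, using the stdlib csv module
# on an in-memory sample instead of a real source file:
raw = io.StringIO('name;age\n"Doe, Jane";34\n')
reader = csv.DictReader(
    raw,
    delimiter=config['source_delimiter'],
    quotechar=config['source_quote_char'],
)
rows = list(reader)
# rows == [{'name': 'Doe, Jane', 'age': '34'}]
```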


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

monkey.crawler-1.0.0.dev6.tar.gz (15.0 kB)

Built Distribution

monkey.crawler-1.0.0.dev6-py3-none-any.whl (18.9 kB)
