
Nasy Crawler Framework -- never has there been such a pure crawler.

Project description

Table of Contents

Prologue

There has never been such a pure crawler as nacf.

Although I often write crawlers, I don’t like to use huge frameworks such as scrapy; I prefer the simple requests + bs4 combination, or the more general requests-html. However, these are inconvenient for a full crawler: features such as error retrying and parallel crawling have to be handwritten every time. Writing them is not difficult, but rewriting them over and over is tedious. Hence I started nacf (Nasy Crawler Framework), hoping to simplify the error-retrying and parallel parts of writing crawlers.
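The retry boilerplate described above can be sketched as a small decorator. This is only an illustration of the idea using plain Python, not nacf's actual API; the name `retry` and its parameters are assumptions for the example.

```python
import time
from functools import wraps

def retry(times=3, delay=0.5, exceptions=(Exception,)):
    """Retry the wrapped function up to `times` attempts.

    Sleeps `delay` seconds between attempts and re-raises the
    last error if every attempt fails.
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except exceptions as err:
                    last_error = err
                    time.sleep(delay)
            raise last_error
        return wrapper
    return decorator

# Usage: a flaky fetch that only succeeds on the third attempt.
attempts = {"count": 0}

@retry(times=5, delay=0)
def flaky_fetch():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("temporary network error")
    return "<html>ok</html>"
```

Wrapping a request function this way is exactly the kind of repetitive code a framework can absorb.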

Packages

Table 1: Packages
Package Version Description
requests-html 0.9.0 HTML Parsing for Humans.

Development Process

TODO Http Functions

DONE Get

CLOSED: [2018-12-25 Tue 17:36]

NEXT Post
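Parallel crawling, the other piece of boilerplate mentioned in the prologue, can be layered on top of any HTTP function with a thread pool. This is a sketch of the idea, not nacf's actual interface; `crawl_all` and the `fetch` callable are stand-ins invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def crawl_all(urls, fetch, workers=8):
    """Fetch every URL concurrently, preserving input order.

    `fetch` is any callable taking a URL and returning its body;
    a thread pool suits crawling because the work is I/O-bound.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))

# Usage with a dummy fetch function (no real network access):
urls = ["https://example.com/a", "https://example.com/b"]
pages = crawl_all(urls, fetch=lambda url: f"page for {url}")
```

In practice `fetch` would be the framework's GET (or, once implemented, POST) helper, already wrapped with retry logic.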

Epilogue

History

Version 0.1.0

  • Date: <2018-12-23 Sun>
  • Commemorate Version: First Version
    • Basic Functions.

