my simple crawler
Install
pip3 install simple-crawler
Set the environment variable AUTO_CHARSET=1 to pass raw bytes to beautifulsoup4 and let it detect the charset.
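For reference, a typical install-and-run session might look like this (the script name is hypothetical; only the package name and the AUTO_CHARSET variable come from the text above):

```shell
pip3 install simple-crawler
# Pass raw bytes to beautifulsoup4 so it detects the charset itself:
export AUTO_CHARSET=1
python3 my_crawler.py   # hypothetical script that uses simple_crawler
```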
Classes
URL: defines a URL.
URLExt: object to handle a URL.
Page: the request result of a URL.
  url: of type URL.
  content, text, json: response content properties from the requests library.
  type: the response body type, an enum which allows BYTES, TEXT, HTML, JSON.
  is_html: checks whether the response is HTML according to the response headers' Content-Type.
Crawler: schedules the crawl by calling handler_page() recursively.
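The Content-Type check behind is_html can be sketched as follows; the enum name BodyType and the header-parsing details are illustrative assumptions, not the library's actual code:

```python
from enum import Enum

class BodyType(Enum):
    # Mirrors the four response body types listed above (name is assumed)
    BYTES = "bytes"
    TEXT = "text"
    HTML = "html"
    JSON = "json"

def is_html(headers: dict) -> bool:
    # Look for an HTML media type in the Content-Type response header
    return "text/html" in headers.get("Content-Type", "").lower()

print(is_html({"Content-Type": "text/html; charset=utf-8"}))  # True
print(is_html({"Content-Type": "application/json"}))          # False
```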
Example

from simple_crawler import *

class MyCrawler(Crawler):
    name = 'output.txt'

    def custom_handler_page(self, page):
        print(page.url)
        tags = page.soup.select("#nr1")
        tag = tags and tags[0]
        if tag:
            with open(self.name, 'a') as f:
                f.write(tag.text)
            print(tag.text)

    def filter_url(self, url: URL) -> bool:
        return url.url.startswith("https://xxx.com/xxx")

c = MyCrawler("https://xxx.com/xxx")
c.start()
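The recursive scheduling that the Crawler performs can be approximated by a worklist loop. This is a rough sketch of the idea only, with stand-in helper functions (handle_page, extract_links, filter_url), not the library's implementation:

```python
from collections import deque

def crawl(start_url, handle_page, extract_links, filter_url):
    # Worklist version of the recursive handler_page() scheduling:
    # handle a page, then enqueue every new link that passes the filter.
    seen = {start_url}
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        page = handle_page(url)  # stand-in for fetch + custom handler
        for link in extract_links(page):
            if link not in seen and filter_url(link):
                seen.add(link)
                queue.append(link)
    return seen
```

Using dictionaries as a fake site, crawl("a", lambda u: u, lambda p: links[p], lambda u: True) visits each page exactly once, which is what the seen set guarantees.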
TODO
- Speed up using async or threading
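Until that lands, page fetches can already be parallelized by hand with a thread pool; a minimal sketch, where fetch is a placeholder and not part of simple_crawler:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Placeholder for a real HTTP request (e.g. requests.get(url).text);
    # it just echoes the URL so the example stays self-contained.
    return url

urls = ["https://xxx.com/xxx/1", "https://xxx.com/xxx/2"]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves input order even though the calls run concurrently
    results = list(pool.map(fetch, urls))
print(results)
```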
Hashes for simple_crawler-0.2-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 879ad5a13bc34a2de0a5eaa1c7eb9559b0eebfee1e6f53bde694c773265919b4
MD5 | d3d503511340371386be47c4150232ff
BLAKE2b-256 | bd695573ee31e2c0b066ada6a9c9e9c3ff9f359ae8742f4e1fc4c654881d8849