A universal solution for web crawling lists
Introduction
You can use Crawlist to crawl websites that contain lists; with a few simple configuration steps you can collect all of the list data.
For special websites that cannot be crawled out of the box, you can also write a custom configuration for that website.
Installing
You can use pip or pip3 to install crawlist:

pip install crawlist

or

pip3 install crawlist
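To verify the installation, you can import the package in a Python interpreter. This is only a minimal sanity check that the package is importable; it assumes nothing beyond a successful install.

import crawlist as cl  # should import without errors after installation
print(cl)              # prints the module object, confirming it was found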
Quick start
This is a static website demo. The site does not use JavaScript to load its data.
import crawlist as cl

if __name__ == '__main__':
    # Initialize a pager to implement page flipping
    pager = cl.StaticRedirectPager(uri="https://www.douban.com/doulist/893264/?start=0&sort=seq&playable=0&sub_type=",
                                   uri_split="https://www.douban.com/doulist/893264/?start=%v&sort=seq&playable=0&sub_type=",
                                   start=0,
                                   offset=25)
    # Initialize a selector to select the list elements
    selector = cl.CssSelector(pattern=".doulist-item")
    # Initialize an analyzer to link the pager and the selector
    analyzer = cl.AnalyzerPrettify(pager, selector)
    res = []
    limit = 100
    # Iterate over at most `limit` results from the analyzer
    for tr in analyzer(limit):
        print(tr)
        res.append(tr)
    # If the site has fewer items than the limit, the number of results will be less than the limit
    print(len(res))
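Each item the analyzer yields is one matched list element. Assuming the yielded values are HTML fragments (as the AnalyzerPrettify name suggests), you can post-process them with an HTML parser such as BeautifulSoup. This is a minimal sketch; extract_title is a hypothetical helper, and the exact markup depends on the target page.

from bs4 import BeautifulSoup

def extract_title(fragment: str) -> str:
    # Hypothetical helper: pull the first link's text out of one list-item fragment
    soup = BeautifulSoup(fragment, "html.parser")
    link = soup.find("a")
    return link.get_text(strip=True) if link else ""

# Example usage with the `res` list collected above:
# titles = [extract_title(tr) for tr in res]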
This is a dynamic website demo. The site uses JavaScript to load its data, so we need a Selenium webdriver to render the page.
import crawlist as cl

if __name__ == '__main__':
    # Initialize a pager to implement page flipping (scrolling, in this case)
    pager = cl.DynamicScrollPager(uri="https://ec.ltn.com.tw/list/international")
    # Initialize a selector to select the list elements
    selector = cl.CssSelector(pattern="#ec > div.content > section > div.whitecon.boxTitle.boxText > ul > li")
    # Initialize an analyzer to link the pager and the selector
    analyzer = cl.AnalyzerPrettify(pager=pager, selector=selector)
    res = []
    # Iterate over at most 100 results from the analyzer
    for tr in analyzer(100):
        print(tr)
        res.append(tr)
    print(len(res))
    # After completion, close the webdriver; otherwise it will keep occupying memory
    pager.webdriver.quit()
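Because the webdriver must be closed even if the crawl raises an exception part-way through, wrapping the loop in try/finally is a safer pattern. A minimal sketch, reusing the same pager, selector, and analyzer as in the demo above:

import crawlist as cl

if __name__ == '__main__':
    pager = cl.DynamicScrollPager(uri="https://ec.ltn.com.tw/list/international")
    selector = cl.CssSelector(pattern="#ec > div.content > section > div.whitecon.boxTitle.boxText > ul > li")
    analyzer = cl.AnalyzerPrettify(pager=pager, selector=selector)
    res = []
    try:
        for tr in analyzer(100):
            res.append(tr)
    finally:
        # Always release the browser, even if the crawl fails part-way
        pager.webdriver.quit()
    print(len(res))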
Documentation
If you are interested and would like to see more detailed documentation, please follow the project's documentation link.
Contributing
Please submit pull requests to the develop branch.