(ASCII-art banner: Simple Spider)
Overview
A simple web crawling framework.
Getting Started
pip install simple-spiders
You should construct project.py to suit your needs:
from crawler.spider import Spider
from crawler.writter import DataWriter

spider = Spider(
    'https://movie.douban.com/subject/26810318/comments?start=0&limit=20&sort=new_score&status=P')
spider.start_crawl()
Then run it:
python project.py
Press Ctrl-C to stop the crawl.
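start_crawl() is assumed to run until the crawl finishes, so you may want to handle the interrupt yourself. A minimal sketch of project.py, using only the Spider API shown above; the KeyboardInterrupt handling is illustrative and not part of the framework:

from crawler.spider import Spider

spider = Spider(
    'https://movie.douban.com/subject/26810318/comments?start=0&limit=20&sort=new_score&status=P')
try:
    spider.start_crawl()  # assumed to block until the crawl finishes or is interrupted
except KeyboardInterrupt:
    print('Crawl stopped with Ctrl-C.')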
Referenced Libraries
- Using requests as the htmlDownloader
- Using lxml as the default htmlParser (see the downloader/parser sketch after this list)
- Using csv to export data as CSV files
- Using xlwt to export data as Excel (.xls) files
- Using xlsxwriter to export data as Excel (.xlsx) files
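As a rough illustration of how the downloader and parser layers fit together, the sketch below fetches a page with requests and extracts comment text with lxml. The function names, request headers, and XPath are assumptions made for this example, not the actual simple-spiders implementation:

import requests
from lxml import etree

def download_html(url):
    # hypothetical downloader built on requests
    response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}, timeout=10)
    response.raise_for_status()
    return response.text

def parse_comments(html):
    # hypothetical parser built on lxml: collect the short-comment text nodes
    tree = etree.HTML(html)
    return tree.xpath('//span[@class="short"]/text()')

comments = parse_comments(download_html(
    'https://movie.douban.com/subject/26810318/comments?start=0&limit=20&sort=new_score&status=P'))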
Usage
Project structure
- crawler/
  - __init__.py
  - test/
    - htmlDownloder_test
    - htmlParser_test
    - requestManager_test
    - writter_test
    - logger_test
    - spider_test
  - htmlDownloder
  - htmlParser
  - requestManager
  - writter
  - logger
  - spider
- main.py
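The writter module is the piece responsible for exporting crawled rows. As a hedged illustration only (not the module's real code), the CSV and .xlsx export paths could be built directly on the csv and xlsxwriter libraries like this:

import csv
import xlsxwriter

# example rows, made up for illustration
rows = [['user', 'comment'], ['alice', 'Great movie.']]

# CSV export with the standard-library csv module
with open('comments.csv', 'w', newline='', encoding='utf-8') as f:
    csv.writer(f).writerows(rows)

# .xlsx export with xlsxwriter
workbook = xlsxwriter.Workbook('comments.xlsx')
worksheet = workbook.add_worksheet()
for row_index, row in enumerate(rows):
    worksheet.write_row(row_index, 0, row)
workbook.close()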
License
This project is published as open source under the [] license.
Please keep modified versions open source and credit the original author. Thank you for your respect.
If you want to use this project for commercial purposes, please contact me (@pengr) separately to obtain commercial authorization.