# Scrapework
Scrapework is a simple and opinionated scraping framework inspired by Scrapy. It's designed for simple tasks and easy management, allowing you to focus on the scraping logic rather than on boilerplate code.
- No CLI
- No Twisted framework
- Designed for in-process usage
## Getting Started

### Installation
First, clone the repository or install it as a dependency:

```shell
poetry add scrapework
```
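Since the package is published on PyPI, it can also be installed with plain pip if you don't use Poetry:

```shell
pip install scrapework
```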
## Quick Start
Flow:

- **Request Handling**: Use the `make_request` method to fetch web pages from `start_urls` with the help of `middlewares`.
- **Data Extraction**: Implement the `extract` method to parse and extract structured data from the fetched pages using `HTMLBodyParser` or other custom logic.
- **Data Processing**: Process and handle the structured data using `handlers` defined in the scraper.
- **Reporting**: Generate reports of the scraping process using `reporters`.
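The four stages above can be sketched in plain Python. The function names below (`fetch`, `extract`, `handle`, `report`) are illustrative stand-ins for the flow, not the scrapework API:

```python
def fetch(url):
    # Request Handling: in scrapework this is make_request plus middlewares.
    # Here we fake a response body for illustration.
    return "<html><body><h1>Example</h1></body></html>"

def extract(body):
    # Data Extraction: pull structured data out of the fetched page.
    start = body.index("<h1>") + len("<h1>")
    end = body.index("</h1>")
    return {"title": body[start:end]}

def handle(item, store):
    # Data Processing: a handler that appends items to an in-memory store.
    store.append(item)

def report(store):
    # Reporting: summarize the run.
    return f"{len(store)} item(s) scraped"

store = []
for url in ["http://example.com"]:
    handle(extract(fetch(url)), store)

print(report(store))  # 1 item(s) scraped
```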
For more details, see Design.
## Scraper Configuration

- `start_urls`: A list of URLs to start scraping from.
- request middlewares to configure the request handling.
- parsers: comes with various extractors (plain body, smart extractors, markdown).
- handlers: comes with various handlers (log, save to file, save to database).
- reporters: generate reports of the scraping process.
## Creating a Spider

A Spider is a class that defines how to navigate a website and extract data. Here's how you can create a Spider:

```python
from scrapework.spider import Spider

class MySpider(Spider):
    start_urls = ['http://quotes.toscrape.com']

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
            }
```
The `parse` method is where you define your scraping logic. It's called with the HTTP response of the initial URL.
## Creating an Extractor

An Extractor is a class that defines how to extract data from a webpage. Here's how you can create an Extractor:

```python
from scrapework.extractors import Extractor

class MyExtractor(Extractor):
    def extract(self, selector):
        return {
            'text': selector.css('span.text::text').get(),
            'author': selector.css('span small::text').get(),
        }
```
The `extract` method is where you define your extraction logic. It's called with a `parsel.Selector` object that you can use to extract data from the HTML.
Creating a Pipeline
A Pipeline is a class that defines how to process and store the data. Here's how you can create a Pipeline:
from scrapework.pipelines import ItemPipeline
class MyPipeline(ItemPipeline):
def process_items(self, items, config):
for item in items:
print(f"Quote: {item['text']}, Author: {item['author']}")
The `process_items` method is where you define your processing logic. It's called with the items extracted by the Extractor and a `PipelineConfig` object.
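The processing step itself is just iteration over plain dicts. A minimal stand-in (a plain function rather than an `ItemPipeline` subclass, with the config argument ignored) shows the shape of the data it receives:

```python
def process_items(items, config=None):
    # Stand-in for MyPipeline.process_items: format each extracted item.
    # In scrapework the second argument would be a PipelineConfig object;
    # it is unused in this sketch.
    return [f"Quote: {item['text']}, Author: {item['author']}" for item in items]

items = [
    {'text': 'To be, or not to be.', 'author': 'Shakespeare'},
    {'text': 'I think, therefore I am.', 'author': 'Descartes'},
]

for line in process_items(items):
    print(line)
```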
## Running the Spider

To run the Spider, you need to create an instance of it and call the `start_requests` method:

```python
spider = MySpider()
spider.start_requests()
```
## Advanced Usage
For more advanced usage, you can override other methods in the Spider, Extractor, and Pipeline classes. Check the source code for more details.
## Testing

To run the tests, use the following command:

```shell
pytest tests/
```
## Contributing
Contributions are welcome! Please read the contributing guidelines first.
## License
Scrapework is licensed under the MIT License.