Scrapework
Scrapework is a simple and opinionated scraping framework inspired by Scrapy. It's designed for simple tasks and management, letting you focus on the scraping logic rather than boilerplate code.
- No CLI
- No twisted / async
- Deliberately slow and respectful to the websites it scrapes
Getting Started
Installation
Install Scrapework with Poetry:

```shell
poetry add scrapework
```
Quick Start
Flow:
- Fetch: retrieve web pages
- Extract: parse and extract structured data from pages
- Pipeline: transform and export the structured data
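The three stages can be sketched as plain Python (stub functions only, not the real Scrapework API; the fetch step here returns a canned page instead of making an HTTP request):

```python
import re

def fetch(url):
    # Stand-in for an HTTP request: return a canned page for the demo.
    return '<div class="quote"><span class="text">Hello</span></div>'

def extract(html):
    # Parse structured data out of the page (regex only for the demo;
    # Scrapework uses real CSS selectors via parsel).
    return [{"text": m} for m in re.findall(r'<span class="text">(.*?)</span>', html)]

def pipeline(items):
    # Transform/export step: here, just format each item.
    return [f"Quote: {item['text']}" for item in items]

results = pipeline(extract(fetch("http://quotes.toscrape.com")))
print(results)  # ['Quote: Hello']
```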
Spider Configuration
- start_urls: A list of URLs to start scraping from.
- pipelines: transform and export the structured data.
- extractors: comes with various extractors (plain body, smart extractors, markdown).
- middlewares: comes with various middlewares.
Creating a Spider
A Spider is a class that defines how to navigate a website and extract data. Here's how you can create a Spider:
```python
from scrapework.spider import Spider

class MySpider(Spider):
    start_urls = ['http://quotes.toscrape.com']

    def parse(self, response):
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('span small::text').get(),
            }
```
The parse method is where you define your scraping logic. It's called with the HTTP response of the initial URL.
Creating an Extractor
An Extractor is a class that defines how to extract data from a webpage. Here's how you can create an Extractor:
```python
from scrapework.extractors import Extractor

class MyExtractor(Extractor):
    def extract(self, selector):
        return {
            'text': selector.css('span.text::text').get(),
            'author': selector.css('span small::text').get(),
        }
```
The extract method is where you define your extraction logic. It's called with a parsel.Selector object that you can use to extract data from the HTML.
Creating a Pipeline
A Pipeline is a class that defines how to process and store the data. Here's how you can create a Pipeline:
```python
from scrapework.pipelines import ItemPipeline

class MyPipeline(ItemPipeline):
    def process_items(self, items, config):
        for item in items:
            print(f"Quote: {item['text']}, Author: {item['author']}")
```
The process_items method is where you define your processing logic. It's called with the items extracted by the Extractor and a PipelineConfig object.
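As a self-contained sketch (using a stub class rather than the real ItemPipeline base, whose exact interface may differ), the processing step is plain iteration over item dicts:

```python
class MyPipeline:
    # Stub stand-in for scrapework.pipelines.ItemPipeline, for illustration only.
    def process_items(self, items, config=None):
        lines = [f"Quote: {item['text']}, Author: {item['author']}" for item in items]
        for line in lines:
            print(line)
        return lines

items = [{'text': 'Be yourself.', 'author': 'Oscar Wilde'}]
output = MyPipeline().process_items(items)
# output == ['Quote: Be yourself., Author: Oscar Wilde']
```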
Running the Spider
To run the Spider, you need to create an instance of it and call the start_requests method:

```python
spider = MySpider()
spider.start_requests()
```
Advanced Usage
For more advanced usage, you can override other methods in the Spider, Extractor, and Pipeline classes. Check the source code for more details.
Testing
To run the tests, use the following command:

```shell
pytest tests/
```
Contributing
Contributions are welcome! Please read the contributing guidelines first.
License
Scrapework is licensed under the MIT License.