DataService
Lightweight async data gathering for Python.
DataService is a lightweight web scraping and general purpose data gathering library for Python.
Designed for simplicity, it’s built upon common web scraping and data gathering patterns.
No complex API to learn, just standard Python idioms.
Dual synchronous and asynchronous support.
Installation
Please note that DataService requires Python 3.11 or higher.
You can install DataService via pip:

```shell
pip install python-dataservice
```

You can also install the optional playwright dependency to use the PlaywrightClient:

```shell
pip install python-dataservice[playwright]
```

To install the Playwright browser binaries, run:

```shell
python -m playwright install
```

or simply:

```shell
playwright install
```
How to use DataService
To start, create a DataService instance with an Iterable of Request objects. This gives you an Iterator of data objects that you can iterate over or convert to a list, a tuple, a pd.DataFrame, or any other data structure of your choice.
```python
from dataservice import DataService, HttpXClient, Request

start_requests = [
    Request(
        url="https://books.toscrape.com/index.html",
        callback=parse_books_page,
        client=HttpXClient(),
    )
]
data_service = DataService(start_requests)
data = tuple(data_service)
```
A Request is a Pydantic model that includes the URL to fetch, a reference to the client callable, and a callback function for parsing the Response object.
The client can be any async Python callable that accepts a Request object and returns a Response object. DataService provides an HttpXClient class by default, which is based on the httpx library, but you are free to use your own custom async client.
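Because the client is just an async callable that maps a Request to a Response, you can swap in your own. Below is a minimal, self-contained sketch of that contract; `FakeRequest`, `FakeResponse`, and `canned_client` are illustrative stand-ins (the real `Request` and `Response` are Pydantic models with more fields), and the stub returns canned HTML rather than performing a real HTTP call:

```python
import asyncio
from dataclasses import dataclass

# Illustrative stand-ins for DataService's Request/Response models.
@dataclass
class FakeRequest:
    url: str

@dataclass
class FakeResponse:
    request: FakeRequest
    text: str

async def canned_client(request: FakeRequest) -> FakeResponse:
    # A real custom client would make an async HTTP call here
    # (e.g. with aiohttp); this stub returns canned HTML so the
    # sketch stays offline and self-contained.
    await asyncio.sleep(0)
    return FakeResponse(request=request, text="<html><title>stub</title></html>")

response = asyncio.run(canned_client(FakeRequest(url="https://example.com")))
```

Any callable with this shape, plugged into `Request(client=...)`, would follow the same pattern.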
The callback function processes a Response object and returns either data or additional Request objects.
In this trivial example we are requesting the Books to Scrape homepage and parsing the number of books on the page.
Example parse_books_page function:
```python
from dataservice import Response

def parse_books_page(response: Response):
    articles = response.html.find_all("article", {"class": "product_pod"})
    return {
        "url": response.url,
        "title": response.html.title.get_text(strip=True),
        "articles": len(articles),
    }
```
This function takes a Response object, whose html attribute is a BeautifulSoup object of the fetched HTML content. It parses the page and returns a data dict.
The callback function can return or yield either data (dict or pydantic.BaseModel) or more Request objects.
If you have used Scrapy before, you will find this pattern familiar.
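To illustrate the mixed yield pattern, here is a hedged sketch using stand-in dataclasses in place of the library's real Pydantic models (`FakeResponse`, `FakeRequest`, and their fields are assumptions for illustration): a callback that yields one data dict, then a follow-up request per discovered link.

```python
from dataclasses import dataclass
from typing import Callable, Iterator, Optional

# Illustrative stand-ins for the library's models; field names are assumptions.
@dataclass
class FakeResponse:
    url: str
    links: list

@dataclass
class FakeRequest:
    url: str
    callback: Optional[Callable] = None

def parse_listing(response: FakeResponse) -> Iterator:
    # Yield extracted data first...
    yield {"url": response.url, "n_links": len(response.links)}
    # ...then yield follow-up requests for each discovered link,
    # which the framework would schedule for fetching in turn.
    for link in response.links:
        yield FakeRequest(url=link, callback=parse_listing)

items = list(parse_listing(
    FakeResponse(url="https://example.com", links=["https://example.com/page-2"])
))
```

This is the same generator-of-data-and-requests idiom Scrapy users know from `parse` methods.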
For more examples and advanced usage, check out the examples section.
For a detailed API reference, check out the API section.