Web Extract Data
A Python package for extracting structured data from websites without writing a single HTML selector.
Web Extract Data provides native functions for each endpoint of the InstantAPI.ai Web Scraping API, making it easy to extract data from websites, find links, navigate through pagination, and scrape Google Search results.
Installation
pip install web-extract-data
Quick Start
from web_extract_data import WebExtractClient
# Initialize the client with your InstantAPI.ai key
# Replace %%API_KEY%% with your API key from:
# https://web.instantapi.ai/#pricing-03-254921
client = WebExtractClient("%%API_KEY%%")
# Modify the URL and the fields dictionary to describe the data you want extracted
result = client.scrape(
url="https://www.amazon.com.au/MSI-PRO-MP341CQW-UltraWide-Compatible/dp/B09Y19TRQ2",
fields={
"monitor_name": "< The product name of the monitor. >",
"brand": "< The brand or manufacturer name. >",
"display_size_in_inches": "< Numeric only. >",
"resolution": "< Example format: 1920x1080. >",
"panel_type": "< Type of panel. >",
"refresh_rate_hz": "< Numeric only. >",
"aspect_ratio": "< Example format: 16:9. >",
"ports": "< A comma-delimited list of available ports (e.g., HDMI, DisplayPort, etc.). >",
"features": "< Key selling points or capabilities, comma-delimited (e.g., LED, Full HD, etc.). >",
"price": "< Numeric price (integer or float). >",
"price_currency": "< Price currency (3 character alphabetic ISO 4217). >",
"review_count": "< Total number of customer reviews, numeric only. >",
"average_rating": "< Float or numeric star rating (e.g., 4.3). >",
"review_summary": "< A summary, in 50 words or fewer, of all the written customer feedback. >"
}
)
# Print the extracted data
print(result)
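Depending on the page, numeric fields such as price or review_count may come back as strings rather than numbers (an assumption; the exact response shape can vary by page). A minimal, hypothetical helper to coerce such values after extraction:

```python
def coerce_numeric_fields(result, int_fields=(), float_fields=()):
    """Return a copy of an extracted-fields dict with the named values
    coerced to int/float; values that fail to parse are left unchanged."""
    cleaned = dict(result)
    for name in int_fields:
        try:
            # Strip thousands separators before parsing, e.g. "1,204" -> 1204
            cleaned[name] = int(str(cleaned[name]).replace(",", ""))
        except (KeyError, ValueError):
            pass
    for name in float_fields:
        try:
            cleaned[name] = float(cleaned[name])
        except (KeyError, ValueError, TypeError):
            pass
    return cleaned

# Example with a hand-written dict (not real API output):
sample = {"price": "499.00", "review_count": "1,204", "average_rating": "4.3"}
cleaned = coerce_numeric_fields(
    sample,
    int_fields=("review_count",),
    float_fields=("price", "average_rating"),
)
# cleaned == {"price": 499.0, "review_count": 1204, "average_rating": 4.3}
```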
API Reference
WebExtractClient
The main client class for interacting with the InstantAPI.ai Web Scraping API.
client = WebExtractClient(api_key)
api_key (str): Your InstantAPI.ai API key.
Scrape Endpoint
Extract structured data from any webpage.
result = client.scrape(url, fields)
url (str): The URL of the webpage to scrape.
fields (dict): The fields to extract, mapping each field name to a description of the desired value.
Example:
result = client.scrape(
url="https://www.amazon.com.au/MSI-PRO-MP341CQW-UltraWide-Compatible/dp/B09Y19TRQ2",
fields={
"monitor_name": "< The product name of the monitor. >",
"brand": "< The brand or manufacturer name. >",
"display_size_in_inches": "< Numeric only. >",
"resolution": "< Example format: 1920x1080. >",
"panel_type": "< Type of panel. >",
"refresh_rate_hz": "< Numeric only. >",
"aspect_ratio": "< Example format: 16:9. >",
"ports": "< A comma-delimited list of available ports (e.g., HDMI, DisplayPort, etc.). >",
"features": "< Key selling points or capabilities, comma-delimited (e.g., LED, Full HD, etc.). >",
"price": "< Numeric price (integer or float). >",
"price_currency": "< Price currency (3 character alphabetic ISO 4217). >",
"review_count": "< Total number of customer reviews, numeric only. >",
"average_rating": "< Float or numeric star rating (e.g., 4.3). >",
"review_summary": "< A summary, in 50 words or fewer, of all the written customer feedback. >"
}
)
Links Endpoint
Find all links on a page that match a specific description.
result = client.links(url, description)
url (str): The URL of the webpage to scrape.
description (str): A description of the links to extract.
Example:
result = client.links(
url="https://www.ikea.com/au/en/cat/quilt-cover-sets-10680/?page=3",
description="individual product urls"
)
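A common pattern is to combine the two endpoints: collect item links from a listing page, then scrape each one. The sketch below assumes client.links(...) returns an iterable of URL strings and client.scrape(...) returns a dict; those shapes are assumptions, so adapt the unpacking if the real responses are richer objects:

```python
def scrape_listing(client, listing_url, fields,
                   description="individual product urls", limit=5):
    """Find item links on a listing page, then scrape each linked page.

    Assumes client.links(...) yields URL strings and client.scrape(...)
    returns a dict of extracted fields (both assumed shapes).
    `limit` caps the number of pages scraped per call.
    """
    urls = client.links(url=listing_url, description=description)
    return [client.scrape(url=u, fields=fields) for u in list(urls)[:limit]]
```

For example, scrape_listing(client, "https://www.ikea.com/au/en/cat/quilt-cover-sets-10680/", fields) would scrape the first five product pages found on that category page.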
Next Page Endpoint
Extract "Next Page" links from a paginated web page.
result = client.next(url)
url (str): The URL of the webpage to scrape.
Example:
result = client.next(
url="https://www.ikea.com/au/en/cat/quilt-cover-sets-10680/"
)
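The next endpoint can be looped to walk an entire paginated listing. This sketch assumes client.next(url=...) returns the next page's URL as a string, or something falsy when there is no further page; that return shape is an assumption, since it is not documented here:

```python
def walk_pages(client, start_url, max_pages=10):
    """Yield page URLs by repeatedly following "Next Page" links.

    Assumes client.next(url=...) returns the next page URL (str),
    or a falsy value at the last page -- an assumed response shape.
    Stops after `max_pages` pages or if a URL repeats (loop guard).
    """
    url, seen = start_url, set()
    while url and url not in seen and len(seen) < max_pages:
        yield url
        seen.add(url)
        url = client.next(url=url)
```

Each yielded URL can then be passed to client.links or client.scrape as shown above.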
Search Endpoint
Scrape and extract relevant Google search result URLs.
result = client.search(query, google_domain="www.google.com", page=1)
query (str): The search query.
google_domain (str, optional): The Google domain to search on. Defaults to "www.google.com".
page (int, optional): The page number to scrape. Defaults to 1.
Example:
result = client.search(
query="AVID POWER 20V MAX Lithium Ion Cordless Drill Set",
google_domain="www.google.com",
page=1
)
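Since page is a plain parameter, collecting results across several result pages is a simple loop. This sketch assumes client.search(...) returns a list of result URLs per page (an assumption; adapt if the real response is a richer object), and stops early when a page comes back empty:

```python
def search_all_pages(client, query, max_pages=3):
    """Accumulate Google search results across up to `max_pages` pages.

    Assumes client.search(...) returns a list of result URLs per page
    (an assumed response shape). Stops at the first empty page.
    """
    results = []
    for page in range(1, max_pages + 1):
        batch = client.search(query=query, page=page)
        if not batch:
            break
        results.extend(batch)
    return results
```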
Error Handling
The package will raise exceptions if the API returns an error. You can handle these exceptions with a try-except block:
try:
result = client.scrape(url="https://example.com", fields={"title": "< The title of the page. >"})
print(result)
except Exception as e:
print(f"An error occurred: {e}")
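For transient failures (timeouts, rate limits), wrapping calls in a small retry helper with exponential backoff is often more useful than a bare try/except. A generic, package-agnostic sketch:

```python
import time

def with_retries(call, attempts=3, base_delay=1.0):
    """Run `call` (a zero-argument function), retrying on any exception
    with exponential backoff: base_delay, 2*base_delay, 4*base_delay, ...
    Re-raises the last exception if every attempt fails."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage with a client as in Quick Start:
# result = with_retries(lambda: client.scrape(
#     url="https://example.com",
#     fields={"title": "< The title of the page. >"},
# ))
```

Retrying on every Exception is a blunt choice; in practice you may want to narrow the except clause to the specific errors the package raises.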
Need Help?
Join the Discord Community for real-time help and feedback.
License
Project details
Release history
Download files
Source Distribution
Built Distribution
File details
Details for the file web_extract_data-0.1.0.tar.gz.
File metadata
- Download URL: web_extract_data-0.1.0.tar.gz
- Upload date:
- Size: 8.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e4b9e47deacfd516c969be2e98bc0f47c69be7bbb5a9e2a46249e82ff8665492 |
| MD5 | ac2e06a787cf6680fb59cf99f3527a2e |
| BLAKE2b-256 | 4763381b5c4e2b9a16cd9c5d6cc5722f8f8619b9b1b9adabb8a793cc4bbe9e52 |
File details
Details for the file web_extract_data-0.1.0-py3-none-any.whl.
File metadata
- Download URL: web_extract_data-0.1.0-py3-none-any.whl
- Upload date:
- Size: 9.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.9.6
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 94c43a2e6ca0783300b26e206371b2ff8675e224f09139892893337beafcc10c |
| MD5 | 05d83983c9be9ad3e2509322aeed31bc |
| BLAKE2b-256 | 9942819dfe34534424f09f3f3dd3f1b23be53aa7073445533e61b00a68b43434 |