Python package for scraping recipes from all over the internet
Project description
A simple scraping tool for recipe webpages.
Netiquette
If you’re using this library to collect large numbers of recipes from the web, please use the software responsibly and try to avoid creating high volumes of network traffic.
Python’s standard library provides a robots.txt parser that may be helpful to automatically follow common instructions specified by websites for web crawlers.
Another parser option – particularly if you find that many web requests from urllib.robotparser are blocked – is the robotexclusionrulesparser library.
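For example, here is a minimal sketch of such a check using urllib.robotparser ("MyRecipeBot" is a placeholder user agent, and the URLs are examples only):

import urllib.robotparser

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://www.allrecipes.com/robots.txt")
robots.read()  # fetch and parse the site's robots.txt

url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
if robots.can_fetch("MyRecipeBot", url):  # "MyRecipeBot" is a placeholder user agent
    print("robots.txt allows fetching this page")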
Getting Started
Start by using Python’s built-in package installer, pip, to install the library:
python -m pip install recipe-scrapers
This should produce output about the installation process, with the final line reading: Successfully installed recipe-scrapers-<version-number>.
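If you want to double-check what was installed, pip can report the package's version and location:

python -m pip show recipe-scrapers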
To learn what the library can do, you can open a Python interpreter session, and then begin typing – and/or modifying – the statements below (on the lines containing the >>> prompt):
Python 3.12.5 (main, ...) [GCC ...] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> from recipe_scrapers import scrape_html
>>> url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
>>> name = input('What is your name, burger seeker?\n')
>>> html = requests.get(url, headers={"User-Agent": f"Burger Seeker {name}"}).content
>>> scraper = scrape_html(html, org_url=url)
>>> help(scraper)
Some Python HTTP clients that you can use to retrieve HTML include requests, httpx, and the urllib.request module included in Python’s standard library. Please refer to their documentation to find out what options (timeout configuration, proxy support, etc.) are available.
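As a minimal sketch (the 10-second timeout is an arbitrary example value, not a recommendation of this library), a fetch with requests might look like:

import requests

url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
response = requests.get(url, headers={"User-Agent": "Burger Seeker"}, timeout=10)  # timeout in seconds
html = response.content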
Scrapers available for:
(*) offline saved files only
Contribute
If you spot a design change (or something else) that stops the scraper from working for a given site, please open an issue as soon as possible.
If you are a programmer, PRs with fixes are warmly welcomed and acknowledged with a virtual beer. You can find documentation on how to develop scrapers here.
If you want a scraper for a new site added:
Open an Issue providing the site name, as well as a link to a recipe from it.
If you are a developer and want to code the scraper on your own:
If a Schema is available on the site, you can proceed like this.
Otherwise, scrape the HTML, like this.
Generating a new scraper class:
python generate.py <ClassName> <URL>
ClassName: The name of the new scraper class.
URL: The URL of an example recipe from the target site. The content will be stored in test_data to be used with the test class.
You can find a more detailed guide here.
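For example, a hypothetical invocation (the class name and URL below are made-up placeholders) might look like:

python generate.py ExampleKitchen https://www.examplekitchen.com/recipes/tomato-soup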
For Devs / Contribute
Assuming you have Python 3.8 or newer installed, navigate to the directory where you want this project to live and run the following:
git clone git@github.com:hhursev/recipe-scrapers.git &&
cd recipe-scrapers &&
python -m venv .venv &&
source .venv/bin/activate &&
python -m pip install --upgrade pip &&
pip install -r requirements-dev.txt &&
pip install pre-commit &&
pre-commit install &&
python -m unittest
To run a single unittest for a newly developed scraper:
python -m unittest -k <test_file_name>
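For example, if the new scraper's tests live in a file named test_allrecipes.py (an assumed name following the repository's test naming pattern), that could be:

python -m unittest -k test_allrecipes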
FAQ
What if the recipe site I want to extract information from is not listed above?
You can give it a try with the wild_mode option!
If a Recipe Schema is available on the site, it will work just fine.
import requests
from recipe_scrapers import scrape_html

url = 'https://www.feastingathome.com/tomato-risotto/'
name = input('What is your name, risotto sampler?\n')
html = requests.get(url, headers={"User-Agent": f"Risotto Sampler {name}"}).content
scraper = scrape_html(html, org_url=url, wild_mode=True)
scraper.host()
scraper.title()
scraper.total_time()
scraper.image()
scraper.ingredients()
scraper.ingredient_groups()
scraper.instructions()
scraper.instructions_list()
scraper.yields()
scraper.to_json()
scraper.links()
scraper.nutrients() # not always available
scraper.canonical_url() # not always available
scraper.equipment() # not always available
scraper.cooking_method() # not always available
scraper.keywords() # not always available
scraper.dietary_restrictions() # not always available
Notes:
scraper.links() returns a list of dictionaries containing all of the <a> tag attributes. The attribute names are the dictionary keys.
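For illustration only, one entry might be shaped like this (the attribute names and values below are invented and will vary from page to page):

scraper.links()
# hypothetical shape of the return value:
# [{'href': '/recipes/', 'class': ['nav-link']}, ...]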
How do I know if a website has a Recipe Schema?
Run in python shell:
Python 3.12.5 (main, ...) [GCC ...] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from recipe_scrapers import scrape_html
>>> scraper = scrape_html(html=None, org_url='<url of a recipe from the site>', online=True, wild_mode=True)
>>> # if no error is raised, there's a schema available:
>>> scraper.title()
>>> scraper.instructions() # etc.
Special thanks to:
All the contributors who helped improve the package. You are awesome!
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
File details
Details for the file recipe_scrapers-15.1.0.tar.gz.
File metadata
- Download URL: recipe_scrapers-15.1.0.tar.gz
- Upload date:
- Size: 115.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.0 CPython/3.12.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | 268915941fe881e8bbeee72bde78d516808db57e1a9cc54001f332e3e18c6111
MD5 | 326472b1fb3c11b5b1f93698bc57dd1b
BLAKE2b-256 | d676c2d670fe9fe053b65f2a8f996e1b6336caae8eea5c4a1568f4fa049c7286
File details
Details for the file recipe_scrapers-15.1.0-py3-none-any.whl.
File metadata
- Download URL: recipe_scrapers-15.1.0-py3-none-any.whl
- Upload date:
- Size: 214.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.0 CPython/3.12.5
File hashes
Algorithm | Hash digest
---|---
SHA256 | f3fc6568d3e7562c049cc358e0a6fb1b349a783206279cbaa4ee16d6ae1903ea
MD5 | 12170106de2109803d8cb548f4738a11
BLAKE2b-256 | 62e5b53c701652c9e10ac1c813839c481f25b0e7835a948db381f4b8f7a2bd1f