A Python package for scraping recipes from all over the internet
Project description
A simple web scraping tool for recipe sites.
pip install recipe-scrapers
then:
import requests
from recipe_scrapers import scrape_html
# give the url as a string; it can be a url from any site listed below
url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
html = requests.get(url).content
scraper = scrape_html(html, org_url=url)
# Q: What if the recipe site I want to extract information from is not listed below?
# A: You can give it a try with the wild_mode option! If there is a Schema/Recipe available, it will work just fine.
url = 'https://www.feastingathome.com/tomato-risotto/'
html = requests.get(url).content
scraper = scrape_html(html, org_url=url, wild_mode=True)
scraper.title()
scraper.total_time()
scraper.yields()
scraper.ingredients()
scraper.instructions() # or alternatively for results as a Python list: scraper.instructions_list()
scraper.image()
scraper.host()
scraper.to_json()
scraper.links()
scraper.nutrients() # not always available
scraper.canonical_url() # not always available
scraper.equipment() # not always available
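Since the last three fields are not always available, it can help to guard them; here is a minimal sketch (the exact exception type raised varies by scraper and site, so a broad catch is used purely for illustration):

# nutrients()/canonical_url()/equipment() may raise on sites that lack the data
try:
    nutrients = scraper.nutrients()
except Exception:  # the specific exception type depends on the scraper/site
    nutrients = None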
Notes:
scraper.links() returns a list of dictionaries containing all of the <a> tag attributes. The attribute names are the dictionary keys.
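For example, to collect just the href values from that structure (a minimal sketch based on the note above):

# each entry is a dict of <a> attributes, so href may be absent on some tags
hrefs = [link["href"] for link in scraper.links() if "href" in link]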
Migrating from v14:
The parameters to the scrape_html function were adjusted in v15 with the intention of making code that uses them more readable. However, these are breaking changes: some applications will need to adjust their code to upgrade successfully.
Here are some use-cases that we’ve anticipated and can provide migration paths for:
Attempting to scrape from a website that has no specific scraper implemented
## Legacy v14
html, url = ..., ...
if not scraper_exists_for(url):
    scraper = scrape_html(html, url, wild_mode=True)
## Migrated v15
html, url = ..., ...
scraper = scrape_html(html, url, offline=True, supported_only=False)
Scraping a recipe URL on-demand
Note: these examples depend on the requests package; use pip install recipe-scrapers[online] to ensure it is installed as an extra dependency with v15.
## Legacy v14
url = ...
scraper = scrape_me(url)
## Migrated v15
url = ...
scraper = scrape_html(html=None, org_url=url, online=True)
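Putting the v15 pieces together, here is a minimal on-demand sketch (it assumes the [online] extra from the note above is installed; the URL is the example from earlier):

from recipe_scrapers import scrape_html

url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
# online=True lets the library fetch the page itself, so no html is passed in
scraper = scrape_html(html=None, org_url=url, online=True)
print(scraper.title())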
Scrapers available for:
(*) offline saved files only. Page requires login
Contribute
If you spot a design change (or something else) that makes a scraper stop working for a given site, please file an issue as soon as possible.
If you are a programmer, PRs with fixes are warmly welcomed and acknowledged with a virtual beer. You can find documentation on how to develop scrapers here.
If you want a scraper for a new site added:
Open an Issue providing the site name, as well as a link to a recipe from it.
If you are a developer and want to code the scraper yourself:
If Schema is available on the site, you can proceed like this.
Otherwise, scrape the HTML directly, like this.
Generating a new scraper class:
python generate.py <ClassName> <URL>
ClassName: The name of the new scraper class.
URL: The URL of an example recipe from the target site. The content will be stored in test_data to be used with the test class.
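For instance, a hypothetical invocation (the class name and URL below are placeholders, not a real scraper in the project):

python generate.py MyRecipeSite https://myrecipesite.example/recipes/tomato-soup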
You can find a more detailed guide here.
For Devs / Contribute
Assuming you have Python 3.8 or newer installed, navigate to the directory where you want this project to live and run these lines:
git clone git@github.com:hhursev/recipe-scrapers.git &&
cd recipe-scrapers &&
python -m venv .venv &&
source .venv/bin/activate &&
python -m pip install --upgrade pip &&
pip install -r requirements-dev.txt &&
pip install pre-commit &&
pre-commit install &&
python -m unittest
If you want to run a single unit test for a newly developed scraper:
python -m unittest -k <test_file_name>
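For example, assuming the new scraper's test file is named test_myrecipesite.py (a placeholder name):

python -m unittest -k test_myrecipesite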
FAQ
How do I know if a website has a Recipe Schema? Run this in a Python shell:
import requests
from recipe_scrapers import scrape_html
url = '<url of a recipe from the site>'
html = requests.get(url).content
scraper = scrape_html(html, org_url=url, wild_mode=True)
# if no error is raised, there's schema available:
scraper.title()
scraper.instructions() # etc.
Netiquette
If you’re using this library to collect large numbers of recipes from the web, please use the software responsibly and try to avoid creating high volumes of network traffic.
Python’s standard library provides a robots.txt parser that may be helpful to automatically follow common instructions specified by websites for web crawlers.
Another parser option, particularly if you find that many web requests from urllib.robotparser are blocked, is the robotexclusionrulesparser library.
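Here is a minimal sketch of the standard-library approach (the user agent string is an assumed placeholder; adjust it to identify your own crawler):

import urllib.robotparser
from urllib.parse import urlparse

url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
parts = urlparse(url)

# fetch and parse the site's robots.txt before requesting recipe pages
robots = urllib.robotparser.RobotFileParser()
robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
robots.read()

if robots.can_fetch("my-recipe-bot", url):
    ...  # safe to request the page; otherwise skip it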
Special thanks to:
All the contributors who helped improve the package. You are awesome!
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file recipe_scrapers-15.0.0rc2.tar.gz.
File metadata
- Download URL: recipe_scrapers-15.0.0rc2.tar.gz
- Upload date:
- Size: 91.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 8f1b3dee86c83862b9f780c1ddc715e6fdbafdeb1e7d63c2d77b1ef6eb454589
MD5 | e15ba150038be8d22cfb10ef50dd74c8
BLAKE2b-256 | 2c489476b75d0844b0611d47614ece7ac7f0ce280eaa5fe46792624f4ab7c405
File details
Details for the file recipe_scrapers-15.0.0rc2-py3-none-any.whl.
File metadata
- Download URL: recipe_scrapers-15.0.0rc2-py3-none-any.whl
- Upload date:
- Size: 204.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.12.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 92dcf58e34167bba97bdc328d95c7801a2a955d037dc22a29b2699f4fda918cb
MD5 | 739316054060497953fef99e03f09afe
BLAKE2b-256 | d5ce5c9a723c7a0431ed4012ed5d9295178e3535687ede6beec5530594e13229