Python package, scraping recipes from all over the internet
A simple web scraping tool for recipe sites.
pip install recipe-scrapers
then:
from recipe_scrapers import scrape_me
# give the URL as a string; it can be a URL from any site listed below
scraper = scrape_me('https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/')
# Q: What if the recipe site I want to extract information from is not listed below?
# A: You can give it a try with the wild_mode option! If a Recipe Schema is available, it will work just fine.
scraper = scrape_me('https://www.feastingathome.com/tomato-risotto/', wild_mode=True)
scraper.title()
scraper.total_time()
scraper.yields()
scraper.ingredients()
scraper.instructions()
scraper.image()
scraper.host()
scraper.links()
scraper.nutrients() # if available
Notes:
scraper.links() returns a list of dictionaries containing all of the <a> tag attributes. The attribute names are the dictionary keys.
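As a sketch of what that shape looks like, the result can be filtered like any list of dicts. The entries below are made up for illustration, not output from a real page:

```python
# Illustrative only: scraper.links() returns a list of dicts, one per <a> tag,
# whose keys are that tag's attribute names. These sample entries are invented.
links = [
    {"href": "/recipe/158968/", "class": "card-link"},
    {"href": "https://www.allrecipes.com/about/", "rel": "nofollow"},
    {"href": "#top"},
]

# Keep only the links that point at recipe pages, using the href key.
recipe_links = [link["href"] for link in links if "/recipe/" in link.get("href", "")]
print(recipe_links)  # ['/recipe/158968/']
```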
Scrapers available for:
Contribute
If you spot a design change (or anything else) that breaks the scraper for a given site - please file an issue as soon as possible.
If you are a programmer, PRs with fixes are warmly welcomed and acknowledged with a virtual beer.
If you want a scraper for a new site added
Open an Issue providing us the site name, as well as a recipe link from it.
You are a developer and want to code the scraper on your own:
If a Recipe Schema is available on the site, you can rely on the schema-based approach.
Otherwise, scrape the HTML directly.
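For the schema-based route, the core idea is that many sites embed a schema.org Recipe object as JSON-LD in a script tag. A minimal stdlib-only sketch of that extraction follows; the HTML fragment is a made-up example, and real pages need far more defensive parsing than this:

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect the contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld:
            self.blocks.append(data)

# Made-up page fragment with an embedded schema.org Recipe object.
html = """
<html><head>
<script type="application/ld+json">
{"@type": "Recipe", "name": "Tomato Risotto", "totalTime": "PT45M"}
</script>
</head><body></body></html>
"""

parser = JSONLDExtractor()
parser.feed(html)
recipe = json.loads(parser.blocks[0])
print(recipe["name"])       # Tomato Risotto
print(recipe["totalTime"])  # PT45M
```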
Generating a new scraper class:
python generate.py <ClassName> <URL>
ClassName: The name of the new scraper class.
URL: The URL of an example recipe from the target site. The content will be stored in test_data to be used with the test class.
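The generated class roughly follows the shape below. The base class here is a stand-in stub so the sketch is self-contained; the real AbstractScraper lives in recipe_scrapers, fetches the page, and exposes a schema helper:

```python
# Stand-in stubs for illustration only; the real AbstractScraper in
# recipe_scrapers wraps the fetched page and a schema.org helper.
class FakeSchema:
    def __init__(self, data):
        self.data = data
    def title(self):
        return self.data["name"]
    def total_time(self):
        return self.data["totalTime"]

class AbstractScraper:
    def __init__(self, schema_data):
        self.schema = FakeSchema(schema_data)

# The kind of class `python generate.py MySite <URL>` scaffolds for you:
class MySite(AbstractScraper):
    @classmethod
    def host(cls):
        return "mysite.example"  # hypothetical domain

    def title(self):
        return self.schema.title()

    def total_time(self):
        return self.schema.total_time()

scraper = MySite({"name": "Tomato Risotto", "totalTime": 45})
print(scraper.title())       # Tomato Risotto
print(scraper.total_time())  # 45
```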
For Devs / Contribute
Assuming you have Python >= 3.7 installed, navigate to the directory where you want this project to live and run these lines:
git clone git@github.com:hhursev/recipe-scrapers.git &&
cd recipe-scrapers &&
python3 -m venv .venv &&
source .venv/bin/activate &&
pip install -r requirements-dev.txt &&
pre-commit install &&
python run_tests.py
To build a release and upload it to PyPI:
python3 -m build
python3 -m twine upload --repository pypi dist/*
To run a single unittest for a newly developed scraper:
python -m coverage run -m unittest tests.test_myscraper
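A hypothetical tests/test_myscraper.py roughly follows the pattern below. The real tests instantiate the actual scraper class against the HTML stored under test_data and compare each field; here a tiny inline stub stands in for the scraper so the sketch runs on its own:

```python
import unittest

# Stand-in scraper for illustration; real tests load the saved page from
# test_data/ and construct the real scraper class against it.
class StubScraper:
    def title(self):
        return "Tomato Risotto"
    def yields(self):
        return "4 servings"

class TestMyScraper(unittest.TestCase):
    def setUp(self):
        self.harvester = StubScraper()

    def test_title(self):
        self.assertEqual("Tomato Risotto", self.harvester.title())

    def test_yields(self):
        self.assertEqual("4 servings", self.harvester.yields())
```

Run it with `python -m unittest tests.test_myscraper` (or via the coverage command above).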
FAQ
How do I know if a website has a Recipe Schema? Run this in a Python shell:
from recipe_scrapers import scrape_me
scraper = scrape_me('<url of a recipe from the site>', wild_mode=True)
# if no error is raised - there's schema available:
scraper.title()
scraper.instructions() # etc.
Special thanks to:
All the contributors who helped improve the package. You are awesome!
Source Distribution

Hashes for recipe-scrapers-ap-fork-13.31.1.tar.gz

Algorithm | Hash digest
---|---
SHA256 | c83138e0388894ba123b818cf9bb125f9b9f25d43b53ba211796c3f1bab4160a
MD5 | 65250e2fdf6e993a83d7c44fc677dc70
BLAKE2b-256 | b414464ef3b6d526e6f311a67fa8c7bb6fea3a3755c732aa80e95204a89eb890

Built Distribution

Hashes for recipe_scrapers_ap_fork-13.31.1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 3ae452b6a686698e4e09577b2b2940df78ae7d1a8a5cb8bcfa499ca4086c1282
MD5 | ad0cff188719029ec17b76afad3b4fa5
BLAKE2b-256 | a4ab5237c93618f4d0374b7a552aba63cac0c3dcf2ee0128fe3ea9d9c7f89b97