
A Python package for scraping recipes from all over the internet.

Project description


A simple web scraping tool for recipe sites.

pip install recipe-scrapers

then:

from recipe_scrapers import scrape_me

scraper = scrape_me('https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/')

# Q: What if the recipe site I want to extract information from is not listed below?
# A: Give it a try with the wild_mode option! If a Recipe Schema is available, it will work just fine.
scraper = scrape_me('https://www.feastingathome.com/tomato-risotto/', wild_mode=True)

scraper.host()
scraper.title()
scraper.total_time()
scraper.image()
scraper.ingredients()
scraper.ingredient_groups()
scraper.instructions()
scraper.instructions_list()
scraper.yields()
scraper.to_json()
scraper.links()
scraper.nutrients()  # not always available
scraper.canonical_url()  # not always available
scraper.equipment()  # not always available
scraper.cooking_method()  # not always available
scraper.keywords()  # not always available
scraper.dietary_restrictions() # not always available
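Methods marked "not always available" can raise when a site omits that data. A minimal defensive sketch (the `safe_call` helper below is hypothetical, not part of the library):

```python
def safe_call(getter, fallback=None):
    """Call a zero-argument scraper method; return `fallback` if it raises."""
    try:
        return getter()
    except Exception:
        return fallback

# Usage with a scraper instance (network access assumed):
# nutrients = safe_call(scraper.nutrients, fallback={})
# keywords = safe_call(scraper.keywords, fallback=[])
```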

You also have the option to scrape HTML content directly:

import requests
from recipe_scrapers import scrape_html

url = "https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/"
html = requests.get(url).content

scraper = scrape_html(html=html, org_url=url)

scraper.title()
scraper.total_time()
# etc...

Notes:

  • scraper.links() returns a list of dictionaries containing all of the <a> tag attributes. The attribute names are the dictionary keys.
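Since each dictionary carries the raw `<a>` attributes, the result can be post-processed with plain Python. The `recipe_hrefs` helper and the sample data below are hypothetical illustrations of that output shape:

```python
def recipe_hrefs(links, keyword="recipe"):
    """Keep only href values that contain `keyword`."""
    return [a["href"] for a in links if keyword in a.get("href", "")]

# Sample shaped like scraper.links() output:
sample = [
    {"href": "https://example.com/recipe/123", "class": "card"},
    {"href": "https://example.com/about"},
]
print(recipe_hrefs(sample))  # ['https://example.com/recipe/123']
```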

Some Python HTTP clients that you can use to retrieve HTML include requests and httpx. Please refer to their documentation to find out what options (timeout configuration, proxy support, etc) are available.
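One way to apply those options consistently is to centralise them; a sketch under assumptions (the helper name, User-Agent string, and timeout value are all illustrative, not library API):

```python
def request_settings(timeout=10):
    """Keyword arguments for requests.get / httpx.get (illustrative values)."""
    return {
        "headers": {"User-Agent": "my-recipe-collector/0.1 (you@example.com)"},
        "timeout": timeout,  # seconds; avoids hanging on a slow site
    }

# Usage (network access and the requests library assumed):
# response = requests.get(url, **request_settings())
# response.raise_for_status()
# scraper = scrape_html(html=response.content, org_url=url)
```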

Scrapers available for:

(*) offline saved files only

Contribute

If you spot a design change (or anything else) that stops the scraper from working for a given site, please open an issue as soon as possible.

If you are a programmer, PRs with fixes are warmly welcomed and acknowledged with a virtual beer. You can find documentation on how to develop scrapers here.

If you want a scraper for a new site added

  • Open an Issue providing us the site name, as well as a recipe link from it.

  • If you are a developer and want to code the scraper on your own:

    • If a Recipe Schema is available on the site - you can go like this.

    • Otherwise, scrape the HTML - like this

    • Generating a new scraper class:

      python generate.py <ClassName> <URL>
      • ClassName: The name of the new scraper class.

      • URL: The URL of an example recipe from the target site. The content will be stored in test_data to be used with the test class.

      You can find a more detailed guide here.

For Devs / Contribute

Assuming you have Python >= 3.8 installed, navigate to the directory where you want this project to live and run:

git clone git@github.com:hhursev/recipe-scrapers.git &&
cd recipe-scrapers &&
python -m venv .venv &&
source .venv/bin/activate &&
python -m pip install --upgrade pip &&
pip install -r requirements-dev.txt &&
pip install pre-commit &&
pre-commit install &&
python -m unittest

To run a single unit test for a newly developed scraper:

python -m unittest -k <test_file_name>

FAQ

  • How do I know if a website has a Recipe Schema? Run this in a Python shell:

from recipe_scrapers import scrape_me
scraper = scrape_me('<url of a recipe from the site>', wild_mode=True)
# if no error is raised - there's schema available:
scraper.title()
scraper.instructions()  # etc.
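The same check can be wrapped as a reusable predicate. The `has_schema` helper below is hypothetical; it simply treats any raised exception as "no schema available":

```python
def has_schema(probe):
    """Return True if calling `probe` (e.g. scraper.title) succeeds."""
    try:
        probe()
        return True
    except Exception:
        return False

# Usage (network access assumed):
# scraper = scrape_me('<url of a recipe from the site>', wild_mode=True)
# print(has_schema(scraper.title))
```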

Netiquette

If you’re using this library to collect large numbers of recipes from the web, please use the software responsibly and try to avoid creating high volumes of network traffic.
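One low-effort way to keep traffic down is a fixed delay between requests. A minimal sketch (the function name and delay are assumptions; `fetch` stands in for whatever download function you use):

```python
import time

def fetch_politely(urls, fetch, delay=2.0):
    """Call `fetch` on each URL, sleeping `delay` seconds between requests."""
    results = []
    for url in urls:
        results.append(fetch(url))
        time.sleep(delay)  # be kind to the server
    return results

# Usage (illustrative):
# pages = fetch_politely(url_list, lambda u: requests.get(u).content)
```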

Python’s standard library provides a robots.txt parser that may be helpful to automatically follow common instructions specified by websites for web crawlers.

Another parser option – particularly if you find that many web requests from urllib.robotparser are blocked – is the robotexclusionrulesparser library.
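A sketch using the standard library's parser; here a sample robots.txt is parsed inline for illustration, whereas in practice you would call `set_url(...)` and `read()` against the real site:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# In practice: rp.set_url("https://example.com/robots.txt"); rp.read()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("my-recipe-collector", "https://example.com/recipe/1"))   # True
print(rp.can_fetch("my-recipe-collector", "https://example.com/private/x"))  # False
```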

Special thanks to:

All the contributors who helped improve the package. You are awesome!


Extra:

Want to gather recipe data?
Have an idea you want to implement?
Check out our “Share a project” wall - it may save you time and spark ideas!
