
Python package, scraping recipes from all over the internet

Project description


A simple web scraping tool for recipe sites.

pip install recipe-scrapers

then:

from recipe_scrapers import scrape_me

# give the URL as a string; it can be a URL from any site listed below
scraper = scrape_me('https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/')

# Q: What if the recipe site I want to extract information from is not listed below?
# A: You can give it a try with the wild_mode option! If a Recipe Schema is available, it will work just fine.
scraper = scrape_me('https://www.feastingathome.com/tomato-risotto/', wild_mode=True)

scraper.title()
scraper.total_time()
scraper.yields()
scraper.ingredients()
scraper.instructions()
scraper.image()
scraper.host()
scraper.links()
scraper.nutrients()  # if available
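
For a rough idea of the shapes these calls return, here is a minimal sketch with illustrative values (the exact content depends on what the site publishes):

scraper.title()        # 'Spinach and Feta Turkey Burgers' (str)
scraper.total_time()   # 35 (int, minutes)
scraper.yields()       # '4 servings' (str)
scraper.ingredients()  # ['500 g ground turkey', '1 clove garlic, minced', ...] (list of str)
scraper.instructions() # single newline-joined str
scraper.nutrients()    # {'calories': '230 kcal', ...} (dict, if available)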

Notes:

  • scraper.links() returns a list of dictionaries containing all of the <a> tag attributes. The attribute names are the dictionary keys.
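
    As an illustration, the returned structure looks roughly like this (hypothetical output; the actual keys mirror whatever attributes the page's <a> tags carry):

    scraper.links()
    # [
    #     {'href': 'https://www.allrecipes.com/recipes/93/seafood/', 'class': ['category-link']},
    #     {'href': '/account/sign-in', 'rel': ['nofollow']},
    #     ...
    # ]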

Scrapers available for:

Contribute

If you spot a design change (or something else) that stops a scraper from working for a given site - please file an issue as soon as possible.

If you are a programmer, PRs with fixes are warmly welcomed and acknowledged with a virtual beer.

If you want a scraper for a new site added:

  • Open an Issue providing the site name and a link to a recipe from it.

  • If you are a developer and want to code the scraper yourself:

    • If a Recipe Schema is available on the site, you can rely on it directly (see the sketch after this list).

    • Otherwise, scrape the HTML (also covered in the sketch below).

    • Generating a new scraper class:

      python generate.py <ClassName> <URL>
      • ClassName: The name of the new scraper class.

      • URL: The URL of an example recipe from the target site. The content will be stored in test_data to be used with the test class.
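
For orientation, here is a minimal sketch of what such a scraper class can look like (the class name MySite, the domain, and the CSS class used in instructions() are hypothetical; self.schema and self.soup are the helpers the existing scrapers in the repository build on):

from recipe_scrapers._abstract import AbstractScraper


class MySite(AbstractScraper):
    @classmethod
    def host(cls):
        return "mysite.example"  # hypothetical domain

    def title(self):
        # Schema-based approach: delegate to the parsed Recipe Schema
        return self.schema.title()

    def total_time(self):
        return self.schema.total_time()

    def yields(self):
        return self.schema.yields()

    def ingredients(self):
        return self.schema.ingredients()

    def instructions(self):
        # HTML-based alternative: query the BeautifulSoup tree directly
        # ("recipe-instructions" is a made-up class name for illustration)
        return self.soup.find("div", {"class": "recipe-instructions"}).get_text()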

For Devs / Contribute

Assuming you have Python >= 3.7 installed, navigate to the directory where you want this project to live and run these commands:

git clone git@github.com:hhursev/recipe-scrapers.git &&
cd recipe-scrapers &&
python3 -m venv .venv &&
source .venv/bin/activate &&
pip install -r requirements-dev.txt &&
pre-commit install &&
python run_tests.py

In case you want to run a single unit test for a newly developed scraper:

python -m coverage run -m unittest tests.test_myscraper

FAQ

  • How do I know if a website has a Recipe Schema? Run in python shell:

from recipe_scrapers import scrape_me
scraper = scrape_me('<url of a recipe from the site>', wild_mode=True)
# if no error is raised, a Recipe Schema is available:
scraper.title()
scraper.instructions()  # etc.

Special thanks to:

All the contributors who helped improve the package. You are awesome!

Project details



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

recipe_scrapers-13.33.0.tar.gz (61.8 kB)

Uploaded Source

Built Distribution

recipe_scrapers-13.33.0-py3-none-any.whl (393.8 kB)

Uploaded Python 3

File details

Details for the file recipe_scrapers-13.33.0.tar.gz.

File metadata

  • Download URL: recipe_scrapers-13.33.0.tar.gz
  • Upload date:
  • Size: 61.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.0 CPython/3.9.13

File hashes

Hashes for recipe_scrapers-13.33.0.tar.gz

  • SHA256: be1742077bca55638392446b55bf7d2e80a9f9a9625285dc30efd02c763461ec
  • MD5: 1b9405fb657752895658461409fe02bf
  • BLAKE2b-256: 7f7ec9ef0b76fd490d6d5a2ab98354b59ba36020378c98665f5e6a9be82483a6


File details

Details for the file recipe_scrapers-13.33.0-py3-none-any.whl.

File metadata

File hashes

Hashes for recipe_scrapers-13.33.0-py3-none-any.whl

  • SHA256: c350ee2407167ec62327a1db9e8864f49a51cae06907689f9095885444549293
  • MD5: 4c3bfa25e4c3dc201e33b251a668b893
  • BLAKE2b-256: 133a7f4dd164863c8bcc714945f460fb711f2c13b6db5158b855b66ef23bf62f

