
Python package, scraping recipes from all over the internet

Project description


A simple web scraping tool for recipe sites.

pip install recipe-scrapers

then:

from recipe_scrapers import scrape_me

# give the url as a string; it can be a url from any site listed below
scraper = scrape_me('https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/')

# Q: What if the recipe site I want to extract information from is not listed below?
# A: You can give it a try with the wild_mode option! If there is a Recipe Schema available, it will work just fine.
scraper = scrape_me('https://www.feastingathome.com/tomato-risotto/', wild_mode=True)

scraper.title()         # recipe title
scraper.total_time()    # total time needed for the recipe
scraper.yields()        # yield / number of servings
scraper.ingredients()   # list of ingredients
scraper.instructions()  # preparation instructions
scraper.image()         # recipe image url
scraper.host()          # site the recipe was scraped from
scraper.links()         # attributes of the <a> tags on the page (see note below)

Note: scraper.links() returns a list of dictionaries containing all of the <a> tag attributes. The attribute names are the dictionary keys.
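
For example, to pull out just the href of every link on the page, something like the minimal sketch below works (it reuses the scraper created above; not every anchor is guaranteed to carry an href, hence the .get()):

from recipe_scrapers import scrape_me

scraper = scrape_me('https://www.allrecipes.com/recipe/158968/spinach-and-feta-turkey-burgers/')

# each entry is a dict of one <a> tag's attributes, e.g. {'href': '...', 'class': [...]}
for link in scraper.links():
    print(link.get('href'))  # .get() because some anchors may lack an href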

Scrapers available for:

Contribute

Part of the reason I want this open sourced is that when a site makes a design change, the scraper for it needs to be modified quickly.

If you spot a design change (or something else) that stops a scraper from working for a given site - please file an issue as soon as possible.

If you are a programmer, PRs with fixes are warmly welcomed and acknowledged with a virtual beer.

If you want a scraper for a new site added

For Devs / Contribute

Assuming you have python3 installed, navigate to the directory where you want this project to live and run these lines:

git clone git@github.com:hhursev/recipe-scrapers.git &&
cd recipe-scrapers &&
python3 -m venv .venv &&
source .venv/bin/activate &&
pip install -r requirements.txt &&
pre-commit install &&
python -m coverage run -m unittest &&
python -m coverage report

In case you want to run a single unittest for a newly developed scraper

python -m coverage run -m unittest tests.test_myscraper
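
If it helps to see the shape of such a scraper, here is a very rough sketch. The module name, class name, and schema helpers below are illustrative assumptions, not the package's exact interface; mirror an existing scraper module in the repository for the precise layout your version expects.

# recipe_scrapers/myscraper.py (hypothetical module name)
from ._abstract import AbstractScraper


class MyScraper(AbstractScraper):
    @classmethod
    def host(cls):
        # domain this scraper handles (hypothetical)
        return 'myscraper.example.com'

    def title(self):
        return self.schema.title()

    def total_time(self):
        return self.schema.total_time()

    def yields(self):
        return self.schema.yields()

    def ingredients(self):
        return self.schema.ingredients()

    def instructions(self):
        return self.schema.instructions()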

FAQ

  • How do I know if a website has a Recipe Schema?

    • Go to a recipe on the website you want to be supported.

    • Hit Ctrl+U on your keyboard to view the page source.

    • Search (Ctrl+F) for application/ld+json. It should be inside a <script> tag.

    • If you find it, it’s highly likely the website supports Recipe Schema. Otherwise, you’ll need to parse the HTML (the sketch after this list is a programmatic version of the same check).
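
If you prefer to check programmatically, something along these lines does the same job (requests and beautifulsoup4 are assumptions here, they are not required by the manual steps above):

import requests
from bs4 import BeautifulSoup

# fetch the recipe page and look for ld+json script tags,
# the same thing the Ctrl+U / Ctrl+F check above does
html = requests.get('https://www.example.com/some-recipe/').text
soup = BeautifulSoup(html, 'html.parser')

ld_json_tags = soup.find_all('script', type='application/ld+json')
print('Recipe Schema likely supported' if ld_json_tags else 'no ld+json found; HTML parsing needed')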

Special thanks to:

All the contributors who helped improve the package. You are awesome!

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

recipe_scrapers-10.0.0.tar.gz (32.6 kB)


File details

Details for the file recipe_scrapers-10.0.0.tar.gz.

File metadata

  • Download URL: recipe_scrapers-10.0.0.tar.gz
  • Upload date:
  • Size: 32.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/50.3.0 requests-toolbelt/0.9.1 tqdm/4.50.0 CPython/3.6.7

File hashes

Hashes for recipe_scrapers-10.0.0.tar.gz

  • SHA256: 06f12aaf705470951bff3c57c43bfd03182c42caa72b0a2c1f61f00ecd01fe37
  • MD5: b756aed87c8dfd2f56c8be1914a077da
  • BLAKE2b-256: ccabdd3fe0e6f7f003aea08749be325b4b45f3808bb18bbb32bd3f29023fa2a0

See more details on using hashes here.
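
As an illustration, a quick local check of the SHA256 value above against a downloaded copy of the sdist (the filename is the source distribution listed above):

import hashlib

expected = '06f12aaf705470951bff3c57c43bfd03182c42caa72b0a2c1f61f00ecd01fe37'

# compare the local file's SHA256 against the published value
with open('recipe_scrapers-10.0.0.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print('match' if digest == expected else 'mismatch')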
