
This package is designed to scrape Bible data from JW.org for NLP and generative-AI tasks.


JW Scraper

jwsoup is a simple Python package that scrapes Bible data from the JW.org website. The package provides functionality for scraping Bible verses and saving them in a structured format. It supports scraping data from one or multiple pages, handling paginated content, and storing the results in a Parquet file.

Features

  • Scrape Bible verses from individual or multiple pages.
  • Clean the scraped verse text to remove unwanted characters.
  • Store the scraped data in a Parquet file for further analysis.
  • Simple interface with reusable functions.
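
The cleaning step could look something like the following sketch. Note that `clean_verse` is a hypothetical name used here for illustration; jwsoup's actual cleaning logic may differ:

```python
import re

def clean_verse(text: str) -> str:
    """Illustrative cleaning: drop footnote/cross-reference markers
    and collapse whitespace (assumed behavior, not jwsoup's exact code)."""
    text = re.sub(r"[+*]", "", text)   # remove footnote (*) and cross-reference (+) symbols
    text = re.sub(r"\s+", " ", text)   # collapse runs of whitespace
    return text.strip()

print(clean_verse("Au commencement,  Dieu créa* le ciel et la terre.+"))
# → Au commencement, Dieu créa le ciel et la terre.
```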

Installation

To install jwsoup, you can use pip from PyPI:

pip install jwsoup

Alternatively, if you want to install it locally from the source, clone the repository and run the following commands:

git clone https://github.com/sawadogosalif/jwsoup.git
cd jwsoup
pip install .

Usage

Scrape a Single Page

You can scrape a single page of Bible verses using the scrape_single_page function. This function returns a list of verses and the URL for the next page (if available).

from jwsoup.text import scrape_single_page

url = "https://www.jw.org/fr/biblioth%C3%A8que/bible/bible-d-etude/livres/Gen%C3%A8se/1/"
verses, next_url = scrape_single_page(url)

# Print the scraped verses
for verse in verses:
    print(f"{verse[0]}: {verse[1]}")

# Print the next URL
print(f"Next page URL: {next_url}")
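
Judging by the loop above, each verse appears to be a (reference, text) pair. Assuming that shape, you can load the results into a pandas DataFrame for inspection (a sketch with sample data, not part of the jwsoup API):

```python
import pandas as pd

# Sample data standing in for the output of scrape_single_page;
# the (reference, text) tuple shape is assumed from the loop above.
verses = [
    ("Genèse 1:1", "Au commencement, Dieu créa le ciel et la terre."),
    ("Genèse 1:2", "Or la terre était informe et déserte."),
]

df = pd.DataFrame(verses, columns=["reference", "text"])
print(df.head())
```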

Scrape Multiple Pages

To scrape multiple pages starting from a given URL, use the scrape_multi_page function. This function will follow pagination and save the scraped data in a Parquet file.

from jwsoup.text import scrape_multi_page

start_url = "https://www.jw.org/mos/d-s%E1%BA%BDn-yiisi/biible/nwt/books/S%C9%A9ngre/1/"
output_dir = "bible_data_moore.parquet"
res = scrape_multi_page(start_url, output_dir=output_dir, max_pages=5, page_sep="books")

Save Data to Parquet

The scraped data is stored in a Parquet file for efficient storage and querying. You can specify the output file and partition the data by page.

import pandas as pd
pd.read_parquet(output_dir).head()


License

This project is licensed under the MIT License - see the LICENSE file for details.

Author

Acknowledgments

  • Thanks to the requests, beautifulsoup4, pandas, loguru, and pyarrow libraries for making scraping and data handling easier.
  • Thanks to JW.org for providing an accessible and rich resource of Bible texts in multiple languages.

Changelog

[0.0.1] - 2024-11-23

Added

  • Initial release of jwsoup.
  • Supports scraping of text-based Bible verses from JW.org.
  • Extracts individual verses and saves them to parquet files using pyarrow.
  • Includes basic error handling and logging with loguru.

Known Limitations

  • Only supports scraping textual data.
  • Does not handle multimedia content (audio/video).
  • Limited testing for edge cases (e.g., malformed HTML or network interruptions).

[0.0.2] - 2024-11-23

Added

  • Corrected a typo in the package description.


