
This package is designed to scrape Bible data from JW.org for NLP/GenAI tasks.

Project description

JW Scraper

jwsoup is a simple Python package that scrapes Bible data from the JW.org website. The package provides functionality for scraping Bible verses and saving them in a structured format. It supports scraping data from one or multiple pages, handling paginated content, and storing the results in a Parquet file.

Features

  • Scrape Bible verses from individual or multiple pages.
  • Clean the scraped verse text to remove unwanted characters.
  • Store the scraped data in a Parquet file for further analysis.
  • Simple interface with reusable functions.
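The cleaning step removes footnote markers, non-breaking spaces, and stray whitespace from the raw verse text. As an illustrative sketch only (the actual rules applied inside jwsoup may differ), it could look like this:

```python
import re

def clean_verse(text: str) -> str:
    """Normalize a scraped verse string (illustrative, not jwsoup's exact logic)."""
    text = text.replace("\xa0", " ")          # non-breaking spaces from the page markup
    text = re.sub(r"[*+]", "", text)          # footnote / cross-reference markers
    return re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace

print(clean_verse("Au commencement*,\xa0 Dieu  créa"))
# → Au commencement, Dieu créa
```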

Installation

To install jwsoup, you can use pip from PyPI:

pip install jwsoup

Alternatively, if you want to install it locally from the source, clone the repository and run the following commands:

git clone https://github.com/sawadogosalif/jwsoup.git
cd jwsoup
pip install .

Usage

Scrape a Single Page

You can scrape a single page of Bible verses using the scrape_single_page function. This function returns a list of verses and the URL for the next page (if available).

from jwsoup.text import scrape_single_page
url = "https://www.jw.org/fr/biblioth%C3%A8que/bible/bible-d-etude/livres/Gen%C3%A8se/1/"
verses, next_url = scrape_single_page(url)

# Print the scraped verses
for verse in verses:
    print(f"{verse[0]}: {verse[1]}")

# Print the next URL
print(f"Next page URL: {next_url}")

Scrape Multiple Pages

To scrape multiple pages starting from a given URL, use the scrape_multi_page function. This function will follow pagination and save the scraped data in a Parquet file.

from jwsoup.text import scrape_multi_page

start_url = "https://www.jw.org/mos/d-s%E1%BA%BDn-yiisi/biible/nwt/books/S%C9%A9ngre/1/"
output_dir = "bible_data_moore.parquet"
res = scrape_multi_page(start_url, output_dir=output_dir, max_pages=5, page_sep="books")

Save Data to Parquet

The scraped data is stored in a Parquet file for efficient storage and querying. You can specify the output file and partition the data by page.

import pandas as pd
pd.read_parquet(output_dir).head()


License

This project is licensed under the MIT License - see the LICENSE file for details.

Author

Acknowledgments

  • Thanks to the requests, beautifulsoup4, pandas, loguru, and pyarrow libraries for making scraping and data handling easier.
  • Thanks to JW.org for providing an accessible and rich resource of Bible texts in multiple languages.

Changelog

[0.0.1] - 2024-11-23

Added

  • Initial release of jwsoup.
  • Supports scraping of text-based Bible verses from JW.org.
  • Extracts individual verses and saves them to Parquet files using pyarrow.
  • Includes basic error handling and logging with loguru.

Known Limitations

  • Only supports scraping textual data.
  • Does not handle multimedia content (audio/video).
  • Limited testing for edge cases (e.g., malformed HTML or network interruptions).
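Until the package handles network interruptions itself, callers can wrap page fetches in a small retry helper. This is an illustrative sketch, not part of jwsoup's API; `with_retries` and its parameters are hypothetical names:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(fetch: Callable[[], T], attempts: int = 3, delay: float = 1.0) -> T:
    """Call `fetch` up to `attempts` times, backing off between failures."""
    last_error = None
    for n in range(attempts):
        try:
            return fetch()
        except Exception as err:  # e.g. a requests.ConnectionError during scraping
            last_error = err
            if n < attempts - 1:
                time.sleep(delay * (2 ** n))  # exponential backoff: delay, 2*delay, ...
    raise last_error

# Usage sketch: retry a flaky page scrape
# verses, next_url = with_retries(lambda: scrape_single_page(url))
```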

Project details


Download files

Download the file for your platform.

Source Distribution

jwsoup-0.0.1.tar.gz (6.9 kB view details)

Uploaded Source

Built Distribution


jwsoup-0.0.1-py3-none-any.whl (5.7 kB view details)

Uploaded Python 3

File details

Details for the file jwsoup-0.0.1.tar.gz.

File metadata

  • Download URL: jwsoup-0.0.1.tar.gz
  • Upload date:
  • Size: 6.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for jwsoup-0.0.1.tar.gz

  • SHA256: c8c50cd381550d60c5af861c54d209d733984e66acb186ecee00d01279bc9959
  • MD5: eda3cf535ae50e944a385c01db4d2bfb
  • BLAKE2b-256: 13d91f26fc7be6c2a62a013b7212f94d62a803c1685e2f47139f6a480b6b4d8d


File details

Details for the file jwsoup-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: jwsoup-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 5.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for jwsoup-0.0.1-py3-none-any.whl

  • SHA256: 4f7a92ef611229c09d1f67dcf9986ef35423f8bbcfcad6018d8d9152efcfe3e7
  • MD5: b2874047c5d5acb91d160be061a1cae0
  • BLAKE2b-256: 5629ec3c95c43221e405ac2f0c107e6d3c2b1a163370d74f2c9cef8e18a85adc

