
This is a utility library from digipodium.


A Python library that can be used to extract data from text files, PDFs, and DOC(X) files, as well as save data into these files. It can also be used to scrape and extract data from webpages.

Installation Requirements and Instructions

Python 3.8 or above must be installed. Then open your terminal. For Windows users:

pip install dputils

For Mac/Linux users:

pip3 install dputils
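
To check that the installation worked, the imports used later in this README should succeed:

      # Quick sanity check after installation
      from dputils.files import get_data, save_data
      from dputils.scrape import Scraper, Tag
      print("dputils imported successfully")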

Files Module

Functions from dputils.files: the files module currently has two functions:

  1. get_data:

    • To import, use the statement:

      from dputils.files import get_data
      
    • Obtains data from a file whose path is given as an argument (supports text files, binary files, PDF, and DOC(X) for now; more formats coming!)

    • sample call:

      content = get_data(r"sample.docx")
      print(content)
      
    • Returns a string or binary data, depending on the output arg (see the sketch after this list)

    • Images will not be extracted

  2. save_data:

    • save_data can be used to write and save data into a file with a valid extension.
    • sample call:
      from dputils.files import save_data
      
      pdfContent = save_data("sample.pdf", "Sample text to insert")
      print(pdfContent)
      
    • Returns True if the file is successfully accessed and modified; otherwise, False.
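
Putting the two functions together, here is a minimal sketch. The exact value accepted by the output arg for binary data is an assumption, so check the documentation linked below for the supported values:

      from dputils.files import get_data, save_data

      # Read a document as text (the default behaviour described above)
      text = get_data(r"sample.docx")

      # Assumption: request raw bytes via the output arg; the value "b" is
      # illustrative and not confirmed by this README
      raw = get_data(r"sample.pdf", output="b")

      # save_data returns True on success, False otherwise
      if save_data("copy.docx", text):
          print("Saved successfully")
      else:
          print("Could not save the file")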

Scrape Module

Data extraction from a page

Here's a basic tutorial to help you get started with the scraper module.

  1. Import the required classes and functions:

      from dputils.scrape import Scraper, Tag

  2. Initialize the Scraper class with the URL of the webpage you want to scrape:

      url = "https://www.example.com"
      scraper = Scraper(url)

  3. Define the tags you want to scrape using the Tag class:

      title_tag = Tag(name='h1', cls='title', output='text')
      price_tag = Tag(name='span', cls='price', output='text')

  4. Extract data from the page:

      data = scraper.get_data_from_page(title=title_tag, price=price_tag)
      print(data)
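
Based on the call above, data is expected to behave like a mapping keyed by the keyword-argument names (an assumption, not confirmed by this README), so individual fields could be read as in this sketch:

      # Assumption: get_data_from_page returns a dict keyed by the kwargs above
      print(data.get("title"))
      print(data.get("price"))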

Extracting list of items from a page

For more advanced usage, such as extracting repeated data from lists of items on a page, you can use the following approach:

  1. Initialize the Scraper class:

      url = "https://www.example.com/products"
      scraper = Scraper(url)
  2. Define the tags for the target section and the items within that section. For repeated data extraction, define a target Tag and an item Tag and pass them to the get_repeating_data_from_page() method:
    • target - the Tag() for the area of the page containing the list of items.
    • items - the Tag() for the repeated items within the target section, such as a product card in a product grid/list.

      target_tag = Tag(name='div', cls='product-list')
      item_tag = Tag(name='div', cls='product-item')
      title_tag = Tag(name='h2', cls='product-title', output='text')
      price_tag = Tag(name='span', cls='product-price', output='text')
      link_tag = Tag(name='a', cls='product-link', output='href')
  3. Extract repeated data from the page:

      products = scraper.get_repeating_data_from_page(
          target=target_tag,
          items=item_tag,
          title=title_tag,
          price=price_tag,
          link=link_tag
      )
      for product in products:
          print(product)
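
Once the items are collected, they can be written out with Python's standard library. This is a minimal sketch, assuming each product is a dict containing the title, price, and link keys used in the call above:

      import csv

      # Assumption: each product is a dict with the keys used in the call above
      fieldnames = ["title", "price", "link"]
      with open("products.csv", "w", newline="", encoding="utf-8") as f:
          writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
          writer.writeheader()
          for product in products:
              writer.writerow(product)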

These functions can be used on Python versions 3.8 or greater.

References for more help: https://digipodium.github.io/dputils/

Contribution

If you want to contribute to this project and make it better, your help is very welcome.

  • Fork the project
  • Create your feature branch (git checkout -b feature/fooBar)
  • Commit your changes (git commit -am 'Add some fooBar')
  • Push to the branch (git push origin feature/fooBar)
  • Create a new Pull Request
  • Wait for your PR to be reviewed and merged
  • Star the project if you've found it useful
  • Share the project with your friends
  • Create an issue if you find a bug or want to request a new feature
  • Improve the project by refactoring the code
  • Review the PRs of other contributors
  • Suggest new features
  • Suggest new technologies to be used

Thank you for using dputils!
