
DataScraper: Effortless Dataset Extraction


Dataset Scraper (Scrapset)

Scrapset is a Python module created specifically for scraping data from websites such as Kaggle and Data.gov. It simplifies the task of extracting dataset information such as titles, upvotes (for Kaggle), and recent views (for Data.gov).

By utilizing the Scrapset module, you can automate the retrieval of dataset details from these platforms. This can be beneficial for various purposes such as data analysis, research, or developing machine learning models. The module employs the Selenium library to interact with the websites and extract the desired data.

With Scrapset, you can quickly and easily scrape dataset information, empowering you to work with valuable data from Kaggle, Data.gov, and similar websites.
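
Installation

Scrapset is distributed on PyPI and can typically be installed with pip:

pip install Scrapset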

KaggleDataSet Class

The KaggleDataSet class enables scraping of dataset information from Kaggle.

Methods

web_driver_chrome(): Initializes and returns a Selenium Chrome WebDriver with customized options for scraping Kaggle datasets.

data_set_page(url, last_page, initial_page): Scrapes the titles, upvotes, and additional details of datasets from Kaggle. The method takes the url of the Kaggle datasets page, the last_page number to scrape up to, and the initial_page number to start scraping from. It returns a dictionary containing the scraped dataset information.
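
The exact options web_driver_chrome() configures are not documented here. As a rough sketch, a Chrome driver factory for headless scraping commonly looks like the following (the function name and every flag below are illustrative assumptions, not Scrapset's actual configuration):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def make_chrome_driver() -> webdriver.Chrome:
    # Hypothetical flags; Scrapset's web_driver_chrome may configure different options.
    options = Options()
    options.add_argument('--headless=new')           # run without opening a browser window
    options.add_argument('--no-sandbox')
    options.add_argument('--disable-dev-shm-usage')  # avoid /dev/shm exhaustion in containers
    options.add_argument('--window-size=1920,1080')
    return webdriver.Chrome(options=options)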

DataDotGov Class

The DataDotGov class facilitates scraping of dataset information from Data.gov.

Methods

web_driver_chrome(): Initializes and returns a Selenium Chrome WebDriver with customized options for scraping Data.gov datasets.

data_set_page(url, last_page, initial_page): Scrapes the titles, recent views, and authors of datasets from Data.gov. The method takes the url of the Data.gov datasets page, the last_page number to scrape up to, and the initial_page number to start scraping from. It returns a dictionary containing the scraped dataset information.

Example code to extract titles, recent views, and authors of datasets from Data.gov


import scrapset as m
import pandas as pd

# Scrape pages 5 through 10 of the Data.gov catalog and save the results to CSV
df = m.DataDotGov()
data = df.data_set_page('https://catalog.data.gov', last_page=10, initial_page=5)
datf = pd.DataFrame(data)
datf.to_csv('datagov.csv', index=False)

Example code to extract titles, upvotes, and usability index of datasets from Kaggle


import scrapset as m
import pandas as pd

# Scrape pages 5 through 10 of Kaggle's dataset listing and save the results to CSV
df = m.KaggleDataSet()
data = df.data_set_page('https://kaggle.com', last_page=10, initial_page=5)
datf = pd.DataFrame(data)
datf.to_csv('kaggle.csv', index=False)


Example code to extract job details from Indeed

The job details are returned as a dictionary.

The indeed_jobs method takes three arguments: the URL, the last page you want to scrape up to, and the job query you want to search for.

import scrapset as m

# Scrape up to page 40 of 'data scientist' listings on ie.indeed.com
df = m.indeed()
data = df.indeed_jobs('https://ie.indeed.com', 40, 'data scientist')

IMDb Class

The IMDb class enables scraping of comments from IMDb movie pages.

Methods

web_driver_chrome()

def web_driver_chrome(self) -> webdriver.Chrome:
    """
    Initializes and returns a Selenium Chrome WebDriver with customized options for scraping IMDb comments.

    Returns:
        webdriver.Chrome: The Chrome WebDriver object.
    """

comments(url: str) -> List[str]

def comments(self, url: str) -> List[str]:
    """
    Scrapes comments from an IMDb movie page.

    Args:
        url (str): The URL of the IMDb movie page.

    Returns:
        List[str]: A list containing the scraped comments.
    """

Example Code

Here's an example demonstrating how to use the IMDb class to scrape comments from an IMDb movie page:

import scrapset as m

df = m.imdb()
data = df.comments('https://www.imdb.com/title/tt0111161/reviews')

Please note that you should replace the URL 'https://www.imdb.com/title/tt0111161/reviews' with the IMDb movie page URL you want to scrape comments from.
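
Following the CSV pattern used in the dataset examples, the scraped comments can be written out with pandas (the 'comment' column name below is just illustrative):

import pandas as pd

pd.DataFrame({'comment': data}).to_csv('imdb_comments.csv', index=False)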

VesselFinder Class

The VesselFinder class facilitates scraping vessel details and locations.

Methods

vessel_details(url: str) -> List[str]

def vessel_details(self, url: str) -> List[str]:
    """
    Scrapes vessel details from VesselFinder.

    Args:
        url (str): The URL of the VesselFinder vessel page.

    Returns:
        List[str]: A list containing the scraped vessel details.
    """

vessel_location(url: str) -> List[str]

def vessel_location(self, url: str) -> List[str]:
    """
    Scrapes vessel locations from VesselFinder.

    Args:
        url (str): The URL of the VesselFinder port page.

    Returns:
        List[str]: A list containing the scraped vessel locations.
    """

Example Code

import scrapset as m

df = m.VesselFinder()

# Scrape vessel details
vessel_details = df.vessel_details('https://www.vesselfinder.com/vessels')

# Scrape vessel locations
vessel_location = df.vessel_location('https://www.vesselfinder.com/ports')

Please replace the URLs 'https://www.vesselfinder.com/vessels' and 'https://www.vesselfinder.com/ports' with the specific VesselFinder pages you want to scrape vessel details and locations from.
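
Putting It All Together

The following example combines all of the scrapers above in a single script: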

import scrapset as m
import pandas as pd

# Scrape Kaggle dataset information
kaggle_df = m.KaggleDataSet()
kaggle_data = kaggle_df.data_set_page('https://kaggle.com', last_page=10, initial_page=5)
kaggle_datf = pd.DataFrame(kaggle_data)
kaggle_datf.to_csv('kaggle.csv', index=False)

# Scrape Data.gov dataset information
datagov_df = m.DataDotGov()
datagov_data = datagov_df.data_set_page('https://catalog.data.gov', last_page=10, initial_page=5)
datagov_datf = pd.DataFrame(datagov_data)
datagov_datf.to_csv('datagov.csv', index=False)

# Scrape job details from Indeed
indeed_df = m.indeed()
indeed_data = indeed_df.indeed_jobs('https://ie.indeed.com', 40, 'data scientist')
indeed_datf = pd.DataFrame(indeed_data)
indeed_datf.to_csv('indeed_jobs.csv', index=False)

# Scrape comments from IMDb movie page
imdb_df = m.imdb()
imdb_data = imdb_df.comments('https://www.imdb.com/title/tt0111161/reviews')

# Scrape vessel details and locations from VesselFinder
vesselfinder_df = m.VesselFinder()
vessel_details = vesselfinder_df.vessel_details('https://www.vesselfinder.com/vessels')
vessel_location = vesselfinder_df.vessel_location('https://www.vesselfinder.com/ports')

Angel Scraper

Introduction

The Angel class in the scrapset module is designed to scrape data from Google Maps. It provides a method for scrolling down the map and extracting information about companies and their phone numbers.

Methods

1. scroll_using_mouse(duration=10, scroll_amount=1)

This method simulates scrolling down on the webpage using the mouse wheel. It continues scrolling for the specified duration with a specified scroll amount.

  • Parameters:
    • duration (int): The duration (in seconds) for which the scrolling action will continue.
    • scroll_amount (int): The number of "clicks" of the scroll wheel to simulate. A positive value scrolls down, and a negative value scrolls up.

2. Map(query)

This method initiates a search on Google Maps based on the provided query. It then utilizes the scroll_using_mouse method to scroll down the map and extracts information about companies and their phone numbers.

  • Parameters:
    • query (str): The search query for Google Maps.
  • Note on Scrolling:
    • The scrolling action performed by scroll_using_mouse will only work correctly when the mouse cursor is positioned over the map cards on the webpage.
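
Scrapset's actual scrolling implementation is not shown here; the sketch below illustrates one way such a timed wheel-scroll loop could be written, assuming OS-level mouse events via pyautogui (the library choice, loop interval, and sign handling are all assumptions):

import time

import pyautogui  # assumption: OS-level wheel events, consistent with the cursor-position note above

def scroll_using_mouse(duration=10, scroll_amount=1):
    # Sketch: scroll wherever the cursor currently sits for `duration` seconds.
    end_time = time.time() + duration
    while time.time() < end_time:
        # pyautogui treats positive values as scrolling up, so negate the amount
        # to make a positive scroll_amount mean "scroll down" as documented above.
        pyautogui.scroll(-scroll_amount)
        time.sleep(0.1)  # short pause between wheel "clicks"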

Example Usage

from scrapset import Angel

# Create an instance of the Angel class
angel_instance = Angel()

# Perform a Google Maps search for "example query"
result = angel_instance.Map("example query")

# Print the result
print(result)
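
The exact structure of result is not documented here; based on the description of Map above, expect it to contain the company names and phone numbers collected from the map cards.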



Note: Running Scrapset in Google Colab

Run the following commands in a cell to install Chromium, its driver, and Selenium:

!apt-get update
!apt-get install chromium chromium-driver
!pip install selenium



%%shell
# Add debian buster
cat > /etc/apt/sources.list.d/debian.list <<'EOF'
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster.gpg] http://deb.debian.org/debian buster main
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-buster-updates.gpg] http://deb.debian.org/debian buster-updates main
deb [arch=amd64 signed-by=/usr/share/keyrings/debian-security-buster.gpg] http://deb.debian.org/debian-security buster/updates main
EOF

# Add keys
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys DCC9EFBF77E11517
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 648ACFD622F3D138
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 112695A0E562B32A

apt-key export 77E11517 | gpg --dearmour -o /usr/share/keyrings/debian-buster.gpg
apt-key export 22F3D138 | gpg --dearmour -o /usr/share/keyrings/debian-buster-updates.gpg
apt-key export E562B32A | gpg --dearmour -o /usr/share/keyrings/debian-security-buster.gpg

# Prefer debian repo for chromium* packages only
# Note the double blank lines between entries
cat > /etc/apt/preferences.d/chromium.pref << 'EOF'
Package: *
Pin: release a=eoan
Pin-Priority: 500

Package: *
Pin: origin "deb.debian.org"
Pin-Priority: 300

Package: chromium*
Pin: origin "deb.debian.org"
Pin-Priority: 700
EOF
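
Once Chromium and Selenium are installed, a driver can typically be constructed in Colab like this (a sketch for verifying the setup; Scrapset's own web_driver_chrome methods are expected to handle driver creation internally):

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')            # Colab has no display
options.add_argument('--no-sandbox')          # required inside the Colab container
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=options)
driver.get('https://catalog.data.gov')
print(driver.title)  # quick smoke test that the browser works
driver.quit()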
