
Auction crawler project

Description

Auction crawler is a Python package for crawling a website that lists forest auctions. The project uses Python 3.12 together with the requests and selenium libraries, and focuses on extracting data from that specific site.

Installation

Using a package manager

You can install the forest auction crawler as a package.

Using pip:

pip install auction_crawler

Or using 'poetry':

poetry add auction_crawler

Cloning the repository

Alternatively, you can clone this repository and install its dependencies using 'poetry':

git clone https://github.com/vytenisdam/auction_crawler
cd auction_crawler
poetry install

Usage

As a module

from auction_crawler.main import crawl_site

print(crawl_site('csv', scroll_time=40))
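
The arguments are not documented in this description; presumably 'csv' selects CSV output (handled by csv_write.py) and scroll_time is the number of seconds the Selenium driver keeps scrolling so that more listings load. A minimal command-line wrapper built on that assumption (the script and its default value are illustrative, not part of the package):

# run_crawl.py: illustrative wrapper, not part of the package.
# Assumes crawl_site('csv', scroll_time=...) runs the crawl and returns a
# printable result, exactly as in the usage example above.
import sys

from auction_crawler.main import crawl_site

if __name__ == "__main__":
    # Optionally override the scroll time from the command line,
    # falling back to the 40 seconds used in the example.
    scroll_time = int(sys.argv[1]) if len(sys.argv) > 1 else 40
    print(crawl_site('csv', scroll_time=scroll_time))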

Structure

The project is structured as follows:

  • auction_crawler/: Main package directory.
    • __init__.py: Package initialization file.
    • main.py: Main script for the auction crawler package.
    • csv_write.py: Script for writing crawled data to a CSV file.
    • selenium_crawl.py: Helper functions that drive the Selenium crawl (see the illustrative sketch after this list).
  • tests/: Tests directory.
    • __init__.py: Initialization file for tests.
    • tests_main.py: Test scripts for the package.
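
The package's own code is not shown here, but the split between selenium_crawl.py and csv_write.py suggests a scroll-then-extract flow followed by a CSV dump. A minimal sketch under that assumption (function names, the CSS selector, and the field names below are illustrative, not the package's actual API):

# Illustrative sketch only: scroll a listings page with Selenium,
# collect entries, and write them to CSV. Not the package's real code.
import csv
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

def scroll_page(driver, scroll_time):
    # Keep scrolling to the bottom so lazily loaded auction entries appear.
    end = time.time() + scroll_time
    while time.time() < end:
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(1)

def collect_rows(driver):
    # Hypothetical extraction: grab the text of each auction listing element.
    elements = driver.find_elements(By.CSS_SELECTOR, ".auction-item")
    return [{"listing": el.text} for el in elements]

def write_csv(rows, path="auctions.csv"):
    # Dump the collected dictionaries to a CSV file.
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys() if rows else ["listing"])
        writer.writeheader()
        writer.writerows(rows)

A crawl would then chain these: open the site with a webdriver, call scroll_page, pass the driver to collect_rows, and hand the result to write_csv.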

