
# Extracting home information from zillow.com

## Project description

Building a web-crawler is easy and lets you take advantage of data mining software. This document will guide you through understanding the build process.

## Requirements

  1. BeautifulSoup4, needed for parsing; install it with pip install BeautifulSoup4.
  2. re, needed for pattern matching; it is part of the Python standard library, so no pip install is required.
  3. matplotlib, needed to plot the scatterplot and the boxplot.
  4. pandas, needed to load the CSV data into a dataframe.

2. Web-Crawler, using one of the following configurations:
   * macOS: use Web-Crawler for Mac, or see the installation instructions.
   * Linux: install Web-Crawler according to the [instructions] for your OS.
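To show how the first two requirements fit together, here is a minimal sketch of extracting a price from listing markup with BeautifulSoup4 and re. The HTML snippet and class names are assumptions for illustration; Zillow's real markup differs and changes over time.

```python
import re
from bs4 import BeautifulSoup

# Hypothetical snippet of a listing card; real Zillow markup differs.
html = """
<div class="list-card">
  <div class="list-card-price">$425,000</div>
  <div class="list-card-details">3 bds 2 ba 1,850 sqft</div>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
price_text = soup.find("div", class_="list-card-price").get_text()
# re strips the dollar sign and commas so the price can be used as a number.
price = int(re.sub(r"[^\d]", "", price_text))
print(price)  # 425000
```

The same pattern (locate an element, then normalize its text with a regular expression) applies to the other fields the crawler collects.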

## Overview

While it is possible to run the web-crawler with nothing more than a local Python installation, the project provides a build process that runs in a self-contained local environment. This simplifies initial setup and provides a consistent build and test environment.
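The analysis side mentioned in the requirements (loading the CSV into a pandas dataframe, then plotting a scatterplot and a boxplot with matplotlib) could be sketched as follows. The column names and the homes.csv file name are assumptions; in the sketch a small inline dataframe stands in for the real CSV.

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is needed
import matplotlib.pyplot as plt

# Hypothetical columns; the real CSV produced by the crawler may differ.
# In the project this would be: df = pd.read_csv("homes.csv")
df = pd.DataFrame({"sqft": [1850, 2100, 1400],
                   "price": [425000, 510000, 330000]})

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.scatter(df["sqft"], df["price"])  # scatterplot: price vs. size
ax2.boxplot(df["price"])              # boxplot: price distribution
fig.savefig("plots.png")
```

The Agg backend is used so the figure can be written to a file in a headless build environment.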

## Key scripts

The following scripts are found in the build/ directory. Note that all scripts must be run from the Web-Crawler root directory.

1. src/webcrawler/move_csv.sh
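The contents of move_csv.sh are not shown in this description; as a hypothetical sketch, a script with that name might collect the crawler's CSV output into a data/ directory (the data/ directory name is an assumption):

```shell
#!/bin/sh
# Hypothetical sketch: move generated CSV files into data/.
# Run from the Web-Crawler root directory, as noted above.
set -e
mkdir -p data
for f in *.csv; do
  [ -e "$f" ] || continue  # skip when no CSV files exist yet
  mv "$f" data/
done
```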



## Release history

This version: 1.0

## Download files


Files for Final_Project, version 1.0:

| Filename | Size | File type | Python version |
| --- | --- | --- | --- |
| Final_Project-1.0.tar.gz | 254.7 kB | Source | None |
