
Web Crawler, HTML Parser, and Data Visualization


Note: Make sure you have installed libxslt1 and libxml2 so that lxml works properly on Linux systems (Ubuntu at least).


Documentation is under construction right now!

If you have any questions, you can email me at gitwebby@gmail.com

Now has a pylint rating of 9.62/10.


Webby quickly brings web crawling and XML/HTML parsing to your fingertips.


Creating crawlers has never been this easy: connect to any website and use XPaths to harvest valuable web data.

Example Setup:

import webby

# Crawl the target site and fetch its HTML source
spider = webby.Crawler('http://example.com')

# Parse the fetched source and scrape all 'p' tags via XPath
parse = webby.Parser(spider.source)
parse.scrape("//p")  # Returns all 'p' data tags from the HTML on example.com

# Print out what you just scraped
for value in parse.data.values():
    print(value)
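
As a further illustration, a sketch of harvesting link URLs instead of paragraphs, assuming the same Crawler/Parser API shown above accepts any XPath expression (including attribute selections) and stores results in parse.data:

import webby

# Hypothetical sketch: reuse the Crawler/Parser pair to collect link targets
spider = webby.Crawler('http://example.com')
parse = webby.Parser(spider.source)

# "//a/@href" selects the href attribute of every anchor tag
parse.scrape("//a/@href")

for value in parse.data.values():
    print(value)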
