
Web Crawler, HTML Parser, and Data Visualization

Project description

Note: make sure you have installed libxslt1 and libxml2 for lxml to work properly on Linux systems (Ubuntu at least).
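If you want a quick sanity check that lxml picked up those libraries, you can print the stock lxml version attributes (this is plain lxml, nothing webby-specific):

# Verify that lxml was built against libxml2/libxslt
from lxml import etree
print etree.LXML_VERSION, etree.LIBXML_VERSION, etree.LIBXSLT_VERSION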


Documentation is under construction right now!

If you have any questions, you can email me at gitwebby@gmail.com.

Now receives a pylint rating of 9.62/10.


Webby quickly brings web crawling and XML/HTML parsing to your fingertips.


Creating crawlers has never been this easy: connect to any website and use XPath expressions to harvest valuable web data.

Example Setup:

import webby

# Crawl the target site; spider.source holds the fetched page source
spider = webby.Crawler('http://example.com')

# Parse the source and scrape every 'p' tag from example.com's HTML
parse = webby.Parser(spider.source)
parse.scrape("//p")

# Print out what you just scraped
for value in parse.data.itervalues():
    print value
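The same pattern works for other XPath expressions as well. The sketch below is a hypothetical variation that assumes scrape() accepts arbitrary XPaths, as shown above; "//a/@href" would pull every link target from the page:

import webby

# Hypothetical example reusing the API above; "//a/@href" selects link targets
spider = webby.Crawler('http://example.com')
parse = webby.Parser(spider.source)
parse.scrape("//a/@href")

for value in parse.data.itervalues():
    print value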



Release history

1.3.0
1.2.1
1.2.0
1.1.1
1.1.0
1.0.6
1.0.0
