Crawling and feeding html content into a transmogrifier pipeline
Crawling - html to import
transmogrify.webcrawler will crawl html to extract pages and files as a source for your transmogrifier pipeline. transmogrify.webcrawler.typerecognitor aids in setting ‘_type’ based on the crawled mimetype. transmogrify.webcrawler.cache helps speed up crawling and reduce memory usage by storing items locally.
These blueprints are designed to work with the funnelweb pipeline but can be used independently.
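For a rough picture of how they fit together, here is a minimal standalone transmogrifier pipeline wiring the crawler into the typerecognitor; the part and section names are illustrative, not something mandated by funnelweb:

[transmogrifier]
pipeline =
    crawler
    typerecognitor

[crawler]
blueprint = transmogrify.webcrawler
url = http://www.whitehouse.gov

[typerecognitor]
blueprint = transmogrify.webcrawler.typerecognitor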
transmogrify.webcrawler

A source blueprint for crawling content from a site or local html files.
Webcrawler imports HTML either from a live website, from a folder on disk, or from a folder on disk with html which used to come from a live website and may still have absolute links referring to that website.
To crawl a live website, supply the crawler with a base http url to start crawling with. This url must be a prefix of all the other urls you want to crawl from the site.
[crawler]
blueprint = transmogrify.webcrawler
url = http://www.whitehouse.gov
max = 50
The max option restricts the crawler to the first 50 pages.
You can also crawl a local directory of html with relative links by just using a file: style url:

[crawler]
blueprint = transmogrify.webcrawler
url = file:///mydirectory
Or, if the local directory contains html saved from a website and might have absolute urls in it, you can set that directory as the cache. The crawler will always look in the cache first:
[crawler]
blueprint = transmogrify.webcrawler
url = http://therealsite.com
cache = mydirectory
The following will not crawl anything larger than 400000 bytes:
[crawler]
blueprint = transmogrify.webcrawler
url = http://www.whitehouse.gov
maxsize = 400000
To skip crawling links that match certain regular expressions:

[crawler]
blueprint = transmogrify.webcrawler
url = http://www.whitehouse.gov
ignore =
    \.mp3
    \.mp4
If webcrawler is having trouble parsing the html of some pages, you can preprocess the html before it is parsed, e.g.

[crawler]
blueprint = transmogrify.webcrawler
patterns = (<script>)[^<]*(</script>)
subs = \1\2
If you'd like to skip processing links with certain mimetypes, you can use the drop:condition option. This TALES expression determines what will be processed further; see http://pypi.python.org/pypi/collective.transmogrifier/#condition-section for details.
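For example, a condition section (named drop here, echoing the funnelweb convention) could discard stylesheets and javascript before later blueprints see them. This is only a sketch: it assumes '_content_info' behaves like a dict of response headers with a lowercase 'content-type' key, which you should verify against your own items.

[drop]
blueprint = collective.transmogrifier.sections.condition
# Items for which the expression is false are dropped from the pipeline.
condition = python: item.get('_content_info', {}).get('content-type', '') not in ('text/css', 'application/x-javascript')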
Options:

url
  The top url to crawl.

ignore
  A list of regular expressions for urls not to crawl.

cache
  A local directory to read crawled items from instead of accessing the site directly.

patterns
  Regular expressions to substitute before the html is parsed, newline separated.

subs
  Text to replace each item in patterns. Must be the same number of lines as patterns. Due to the way buildout handles empty lines, to replace a pattern with nothing (i.e. to remove the pattern), use <EMPTYSTRING> as the substitution (see the combined example after this list).

maxsize
  Don't crawl anything larger than this.

max
  Limit crawling to this number of pages.

start-urls
  A list of urls to initially crawl.

ignore_robots
  If set, the crawler will ignore robots.txt directives and crawl everything.
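Putting several of these options together, a crawler part might look like the sketch below. The values are illustrative only; the start-urls and ignore_robots spellings follow the option list above and the changelog below, and the second subs line shows <EMPTYSTRING> removing the matched pattern entirely:

[crawler]
blueprint = transmogrify.webcrawler
url = http://www.whitehouse.gov
start-urls =
    http://www.whitehouse.gov/issues
ignore_robots = true
max = 500
maxsize = 4000000
ignore =
    \.mp3
    \.mp4
patterns =
    (<script>)[^<]*(</script>)
    <!--.*?-->
subs =
    \1\2
    <EMPTYSTRING>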
WebCrawler will emit items like:

item = dict(_site_url = "Original site_url used",
            _path = "The url crawled without _site_url",
            _content = "The raw content returned by the url",
            _content_info = "Headers returned with content",
            _backlinks = names,
            _sortorder = "An integer representing the order the url was found within the page/site")
transmogrify.webcrawler.cache

A blueprint that saves crawled content into a directory structure.
Its options let you override the field the path is stored in (defaults to '_path') and set the directory to store the cached content in.
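For illustration only, a cache part might look like the sketch below. The option names path-key and output are assumptions based on the descriptions above (this document does not spell them out), so check the blueprint's source before relying on them:

[cachewriter]
blueprint = transmogrify.webcrawler.cache
# Assumed option names, not confirmed by this document:
# 'path-key' selects the field holding the item's path, 'output' is the
# directory the crawled content is written to.
path-key = _path
output = var/crawlcache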
transmogrify.webcrawler.typerecognitor

A blueprint for assigning content type based on the mime-type as given by the webcrawler.
Changelog

setuptools-git wasn't installed so release was missing files [djay]
fix cache check to prevent overwriting cache [djay]
turn redirects into Link objects [djay]
summary stats of which mimetypes were crawled [djay]
fixed bug where redirected pages weren’t getting uploaded [djay]
fixed bugs with storing default pages in cache [djay]
fixed bug with space chars in urls [ivanteoh]
better handling of charset detection [djay]
add start-urls option [djay]
add ignore_robots option [djay]
fixed bug in http-equiv refresh handling [djay]
fixes to disk caching [djay]
better logging [djay]
default maxsize is unlimited [djay]
Provide ability for the reformat function to substitute patterns with empty strings (nothing). Buildout does not support empty lines within configuration, so if a substitution is <EMPTYSTRING> this becomes an empty string. [davidjb]
Provide a logger in the LXMLPage class so the reformat function can succeed [davidjb]
Reformat spacing in webcrawler reformat function [davidjb]
many fixes for importing from local directory w/ many languages [simahawk]
fix UnicodeEncodeError when file name/language is not english [simahawk]
fix iterating over non-sequence [simahawk]
fix missing import for MyStringIO [simahawk]
fix bug in cache check [djay]
only open cache files when needed so don’t run out of handles [djay]
follow http-equiv refresh links [djay]
files use file pointers to reduce memory usage [djay]
cache saves .metadata files to record and play back headers [djay]
improve logging [djay]
fix encoding bug caused by cache [djay]
Fixed bug in cache that caused many links to be ignored in some cases [djay]
Fix documentation up [djay]
Stopped localhost output when no output set [djay]
change site_url to just url. [djay]
rename maxpage to maxsize [djay]
fix file: style urls [djay]
Added cache option to replace base_alias [djay]
fix _origin key set by webcrawler: instead of url it is now path, as expected by further blueprints [Vitaliy Podoba]
add _orig_path to pipeline item to keep the original path for any further purposes [Vitaliy Podoba]
make all urls absolute, taking into account base tags, inside the webcrawler blueprint
renamed package from pretaweb.blueprints to transmogrify.webcrawler
enhanced import view [djay]