
This blueprint extracts the title, description and body from HTML, either via XPath rules or by automatic cluster analysis.

Project description


Helpful transmogrifier blueprints to extract text or html out of html content.

This blueprint has a clustering algorithm that tries to automatically extract the content from the HTML template.
This is slow and not always effective, so you will often need to supply your own template extraction rules.
In addition to extracting the title, description and text of items, the blueprint will output
the rules it generates to a logger with the same name as the blueprint.

Setting debug mode on templateauto will give you details about the rules it uses::

    DEBUG:templateauto:'icft.html' discovered rules by clustering on 'http://...'
    text= html //div[@id = "dal_content"]//div[@class = "content"]//p
    title= text //div[@id = "dal_content"]//div[@class = "content"]//h3
    TITLE: ...
    MAIN-10: ...
    MAIN-10: ...
    MAIN-10: ...
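Assuming the blueprint logs via Python's standard library logging module (the text above says the logger shares the blueprint's name, "templateauto" in this example), debug output can be enabled with something like:

```python
import logging

# "templateauto" is the blueprint/logger name from the example above;
# substitute the name of your own blueprint section.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("templateauto").setLevel(logging.DEBUG)
```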


A TAL expression can also be supplied to control use of this blueprint; the default is ''.


This blueprint extracts the title, description and body from HTML,
either via XPath rules, TAL expressions or by automatic cluster analysis.

Rules are in the form of ::

    (title|description|text|anything) = (text|html|optional|tal) Expression

where the expression is either a TAL or an XPath expression.

For example ::

    blueprint = transmogrify.htmlcontentextractor
    title = text //div[@class='body']//h1[1]
    _delete1 = optional //div[@class='body']//a[@class='headerlink']
    _delete2 = optional //div[contains(@class,'admonition-description')]
    description = text //div[contains(@class,'admonition-description')]//p[@class='last']
    text = html //div[@class='body']

Note that for a single template (e.g. template1), ALL of the XPaths need to match; otherwise
that template will be skipped and the next template tried. If a single XPath should not be
necessary for the template to match, use the keyword `optional` or `optionaltext`
instead of `text` or `html` before the XPath.
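The all-XPaths-must-match behaviour can be sketched in plain Python. This is an illustration rather than the blueprint's actual code, and it uses the standard library's ElementTree, which only supports a limited XPath subset, so the paths below are simplified; the rules and HTML are also illustrative:

```python
import xml.etree.ElementTree as ET

# Illustrative rules in (field, format, xpath) form.
RULES = [
    ("title", "text", ".//div[@class='body']/h1"),
    ("description", "optional", ".//p[@class='lead']"),
    ("text", "html", ".//div[@class='body']"),
]

def template_matches(doc, rules):
    """A template matches only if every non-optional XPath finds a node."""
    return all(
        doc.findall(xpath) or fmt in ("optional", "optionaltext")
        for field, fmt, xpath in rules
    )

doc = ET.fromstring(
    "<html><div class='body'><h1>Title</h1><p>Body</p></div></html>")
print(template_matches(doc, RULES))  # True: only the optional rule misses
```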

When an XPath is applied within a single template, the HTML it matches is removed from the page,
so another rule in the same template cannot match the same HTML fragment.

If a content part is not useful (e.g. redundant text, title or description), this is an
effective way to remove that HTML from the content.
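The removal effect can be sketched with the standard library. The changelog below mentions the blueprint uses lxml's drop_tree for this; the snippet only emulates the behaviour, and the HTML and class names are illustrative:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<div><a class='headerlink'>#</a><p>Real content</p></div>")

# Emulate a `_delete`-style rule: find each matching fragment and drop
# it from the tree, so later rules cannot match the same HTML.
for parent in list(doc.iter()):
    for child in parent.findall("a[@class='headerlink']"):
        parent.remove(child)

# The removed fragment is gone from the remaining page.
print(ET.tostring(doc, encoding="unicode"))  # <div><p>Real content</p></div>
```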

To help debug your template rules you can set debug mode.

For more information about XPath see



This blueprint extracts fields from HTML either via XPath rules or by automatic cluster analysis.


You can define a series of rules which will be applied to the '_text'
of the input item. Each rule uses an XPath or TAL expression to
extract HTML or text out of the HTML and adds the result as a key on the output item.

Each option of the blueprint is a rule of the following form ::

    (N-)field = (optional)(text|html|delete|optional) xpath

or ::

    (N-)field = (optional)tal tal-expression

"field" is the attribute that will be set with the results of the xpath

"format" is what to do with the results of the xpath. "optional" means the same
as "delete" but won't cause the group to not match. if the format is delete or optional
then the field name doesn't matter but will still need to be unique

"xpath' is an xpath expression

If the format is 'tal' then instead of an XPath use can use a TAL expression. TAL expression
is evaluated on the item object AFTER the XPath expressions have been applied.

For example ::

    blueprint = transmogrify.htmlcontentextractor
    title = text //div[@class='body']//h1[1]
    _permalink = text //div[@class='body']//a[@class='headerlink']
    _text = html //div[@class='body']
    _label = optional //p[contains(@class,'admonition-title')]
    description = optional //div[contains(@class,'admonition-description')]/p[@class='last']/text()
    _remove_useless_links = optional //div[@id = 'indices-and-tables']
    mimetype = tal string:text/html
    text = tal python:item['_text'].replace('id="blah"','')

You can delete parts of the HTML by extracting content into fields such as _permalink and _label.
These fields won't be used to set any properties on the final content, so this is an effective
way of deleting parts of the HTML.
TAL expressions are evaluated after XPath expressions, so the _text XPath result can be
post-processed to produce text stripped of a certain id.

N is the group number. Groups are run in order of group number. If
any rule doesn't match (unless it is marked optional) then the next group
will be tried instead. Group numbers are optional.
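The group ordering described above can be sketched as follows; this is a simplified model, not the blueprint's real data structures, and the rules and group numbers are illustrative:

```python
def first_matching_group(groups, matches):
    """`groups` maps group number -> rule list; `matches(rule)` reports
    whether a rule's XPath matched. Optional rules never block a group.
    Returns the first fully matching group number, or None."""
    for number in sorted(groups):
        rules = groups[number]
        if all(matches(r) or r["format"] == "optional" for r in rules):
            return number
    return None

groups = {
    2: [{"xpath": "//h2", "format": "text"}],
    1: [{"xpath": "//h1", "format": "text"}],
}
present = {"//h2"}  # pretend only //h2 matched on this page
print(first_matching_group(groups, lambda r: r["xpath"] in present))  # 2
```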

Instead of groups you can also chain several blueprints together. The blueprint
will set '_template' on the item. If another blueprint finds the '_template' key in an item
it will ignore that item.
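Such a chain might look like the following in a transmogrifier pipeline configuration; the section names, XPaths and the source section are illustrative:

```ini
[transmogrifier]
pipeline =
    source
    template1
    template2

[template1]
blueprint = transmogrify.htmlcontentextractor
title = text //div[@id='main']//h1
text = html //div[@id='main']

[template2]
blueprint = transmogrify.htmlcontentextractor
title = text //td[@class='content']//h2
text = html //td[@class='content']
```

If template1 matches an item, it sets '_template' and template2 skips that item; otherwise template2 gets its turn.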

The '_template' field is the remainder of the HTML once all the content selected by the
XPath expressions has been removed.
This blueprint will analyse the HTML and attempt to discover the rules to extract the
title, description and body of the HTML.

If the logger is in DEBUG mode then the XPaths used by the auto extractor will be output
to the logger.


1.0 (2012-04-18)

- include datetime in tal expression. [djay]
- fix bug in drop_tree when removing html [djay]
- better logging [djay]
- better handling of text nodes [djay]
- added iterating ascension of a text node in order to uniquify [aterry]

1.0b5 (2011-06-29)

- include docs
- now can use TAL expressions

1.0b4 (2011-02-06)

- handle '/text()' in xpaths
- new 'optionaltext' rule format

1.0b3 (2010-12-13)

- simpler autogenerated xpath
- better logging

1.0b2 (2010-11-09)

- Put condition on autofinder so can be turned off

1.0b1 (2010-11-03)

- ignore already found items. better debug ["Dylan Jay"]
- skip templates if item already parsed ["Dylan Jay"]
- print automatically found XPaths ["Dylan Jay"]
- make text fields strip tail text ["Vitaliy Podoba"]

1.0dev (2010-03-22)

- split the auto templatefinder out to its own blueprint ["Dylan Jay"]
