HTML2TEI

Map the HTML schema of portals to valid TEI XML with the tags and structures used in them using small manual portal-specific configurations.

The portal-specific configuration is created manually with the help of three tools that aid in evaluating the inventory of tags and structures used in the HTML code. This manual evaluation enables the creation of valid TEI XML from the HTML source, keeping all desired (text) schema elements in a fine-grained way, carefully supervised by the user. In addition to converting the article body, the metadata can be converted to the Schema.org standard.

The conversion process is automatic and scales well to large portals that share the same schema.

Requirements

  • Python 3.6+
  • For Newspaper3k, the following packages must be installed before installing this program: python3-dev libxml2-dev libxslt-dev libjpeg-dev zlib1g-dev libpng12-dev

Install

pip

pip3 install html2tei

Manual

  1. git clone https://github.com/ELTE-DH/HTML2TEI.git
  2. Run python3 setup.py install (you may need to prefix this command with sudo)

Usage

This program is designed to be used with WebArticleCurator. The article WARC files should be placed in a directory (warc-dir), and a configuration YAML must map the WARC files to the specific portal configurations. The program can be run from the command line or through the Python API; see the details below.
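As an illustration, such an input configuration could look like the following. This is a hypothetical sketch: the actual key names and layout are defined by the program and the portal configs you create, so consult the HTML2TEI documentation before writing one.

```yaml
# Hypothetical mapping of WARC files to portal names
# (the layout shown here is illustrative only)
example_portal.warc.gz: example_portal
other_portal.warc.gz: other_portal
```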

Modes

There are five modes of the program:

  • Create HTML Content Tree (content-tree): Read all the articles to summarize the structures that occur in the portal schema. The accumulated information is written out as a nested YAML dictionary representing the tree structure (for manual inspection)
  • Tag Inventory Maker (inventory-maker): Create the tag tables from the articles and their gathered information (this serves as the basis for the manual renaming configuration)
  • Tag Bigrams Maker (bigram-maker): Create the bigram tag table from the articles and their gathered information (this table is an optional add-on that can be used to map the schema)
  • Portal Article Cleaner (cleaner): Create the TEI XMLs from the site-specific configuration and from the tables supplemented with the new tag names
  • Diff Tag Tables (diff-tables): Compare and update the generated (and modified) tables when new data arrive for the same portal
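To illustrate the idea behind the content-tree mode, here is a minimal, self-contained sketch (not the actual HTML2TEI implementation) that accumulates the nested tag structure of several HTML documents into one dictionary tree, which could then be dumped as YAML for inspection:

```python
from html.parser import HTMLParser


class TagTreeAccumulator(HTMLParser):
    """Accumulate the nested tag structure of HTML documents into one dict tree."""

    def __init__(self, tree):
        super().__init__()
        self.stack = [tree]  # the shared tree is the root of the stack

    def handle_starttag(self, tag, attrs):
        # Descend into (or create) the node for this tag under the current parent
        node = self.stack[-1].setdefault(tag, {})
        self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()


tree = {}
for doc in ('<div><p><b>x</b></p></div>', '<div><p><i>y</i></p></div>'):
    TagTreeAccumulator(tree).feed(doc)
print(tree)  # {'div': {'p': {'b': {}, 'i': {}}}}
```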

Command Line Arguments

Common Arguments

  • -i, --input-config: WARC filename to portal name mapping in YAML
  • -c, --configs-dir: The directory for portal-specific configs
  • -l, --log-dir: The directory to put logs into
  • -w, --warc-dir: The directory to read WARCs from
  • -o, --output-dir: The directory to put output files into
  • -L, --log-level: Log verbosity level (default: INFO)

The files and directories must exist. All arguments except --log-level are mandatory for the following four modes.

HTML Content Tree (content-tree)

  • -t, --task-name: The name of the task to appear in the logs (default: HTML Content Tree)

Tag Inventory Maker (inventory-maker)

  • -t, --task-name: The name of the task to appear in the logs (default: Tag Inventory Maker)
  • -r, --recursive: Consider all descendants rather than just direct ones (default: True)

Tag Bigrams Maker (bigram-maker)

  • -t, --task-name: The name of the task to appear in the logs (default: Tag Bigrams Maker)
  • -r, --recursive: Consider all descendants rather than just direct ones (default: True)

Portal Article Cleaner (cleaner)

  • -m, --write-out-mode: The schema removal tool to use: ELTEDH, JusText or Newspaper3k (default: eltedh)
  • -t, --task-name: The name of the task to appear in the logs (default: Portal Article Cleaner)
  • -O, --output-debug: Use normal output generation (validate, hash and compress, with UUID file names), or print into the output directory without validation using human-friendly names (default: False, i.e. normal output)
  • -p, --run-parallel: Run the processing in parallel or sequentially (default: True, parallel)
  • -d, --with-specific-dicts: Load portal-specific dictionaries (tables) (default: True)
  • -b, --with-specific-base-tei: Load portal-specific base TEI XML (default: True)

Diff Tag Tables (diff-tables)

  • --diff-dir: The directory that contains the tables to be compared
  • --old-filename: The filename for the old table
  • --new-filename: The filename for the new table
  • --merge-filename: The filename for the merged table
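The merge logic of diff-tables can be pictured roughly as follows. This is a simplified sketch that assumes tables are dictionaries keyed by frozen tag signatures; the real tool works on the generated table files:

```python
def merge_tag_tables(old_table, new_table):
    # Keep manual annotations from the old table and add
    # rows for tag signatures that only appear in the new data.
    merged = dict(old_table)
    for tag_signature, default_row in new_table.items():
        if tag_signature not in merged:
            merged[tag_signature] = default_row
    return merged


old = {'div@class=body': 'p'}                            # already renamed manually
new = {'div@class=body': None, 'span@class=date': None}  # freshly gathered
print(merge_tag_tables(old, new))
# {'div@class=body': 'p', 'span@class=date': None}
```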

Python API

Helper functions for the Configs

  • parse_date(date_raw, date_format, locale='hu_HU.UTF-8'): Parse a date according to the parameters (locale and date format)
  • BASIC_LINK_ATTRS: A basic list of HTML tags that contain attributes to be preserved. It can be overridden to match the tag set of the given portal
  • decompose_listed_subtrees_and_mark_media_descendants(article_dec, decomp, media_list): Mark the lower levels of the media blocks and delete the tags marked for deletion
  • tei_defaultdict(mandatory_keys=('sch:url', 'sch:name'), missing_value=None): Create a defaultdict preinitialized with the mandatory Schema.org keys set to the default value
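For example, the behaviour described for tei_defaultdict can be sketched with the standard library like this (an illustrative reimplementation, not the packaged function):

```python
from collections import defaultdict


def tei_defaultdict_sketch(mandatory_keys=('sch:url', 'sch:name'), missing_value=None):
    # A defaultdict where absent keys yield missing_value and the
    # mandatory Schema.org keys are preinitialized to missing_value.
    d = defaultdict(lambda: missing_value)
    for key in mandatory_keys:
        d[key] = missing_value
    return d


meta = tei_defaultdict_sketch()
meta['sch:name'] = 'Example article'
print(meta['sch:url'])  # None (preinitialized default)
print(sorted(meta))     # ['sch:name', 'sch:url']
```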

For the Main Python API

  • run_main(warc_filename, configs_dir, log_dir, warc_dir, output_dir, init_portal_fun, run_params=None, logfile_level='INFO', console_level='INFO'): The main runner function
  • WRITE_OUT_MODES: A dictionary for adding custom write-out modes when needed
  • diff_all_tag_table(diff_dir, old_filename, new_filename, out_filename): The main function to update tables
  • tag_bigrams_init_portal(log_dir, output_dir, run_params, portal_name, tei_logger, warc_level_params, rest_config_params): The portal initiator function, as called from the CLI
  • content_tree_init_portal(log_dir, output_dir, run_params, portal_name, tei_logger, warc_level_params, rest_config_params): The portal initiator function, as called from the CLI
  • tag_inventory_init_portal(log_dir, output_dir, run_params, portal_name, tei_logger, warc_level_params, rest_config_params): The portal initiator function, as called from the CLI
  • portal_article_cleaner_init_portal(log_dir, output_dir, run_params, portal_name, tei_logger, warc_level_params, rest_config_params): The portal initiator function, as called from the CLI

For the Low-level API: Defining Custom Modes

  • init_output_writer(output_dir, portal_name, output_debug, tei_logger): Initialise the class for writing the output (into a zip file or a directory)
  • create_new_tag_with_string(beauty_xml, tag_string, tag_name, append_to=None): Helper function to create a new XML tag containing a string. If append_to is provided, the newly created tag is appended to that parent tag
  • immediate_text(tag): Count the number of words (non-whitespace text) immediately under the parameter tag, excluding comments
  • to_friendly(ch, excluded_tags_fun): Convert a tag name and its sorted attributes to a string for later use (e.g. tag_freezer in the tables)
  • run_single_process(warc_filename, file_names_and_modes, main_function, sub_functions, after_function, after_params): Read a WARC file and sequentially process all articles in it with main_function (multi-page articles are handled as one entry), yielding each result after it is filtered through after_function
  • run_multiple_process(warc_filename, file_names_and_modes, main_function, sub_functions, after_function, after_params): Read a WARC file and process all articles in it with main_function in parallel, preserving ordering (multi-page articles are handled as one entry), yielding each result after it is filtered through after_function
  • dummy_fun(*_): A function that always returns None, regardless of how many arguments it is given
  • process_article: A generic article-processing skeleton used by multiple targets
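The control flow of run_single_process can be illustrated with a toy generator. This is a schematic sketch only; the real function deals with WARC records, sub-functions, and multi-page articles:

```python
def run_single_process_sketch(records, main_function, after_function):
    # Process records one by one with main_function and yield each
    # result after passing it through the after_function filter.
    for record in records:
        yield after_function(main_function(record))


results = list(run_single_process_sketch(
    records=[1, 2, 3],
    main_function=lambda x: x * 2,   # stands in for per-article processing
    after_function=lambda r: r + 1,  # stands in for post-processing
))
print(results)  # [3, 5, 7]
```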

Licence

This project is licensed under the terms of the GNU LGPL 3.0 license.

References

The DOI of the code is: TODO

If you use this program, please cite the following paper:

The ELTE.DH Pilot Corpus – Creating a Handcrafted Gigaword Web Corpus with Metadata. Balázs Indig, Árpád Knap, Zsófia Sárközi-Lindner, Mária Timári, Gábor Palkó. In Proceedings of the 12th Web as Corpus Workshop (WAC XII), pages 33–41, Marseille, France, 2020.

@inproceedings{indig-etal-2020-elte,
    title = "The {ELTE}.{DH} Pilot Corpus {--} Creating a Handcrafted {G}igaword Web Corpus with Metadata",
    author = {Indig, Bal{\'a}zs  and
      Knap, {\'A}rp{\'a}d  and
      S{\'a}rk{\"o}zi-Lindner, Zs{\'o}fia  and
      Tim{\'a}ri, M{\'a}ria  and
      Palk{\'o}, G{\'a}bor},
    booktitle = "Proceedings of the 12th Web as Corpus Workshop",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://www.aclweb.org/anthology/2020.wac-1.5",
    pages = "33--41",
    abstract = "In this article, we present the method we used to create a middle-sized corpus using
     targeted web crawling. Our corpus contains news portal articles along with their metadata, that can be useful
     for diverse audiences, ranging from digital humanists to NLP users. The method presented in this paper applies
     rule-based components that allow the curation of the text and the metadata content. The curated data can thereon
     serve as a reference for various tasks and measurements. We designed our workflow to encourage modification and
     customisation. Our concept can also be applied to other genres of portals by using the discovered patterns
     in the architecture of the portals. We found that for a systematic creation or extension of a similar corpus,
     our method provides superior accuracy and ease of use compared to The Wayback Machine, while requiring minimal
     manpower and computational resources. Reproducing the corpus is possible if changes are introduced
     to the text-extraction process. The standard TEI format and Schema.org encoded metadata is used
     for the output format, but we stress that placing the corpus in a digital repository system is recommended
     in order to be able to define semantic relations between the segments and to add rich annotation.",
    language = "English",
    ISBN = "979-10-95546-68-9",
}

