
BPE Summarizer


This summarizer leverages Byte Pair Encoding (BPE) tokenization and the BART vocabulary to filter text by semantic meaningfulness.

BPE text representation is a subword-level approach to tokenization that aims to efficiently reuse parts of words while retaining semantic value.

The algorithm is driven by the frequency of n-gram pairs: more frequent pairs are merged and represented by larger tokens.
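For illustration, the default BART tokenizer makes this visible (the sample words below are arbitrary):

from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
# Frequent words tend to survive as a single large token with a low id,
# while rarer words are split into several smaller subword pieces.
for word in ["the", "information", "overparameterization"]:
    pieces = tokenizer.tokenize(word)
    print(word, "->", pieces, tokenizer.convert_tokens_to_ids(pieces))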

This project explores the assumption that token size correlates strongly with semantic meaningfulness. The summarization approach surfaces the most meaningful sentences by comparing token values and retaining the sentences from the original text that contain meaningful tokens within a specified percentile.
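A minimal sketch of that idea, using token length as a crude proxy for meaningfulness (a hypothetical re-implementation, not the package's actual code):

import numpy as np
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")

def sketch_summarize(document, percentile=99):
    # Split naively on periods; the real package handles punctuation generally.
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    # Score every token in the document by its subword length.
    lengths = [len(t) for t in tokenizer.tokenize(document)]
    cutoff = np.percentile(lengths, percentile)
    # Keep each sentence containing at least one token past the cutoff.
    kept = [s + "." for s in sentences
            if any(len(t) >= cutoff for t in tokenizer.tokenize(s))]
    return " ".join(kept)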

Install

pip install bpe-summarizer

Usage

from bpe_summarizer import bpe_summarize

bpe_summarize(article, percentile=99)

Parameters

document: A text blob with sentences delineated by punctuation. Default: None. Type: String.

percentile: Sentences that include tokens in the top kth percentile remain after summarization. Default: 99. Type: Float.

tokenizer: A Hugging Face PreTrainedTokenizer instance that relies on byte-pair encoding. Default: BartTokenizer.from_pretrained("facebook/bart-large"). Type: transformers.PreTrainedTokenizer.

apply_intra_sentence: If True, summarization is applied at both the document level and the sentence level. Default: False. Type: Boolean.

intra_sentence_percentile: When apply_intra_sentence is True, this percentile is applied to individual sentences. Default: 50*. Type: Float.
* Note: intra_sentence_percentile is ignored when it falls below the percentile score of the mean token value; in that case, the percentile score of the mean is used instead.
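Putting the parameters together, a fuller call might look like this (keyword names follow the table above; article stands for any punctuated text blob):

from transformers import BartTokenizer
from bpe_summarizer import bpe_summarize

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
summary = bpe_summarize(
    article,                    # a text blob with punctuated sentences
    percentile=97,              # keep sentences with tokens in the top 3%
    tokenizer=tokenizer,        # any BPE-based PreTrainedTokenizer
    apply_intra_sentence=True,  # also summarize within kept sentences
    intra_sentence_percentile=50,
)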

Examples

Human Summary

Building Deep Dependency Structures Using A Wide-Coverage CCG Parser

This paper describes a wide-coverage statistical parser that uses Combinatory Categorial Grammar (CCG) to derive dependency structures.

The parser differs from most existing wide-coverage treebank parsers in capturing the long-range dependencies inherent in constructions such as coordination, extraction, raising and control, as well as the standard local predicate-argument dependencies.

A set of dependency structures used for training and testing the parser is obtained from a treebank of CCG normal-form derivations, which have been derived (semi-) automatically from the Penn Treebank.

The parser correctly recovers over 80% of labelled dependencies, and around 90% of unlabelled dependencies.

We provide examples showing how heads can fill dependency slots during a derivation, and how long-range dependencies can be recovered through unification of co-indexed head variables.

We define predicate argument structure for CCG in terms of the dependencies that hold between words with lexical functor categories and their arguments.

BPE Summary

Building Deep Dependency Structures Using A Wide-Coverage CCG Parser

This paper describes a wide-coverage statistical parser that uses Combinatory Categorial Grammar (CCG) to derive dependency structures.

The parser differs from most existing wide-coverage treebank parsers in capturing the long-range dependencies inherent in constructions such as coordination, extraction, raising and control, as well as the standard local predicate-argument dependencies.

A set of dependency structures used for training and testing the parser is obtained from a treebank of CCG normal-form derivations, which have been derived (semi-) automatically from the Penn Treebank. The parser correctly recovers over 80% of labelled dependencies, and around 90% of unlabelled dependencies. However, the dependencies are typically derived from a context-free phrase structure.

Evaluation

To evaluate the quality of the summarization, we apply a semantic similarity metric to compare auto-summarized examples with human summaries from the scisummnet dataset. Text was represented using sentence-level embeddings. Figure 1 charts the results from the BPE Summarizer against widely used summarization techniques. It performed competitively, and over 100 samples it completed summarization in one one-hundredth of a second, compared to 55 seconds* for the competing technique.

Fig 1. Evaluation of the BPE Summarizer side by side with a widely used summarizer.

*Performance was measured on a CPU; the competing technique was stripped down to its summarization component before timing.
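The README does not name the embedding model or similarity function; as one hypothetical sketch, sentence-level embeddings from the sentence-transformers library could score a machine summary against its human reference with cosine similarity:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def summary_similarity(human_summary, machine_summary):
    # Embed both summaries and compare with cosine similarity.
    human_emb, machine_emb = model.encode([human_summary, machine_summary])
    return util.cos_sim(human_emb, machine_emb).item()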

