absum - Abstractive Summarization for Data Augmentation
Imbalanced class distribution is a common problem in machine learning. Undersampling and oversampling are two methods of addressing this issue. A technique such as SMOTE can be effective for oversampling, although the problem becomes more difficult with multilabel datasets. MLSMOTE has been proposed, but the high-dimensional nature of numerical vectors created from text can make other forms of data augmentation more appealing.
absum is an NLP library that uses abstractive summarization to perform data augmentation, oversampling under-represented classes in datasets. Recent developments in abstractive summarization make this approach well suited to generating realistic data for augmentation.
It uses the latest Hugging Face T5 model by default, but is designed in a modular way to allow you to use any pre-trained or out-of-the-box Transformers model capable of abstractive summarization. absum is format agnostic, expecting only a dataframe containing text and all features. It also uses multiprocessing to achieve optimal performance.
Singular summarization calls are also possible.
Append counts, or the number of rows to add for each feature, are first calculated, subject to a ceiling threshold: if a given feature already has 1000 rows and the ceiling is 100, its append count will be 0.
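That ceiling calculation can be sketched as follows (an illustration only, not absum's actual code; the helper name `append_count` is made up here):

```python
def append_count(feature_rows: int, ceiling: int) -> int:
    # Rows to add for a feature: the shortfall below the ceiling,
    # or 0 if the feature already meets or exceeds it.
    return max(0, ceiling - feature_rows)

print(append_count(1000, 100))  # 0: the feature already exceeds the ceiling
print(append_count(40, 100))    # 60: rows would be appended up to the ceiling
```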
For each feature it then completes a loop from an append index to the append count calculated for that feature. The append index is stored to allow for multiprocessing.
An abstractive summarization is calculated for a specified-size subset of all rows that uniquely have the given feature. If multiprocessing is set, the call to abstractive summarization is stored in a task array later passed to a sub-routine that runs the calls in parallel using the multiprocessing library, vastly reducing runtime.
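The store-then-run-in-parallel pattern can be sketched like this (a simplified stand-in: `fake_summarize` replaces the real summarization call, and this is not absum's actual `run_cpu_tasks_in_parallel` implementation):

```python
from multiprocessing import Process, Queue

def run_cpu_tasks_in_parallel(tasks):
    # Each stored call becomes its own process; start them all,
    # then wait for every one to finish.
    procs = [Process(target=func, args=args) for func, args in tasks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

def fake_summarize(text, results):
    # Stand-in for an abstractive summarization call.
    results.put(text[:10])

if __name__ == "__main__":
    results = Queue()
    chunks = ["first long review text", "second long review text"]
    # Calls are stored first, then executed in parallel.
    tasks = [(fake_summarize, (c, results)) for c in chunks]
    run_cpu_tasks_in_parallel(tasks)
    print(sorted(results.get() for _ in chunks))  # ['first long', 'second lon']
```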
Each summarization is appended to a new dataframe with the respective features one-hot encoded.
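Taken together, the subset selection and the one-hot append might look roughly like this (a toy sketch with made-up column names, not absum's internals):

```python
import pandas as pd

# Toy dataframe in the shape absum expects: a text column plus
# one-hot encoded feature columns.
df = pd.DataFrame({
    "text": ["great match", "new tax bill", "match and bill"],
    "sports": [1, 0, 1],
    "politics": [0, 1, 1],
})

def rows_uniquely_having(df, feature, features):
    # Rows where only `feature` is hot among the one-hot columns.
    others = [f for f in features if f != feature]
    mask = (df[feature] == 1) & (df[others].sum(axis=1) == 0)
    return df[mask]

def append_summarization(df, summary, feature, features):
    # Append a generated summarization as a new row, with the
    # respective feature one-hot encoded.
    row = {"text": summary, **{f: int(f == feature) for f in features}}
    return pd.concat([df, pd.DataFrame([row])], ignore_index=True)

subset = rows_uniquely_having(df, "sports", ["sports", "politics"])
print(list(subset["text"]))  # ['great match']
```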
Install via pip:

```bash
pip install absum
```

Or install from source:

```bash
git clone https://github.com/aaronbriel/absum.git
cd absum
pip install [--editable] .
```

Or install directly from the repository:

```bash
pip install git+https://github.com/aaronbriel/absum.git
```
absum expects a DataFrame containing a text column which defaults to 'text', and the remaining columns representing one-hot encoded features. If additional columns are present that you do not wish to be considered, you have the option to pass in specific one-hot encoded features as a comma-separated string to the 'features' parameter. All available parameters are detailed in the Parameters section below.
```python
import pandas as pd
from absum import Augmentor

csv = 'path_to_csv'
df = pd.read_csv(csv)

augmentor = Augmentor(df, text_column='review_text')
df_augmented = augmentor.abs_sum_augment()

# Store resulting dataframe as a csv
df_augmented.to_csv(csv.replace('.csv', '-augmented.csv'), encoding='utf-8', index=False)
```
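Internally, the comma-separated 'features' string described above presumably needs splitting into column names; a minimal sketch of that parsing (not absum's actual code, and the column names are made up):

```python
def parse_features(features):
    # Split a comma-separated features string into one-hot column
    # names, tolerating stray whitespace and empty entries.
    return [f.strip() for f in features.split(",") if f.strip()]

print(parse_features("sports, politics,tech"))  # ['sports', 'politics', 'tech']
```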
Running singular summarization on any chunk of text is simple:
```python
text = chunk_of_text_to_summarize
augmentor = Augmentor(min_length=100, max_length=200)
output = augmentor.get_abstractive_summarization(text)
```
NOTE: When running any summarizations you may see the following warning message, which can be safely ignored: "Token indices sequence length is longer than the specified maximum sequence length for this model (2987 > 512). Running this sequence through the model will result in indexing errors". For more information refer to this issue.
| Parameter | Description |
|-----------|-------------|
| df | Dataframe containing text and one-hot encoded features. |
| text_column | Column in df containing text. |
| features | Comma-separated string of features to possibly augment data for. |
| device | Torch device to run on: cuda if available, otherwise cpu. |
| model | Model used for abstractive summarization. |
| tokenizer | Tokenizer used for abstractive summarization. |
| return_tensors | Can be set to 'tf', 'pt' or 'np' to return TensorFlow tf.constant, PyTorch torch.Tensor or NumPy np.ndarray respectively, instead of a list of python integers. |
| num_beams | Number of beams for beam search. Must be between 1 and infinity. 1 means no beam search. Defaults to 1. |
| no_repeat_ngram_size | If set to an int > 0, all ngrams of that size can only occur once. |
| min_length | The minimum length of the sequence to be generated. Between 0 and infinity. Defaults to 10. |
| max_length | The maximum length of the sequence to be generated. Between min_length and infinity. Defaults to 50. |
| early_stopping | If set to True, beam search is stopped when at least num_beams sentences are finished per batch. Defaults to False as defined in configuration_utils.PretrainedConfig. |
| skip_special_tokens | If set to True, special tokens (self.all_special_tokens) are not decoded. Defaults to False. |
| num_samples | Number of samples to pull from the dataframe with a specific feature, used in generating a new sample with abstractive summarization. |
| threshold | Maximum ceiling for each feature, normally the under-sample max. |
| multiproc | If set, stores calls to abstractive summarization in an array which is then passed to run_cpu_tasks_in_parallel, allowing increased performance through multiprocessing. |
| debug | If set, prints generated summarizations. |