
VILA🌴
Incorporating VIsual LAyout Structures for Scientific Text Classification

Motivation

Scientific papers typically organize their contents into visual groups such as text blocks or lines, and the text within each group usually shares the same semantic category. We explore different approaches for injecting this group structure into text classifiers, and build models that improve the accuracy or efficiency of scientific text classification.


Installation

After cloning the GitHub repo, you can either install the vila library itself or just its dependencies:

git clone git@github.com:allenai/VILA.git
cd VILA 
conda create -n vila python=3.6
pip install -e . # Install the `vila` library 
pip install -r requirements.txt # Only install the dependencies 

We tested the code and trained the models using Python≥3.6, PyTorch==1.7.1, and transformers==4.4.2.

Usage

Run Inference/Prediction

Model Weights

We've uploaded a collection of pre-trained models to the Hugging Face Model Hub:

Weights Name                                       Model     Dataset   F1     Latency
layoutlm-base-uncased                              baseline  docbank   91.06  52.56
allenai/ivila-block-layoutlm-finetuned-docbank     ivila     docbank   92.00  -
allenai/hvila-block-layoutlm-finetuned-docbank     hvila     docbank   87.78  16.37
allenai/hvila-row-layoutlm-finetuned-docbank       hvila     docbank   91.27  28.07
layoutlm-base-uncased                              baseline  grotoap2  92.34  52.56
allenai/ivila-block-layoutlm-finetuned-grotoap2    ivila     grotoap2  93.38  -
allenai/hvila-block-layoutlm-finetuned-grotoap2    hvila     grotoap2  92.37  16.37
allenai/hvila-row-layoutlm-finetuned-grotoap2      hvila     grotoap2  91.65  28.07
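
These checkpoints can be pulled directly by name. Below is a minimal loading sketch, assuming the baseline and I-VILA checkpoints load as standard token-classification models through the AutoModel API (see the Note section below); the H-VILA checkpoints use a hierarchical architecture and are instead loaded through the vila predictors shown later:

    from transformers import AutoTokenizer, AutoModelForTokenClassification

    # A sketch, not a confirmed recipe: load an I-VILA checkpoint as a
    # standard token-classification model (assumes transformers>=4.5).
    name = "allenai/ivila-block-layoutlm-finetuned-docbank"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForTokenClassification.from_pretrained(name)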

MMDA VILA Example

MMDA is our newly designed toolkit that provides flexible support for PDF document analysis. Please check the VILA predictor example here for more details.

PDFPredictors in VILA

In the VILA repo, we also implement a set of PDFPredictors. Please refer to the example code below:

import layoutparser as lp # For visualization 

from vila.pdftools.pdf_extractor import PDFExtractor
from vila.predictors import HierarchicalPDFPredictor
# Choose from SimplePDFPredictor,
# LayoutIndicatorPDFPredictor, 
# and HierarchicalPDFPredictor

pdf_extractor = PDFExtractor("pdfplumber")
page_tokens, page_images = pdf_extractor.load_tokens_and_image("path-to-your.pdf")

vision_model = lp.EfficientDetLayoutModel("lp://PubLayNet") 
pdf_predictor = HierarchicalPDFPredictor.from_pretrained("allenai/hvila-row-layoutlm-finetuned-docbank")

for idx, page_token in enumerate(page_tokens):
    blocks = vision_model.detect(page_images[idx])
    page_token.annotate(blocks=blocks)
    pdf_data = page_token.to_pagedata().to_dict()
    predicted_tokens = pdf_predictor.predict(pdf_data)
    lp.draw_box(page_images[idx], predicted_tokens, box_width=0, box_alpha=0.25)

Training

Directory Structure

VILA
├─ checkpoints  # For all trained weights 
│  └─ grotoap2  # For each dataset                                 
│     ├─ baseline  # For the experiment type, e.g., baseline, ivila, hvila, ...
│     │  └─ bert-base-uncased  # For the used base model, e.g., bert-base-uncased. 
│     │     ├─ checkpoint-199999                                
│     │     ├─ checkpoint-299999                                 
│     │     ├─ all_results.json                                       
│     │     └─ pytorch_model.bin                         
│     └─ ivila-BLK-row                           
│        └─ microsoft-layoutlm-base-uncased 
└─ data                                       
   ├─ docbank
   ├─ ...
   └─ grotoap2                                 

Note:

  • We will provide the download links to the datasets very soon.

Training Scripts

All training scripts are in the ./scripts folder.

  1. Train the baseline models

    cd scripts
    # bash train_baseline.sh [dataset-name] [base-model-name]
    bash train_baseline.sh grotoap2 bert-base-uncased
    bash train_baseline.sh docbank microsoft/layoutlm-base-uncased
    
  2. Train the I-VILA models

    cd scripts
    # bash train_ivila.sh [dataset-name] [how-to-obtain-layout-indicators] [used-special-token] [base-model-name]
    bash train_ivila.sh grotoap2 row BLK microsoft/layoutlm-base-uncased 
      # Row is an alias for textline 
    bash train_ivila.sh docbank block SEP bert-base-uncased
      # We can also use the default special tokens like SEP 
    bash train_ivila.sh s2-vl sentence BLK roberta-base 
      # We can also extract sentence breaks using spaCy and use them as
      # indicators (see the toy indicator sketch after this list).
    
  3. Train the H-VILA models

    cd tools
    python create_hvila_model_base_weights.py 
    
    cd ../scripts
    # bash train_hvila.sh \
    #  [dataset-name] \
    #  [H-VILA-names] \
    #  [Group-Encoder-Output-Aggregation-Function] \
    #  [How-to-Obtain-Bounding-Box] \
    #  [Use-textline-or-block-as-the-group]
    
    bash train_hvila.sh \
      grotoap2 \
      weak-strong-layoutlm \
      average \
      first \
      row 
    

    In the above example, we use:

    1. the average of the group encoder outputs over all tokens as the group representation,
    2. the bounding box of the first token as the group's bounding box, and
    3. textlines (rows) to construct the groups.
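
To make the I-VILA setup in step 2 concrete, here is a toy illustration of how a layout indicator token could mark visual-group boundaries in the input sequence; the exact placement is an assumption for illustration, and the real preprocessing lives in the vila library:

    # Toy sketch: inject a [BLK] indicator at the start of each visual group.
    # (Illustrative only; see the vila preprocessing code for the real logic.)
    groups = [["Deep", "Learning", "Models"], ["We", "study", "layouts"]]
    flattened = []
    for group in groups:
        flattened.append("[BLK]")  # layout indicator at the group boundary
        flattened.extend(group)
    # -> ['[BLK]', 'Deep', 'Learning', 'Models', '[BLK]', 'We', 'study', 'layouts']

Similarly, the two H-VILA aggregation choices used above can be sketched in a few lines of PyTorch; shapes and values here are made up for illustration:

    import torch

    token_reprs = torch.randn(5, 768)                    # group-encoder outputs for 5 tokens
    token_bboxes = torch.tensor([[10, 20, 50, 30]] * 5)  # one bounding box per token

    group_repr = token_reprs.mean(dim=0)  # "average" aggregation function
    group_bbox = token_bboxes[0]          # "first" bounding-box strategy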

Evaluation Toolkit

The evaluation toolkit can generate a detailed report of prediction accuracy (macro F1 scores) and visual layout consistency (group entropy) for the test_predictions.csv prediction files produced by the training scripts.
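
As a point of reference, the macro F1 part of the report boils down to something like the sketch below; the column names in test_predictions.csv are assumptions for illustration, not the toolkit's actual schema:

    import pandas as pd
    from sklearn.metrics import f1_score

    # Hypothetical sketch; "labels" and "predictions" are assumed column names.
    df = pd.read_csv("test_predictions.csv")
    print(f1_score(df["labels"], df["predictions"], average="macro"))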

  1. Generate reports for a group of experiments on a specific dataset

    cd tools
    python generate-eval.py --dataset_name grotoap2 --experiment_name baseline
    # It will create a _reports folder in ../checkpoints/grotoap2/baseline and store
    # the scores in the report.csv file.

  2. Generate reports for all experiments on a specific dataset

    cd tools
    python generate-eval.py --dataset_name grotoap2
    # It will create reports for all experiments in the ../checkpoints/grotoap2/ folder.
    # It will also aggregate all the results and save them in ../checkpoints/grotoap2/_reports.

  3. Generate reports with per-class accuracy

    cd tools
    python generate-eval.py --dataset_name grotoap2 --experiment_name baseline --store_per_class
    # In addition to the report.csv file, it will also generate a report_per_class.csv
    # table in the corresponding folder.

Note: the evaluation toolkit might take a long time to run, as calculating the group entropy is slow.
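
For intuition, group entropy can be read as the average Shannon entropy of the predicted labels inside each visual group, so that perfectly layout-consistent predictions score 0. This is an illustrative sketch of the metric, not the toolkit's exact implementation:

    import math
    from collections import Counter

    def group_entropy(labels_per_group):
        """Average Shannon entropy of predicted labels within each visual group."""
        entropies = []
        for labels in labels_per_group:
            counts = Counter(labels)
            total = len(labels)
            entropies.append(
                -sum((c / total) * math.log2(c / total) for c in counts.values())
            )
        return sum(entropies) / len(entropies)

    # A fully consistent group scores 0.0; mixed groups score higher.
    print(group_entropy([["title", "title"], ["body", "body", "reference"]]))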

Note

In order to support the AutoModel API, we changed the default transformers requirement to >=4.5 instead of 4.4.2. If you're reproducing the paper results, you may want to downgrade transformers to 4.4.2 to obtain matching numbers.
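
For example:

    pip install transformers==4.4.2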

Citation

@article{Shen2021IncorporatingVL,
  title={Incorporating Visual Layout Structures for Scientific Text Classification},
  author={Zejiang Shen and Kyle Lo and Lucy Lu Wang and Bailey Kuehl and Daniel S. Weld and Doug Downey},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.00676},
  url={https://arxiv.org/abs/2106.00676}
}
