Language-model-powered proofreader for correcting contextual errors in natural language.

## lmproof - Language Model Proof Reader

A library for proof-reading corrections of grammatical errors, spelling errors, confused-word errors, and other errors using pre-trained language models.

Currently, we use the language-model-based approach described by Christopher Bryant and Ted Briscoe (2018), with a few changes.

Unlike many approaches to GEC, this approach does NOT require annotated training data and mainly depends on a monolingual language model. The program works by iteratively comparing certain words in a text against alternative candidates and applying a correction if one of these candidates is more probable than the original word. These correction candidates are variously generated by a word inflection library or are otherwise defined manually. Currently, this system only corrects:

• Non-words (e.g. freind and informations)
• Morphology (e.g. eat, ate, eaten, eating, etc.)
• Common determiners and prepositions (e.g. the, a, in, at, to, etc.)
• Commonly confused words (e.g. bear/bare, lose/loose, etc.)
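The iterative scoring loop described above can be sketched as follows. This is an illustrative sketch, not lmproof's actual code: the `sentence_score` function stands in for a real language model's log-probability, and the hand-written unigram table, the `correct` function, and the `threshold` parameter are all hypothetical.

```python
# Hypothetical toy scorer: a unigram log-probability table stands in
# for the real language model used by lmproof.
UNIGRAM_LOGPROB = {
    "i": -2.0, "want": -3.0, "to": -1.5, "eat": -3.5, "ate": -4.0,
    "eaten": -5.0, "an": -2.5, "a": -2.0, "apple": -4.5,
}

def sentence_score(tokens):
    """Sum of per-token log-probabilities (unseen tokens get a floor)."""
    return sum(UNIGRAM_LOGPROB.get(t, -10.0) for t in tokens)

def correct(tokens, candidates, threshold=0.1):
    """Greedily replace a token when some candidate scores higher than
    the original by at least `threshold` (to avoid spurious edits)."""
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        best, best_score = tok, sentence_score(tokens)
        for cand in candidates.get(tok, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            score = sentence_score(trial)
            if score > best_score + threshold:
                best, best_score = cand, score
        tokens[i] = best
    return tokens

# Morphology candidates for "eaten" are its other inflections.
cands = {"eaten": ["eat", "ate", "eating"]}
print(correct(["i", "want", "to", "eaten", "an", "apple"], cands))
# → ['i', 'want', 'to', 'eat', 'an', 'apple']
```

The threshold mirrors the idea in Bryant and Briscoe (2018): a candidate must beat the original by a margin, not merely tie it, before a correction is applied.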


This work builds upon https://github.com/chrisjbryant/lmgec-lite/

## Components

### Inflection generators

• LemmInflect is used to lemmatize and generate inflections for candidate proposals to the language model.
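As a rough sketch of what the inflection step produces: given a word, propose all other inflections sharing its lemma. The hand-written tables below are stand-ins so the example is self-contained; the real library call would be something like LemmInflect's `getAllInflections`.

```python
# Toy lemma/inflection tables standing in for LemmInflect lookups.
TOY_INFLECTIONS = {
    "eat": {"VB": "eat", "VBD": "ate", "VBG": "eating",
            "VBN": "eaten", "VBZ": "eats"},
}
TOY_LEMMAS = {"eat": "eat", "ate": "eat", "eating": "eat",
              "eaten": "eat", "eats": "eat"}

def morphology_candidates(word):
    """All inflections sharing the word's lemma, minus the word itself."""
    lemma = TOY_LEMMAS.get(word)
    if lemma is None:
        return []
    forms = set(TOY_INFLECTIONS[lemma].values())
    forms.discard(word)
    return sorted(forms)

print(morphology_candidates("ate"))  # → ['eat', 'eaten', 'eating', 'eats']
```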

### Spell Checker

• symspellpy is used for obtaining spell check candidates.
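To illustrate what spell-check candidates look like, here is a brute-force edit-distance-1 generator over a tiny stand-in dictionary. This is not symspellpy's implementation (symspellpy uses a much faster deletes-only indexing scheme); the dictionary and function names are hypothetical.

```python
import string

# Tiny stand-in dictionary; symspellpy loads a real frequency dictionary.
DICTIONARY = {"friend", "fiend", "information", "the", "their", "there"}

def edits1(word):
    """All strings at edit distance 1: deletes, transposes,
    substitutions, and inserts."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    substitutions = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + substitutions + inserts)

def spell_candidates(word):
    """Dictionary words within edit distance 1 of a non-word."""
    if word in DICTIONARY:
        return []  # already a word; nothing to correct
    return sorted(edits1(word) & DICTIONARY)

print(spell_candidates("freind"))  # transposition → ['friend']
```

The candidates returned here are then ranked by the language model, not applied blindly; that is what lets the system pick bear over bare from context.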

The components are highly modularised to facilitate experimentation with newer scorers and to support more languages. Pre-trained language models, inflectors, and common error patterns for other languages can easily be added.
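One way such modularity is commonly expressed is a small scorer interface that any model can implement, so GPT-2 could be swapped for a smaller model or a model for another language. The class names below are hypothetical, not lmproof's actual class layout.

```python
from abc import ABC, abstractmethod

class Scorer(ABC):
    """Hypothetical interface a language-model scorer might expose."""

    @abstractmethod
    def score(self, sentence: str) -> float:
        """Return a log-probability-like score; higher = more fluent."""

class LengthPenaltyScorer(Scorer):
    """Trivial stand-in scorer for demonstration: penalises length.
    A real implementation would wrap a pre-trained language model."""

    def score(self, sentence: str) -> float:
        return -float(len(sentence.split()))

scorer: Scorer = LengthPenaltyScorer()
print(scorer.score("She goes to school ."))  # → -5.0
```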

## TODOs

• Research distilling GPT-2 into a smaller model (an LSTM?) to reduce the horrendous latency.
• Experiment on GEC dev sets to obtain optimal thresholds.
• Find a way to handle insertions.
• Check whether LemmInflect proposals are actually better than just using AGID.
