Language Form Evaluation
This package includes efficient, (almost) pure Python implementations of the following metrics:
- BLEU (see the nltk sketch after this list)
  - reference implementation: nltk.translate.bleu_score
  - error: < 1%
  - speed: +189%
- ROUGE (in progress)
- METEOR (in progress)
- CIDEr/CIDEr-D
  - reference implementation: https://github.com/vrama91/cider
  - with the same tokenizer
    - error: < 1%
    - speed: +81%
  - with different tokenizers (FormEval uses a regexp tokenizer by default)
    - error: ~15%
    - speed: +332%
- SPICE
  - placeholder wrapper around the reference implementation: https://github.com/tylin/coco-caption/tree/master/pycocoevalcap/spice
  - TODO: Python scene graph parser
*All stats shown above are estimates.
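For BLEU, the reference implementation named above is nltk.translate.bleu_score; as a point of comparison, a minimal sentence-level call to that reference looks like the sketch below (the sentences are made-up illustrations).

```python
# Minimal use of the BLEU reference implementation named above
# (nltk.translate.bleu_score). The sentences are made-up illustrations.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["a", "cat", "sits", "on", "the", "mat"]]  # one tokenized reference sentence
candidate = ["a", "cat", "is", "on", "the", "mat"]       # tokenized candidate sentence

# Smoothing avoids zero scores when a higher-order n-gram has no match.
score = sentence_bleu(references, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.4f}")
```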
Dependencies
- Python 3.6+
- nltk 3.5+
Setup
pip install formeval
Optional setups:
- SPICE dependencies
- WordNet lemmatizer

Install them with:
python3 -c 'from formeval.setup import setup_everything; setup_everything()'
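The same optional setup can also be run from a Python session; this is simply the shell one-liner above, unrolled:

```python
# Unrolled form of the shell one-liner above: installs the optional
# SPICE dependencies and WordNet lemmatizer data.
from formeval.setup import setup_everything

setup_everything()
```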
Example Data
Usage
See run_examples.py
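As a rough orientation only, a caption-style evaluation might look like the sketch below. All module, class, and method names in it are assumptions, not the confirmed formeval API; run_examples.py is the authoritative reference.

```python
# HYPOTHETICAL sketch -- module, class, and method names below are assumptions,
# not the confirmed formeval API. See run_examples.py for real usage.
from formeval.bleu import BleuEvaluator  # assumed import path

# Candidates and references map an instance id to a list of sentences.
candidates = {"img1": ["a cat sits on the mat"]}
references = {"img1": ["a cat is sitting on a mat",
                       "there is a cat on the mat"]}

evaluator = BleuEvaluator(references)    # assumed constructor signature
scores = evaluator.evaluate(candidates)  # assumed method name
print(scores)
```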