MS-COCO Caption Evaluation for Python 3
Project description
Microsoft COCO Caption Evaluation
Evaluation codes for MS COCO caption generation.
Description
This repository provides Python 3 support for the caption evaluation metrics used for the MS COCO dataset.
The code is derived from the original repository that supports Python 2.7: https://github.com/tylin/coco-caption.
Caption evaluation depends on the COCO API that natively supports Python 3.
Requirements
- Java 1.8.0
- Python 3
Installation
To install pycocoevalcap and the pycocotools dependency (https://github.com/cocodataset/cocoapi), run:
pip install pycocoevalcap
Usage
See the example script: example/coco_eval_example.py
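A minimal usage sketch, assuming a ground-truth annotation file and a result file in the standard COCO caption format (the file names below are placeholders):

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Placeholder paths: substitute your own annotation and result files.
annotation_file = 'captions_val2014.json'        # ground-truth captions
results_file = 'captions_val2014_results.json'   # generated captions

# Load ground truth and generated captions with the COCO API.
coco = COCO(annotation_file)
coco_result = coco.loadRes(results_file)

# Evaluate only on the images present in the result file.
coco_eval = COCOEvalCap(coco, coco_result)
coco_eval.params['image_id'] = coco_result.getImgIds()
coco_eval.evaluate()

# Print the corpus-level scores (BLEU, METEOR, ROUGE-L, CIDEr, SPICE).
for metric, score in coco_eval.eval.items():
    print(f'{metric}: {score:.3f}')
```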
Files
./
- eval.py: Contains the COCOEvalCap class, which can be used to evaluate results on COCO.
- tokenizer: Python wrapper of Stanford CoreNLP PTBTokenizer
- bleu: BLEU evaluation codes (the scorers can also be used standalone; see the sketch after this list)
- meteor: Meteor evaluation codes
- rouge: ROUGE-L evaluation codes
- cider: CIDEr evaluation codes
- spice: SPICE evaluation codes
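The individual scorers can be driven directly, outside of COCOEvalCap. A minimal sketch, assuming captions are supplied as dictionaries mapping an image id to a list of {'caption': ...} entries (the example captions and ids are placeholders):

```python
from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.cider.cider import Cider

# Placeholder data: one image with two references and one candidate caption.
gts = {'img1': [{'caption': 'a dog runs on the grass'},
                {'caption': 'a dog is running across a lawn'}]}
res = {'img1': [{'caption': 'a dog running on grass'}]}

# Tokenize references and candidates with the PTBTokenizer wrapper
# (requires Java, as noted in the requirements above).
tokenizer = PTBTokenizer()
gts = tokenizer.tokenize(gts)
res = tokenizer.tokenize(res)

# Each scorer returns a corpus-level score and per-image scores.
bleu_scores, _ = Bleu(4).compute_score(gts, res)
cider_score, _ = Cider().compute_score(gts, res)
print('BLEU-1..4:', bleu_scores)
print('CIDEr:', cider_score)
```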
Setup
- SPICE requires the download of Stanford CoreNLP 3.6.0 code and models. This will be done automatically the first time the SPICE evaluation is performed.
- Note: SPICE will try to create a cache of parsed sentences in ./spice/cache/. This dramatically speeds up repeated evaluations. The cache directory can be moved by setting 'CACHE_DIR' in ./spice. In the same file, caching can be turned off by removing the '-cache' argument to 'spice_cmd'.
References
- Microsoft COCO Captions: Data Collection and Evaluation Server
- PTBTokenizer: We use the Stanford Tokenizer which is included in Stanford CoreNLP 3.4.1.
- BLEU: BLEU: a Method for Automatic Evaluation of Machine Translation
- Meteor: Project page with related publications. We use the latest version (1.5) of the code. Changes have been made to the source code to properly aggregate the statistics for the entire corpus.
- Rouge-L: ROUGE: A Package for Automatic Evaluation of Summaries
- CIDEr: CIDEr: Consensus-based Image Description Evaluation
- SPICE: SPICE: Semantic Propositional Image Caption Evaluation
Developers
- Xinlei Chen (CMU)
- Hao Fang (University of Washington)
- Tsung-Yi Lin (Cornell)
- Ramakrishna Vedantam (Virginia Tech)
Acknowledgement
- David Chiang (University of Notre Dame)
- Michael Denkowski (CMU)
- Alexander Rush (Harvard University)
File details
Details for the file pycocoevalcap-1.2.tar.gz.
File metadata
- Download URL: pycocoevalcap-1.2.tar.gz
- Upload date:
- Size: 104.3 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.25.0 setuptools/49.6.0.post20201009 requests-toolbelt/0.9.1 tqdm/4.52.0 CPython/3.9.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7857f4d596ca2fa0b1a9a3c2067588a4257556077b7ad614d00b2b7b8f57cdde
MD5 | 0e36bfd9f50d767100ace969d995dc0d
BLAKE2b-256 | aed76b77c7cddc3832ec4c551633c787aeeda168cc2e0ff173649ce145f1b85c
File details
Details for the file pycocoevalcap-1.2-py3-none-any.whl.
File metadata
- Download URL: pycocoevalcap-1.2-py3-none-any.whl
- Upload date:
- Size: 104.3 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.25.0 setuptools/49.6.0.post20201009 requests-toolbelt/0.9.1 tqdm/4.52.0 CPython/3.9.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 083ed7910f1aec000b0a237ef6665f74edf19954204d0b1cbdb8399ed132228d
MD5 | 14526e84cc463601a44f9e8536e2eff7
BLAKE2b-256 | 08f9466f289f1628296b5e368940f89e3cfcfb066d15ddc02ff536dc532b1c93