crowdnalysis
A library to help analyze crowdsourcing results
Crowdsourcing Citizen Science projects usually require citizens to classify items (images, PDFs, songs, etc.) into one of a finite set of categories. Once an item is annotated by contributing citizens, these annotations need to be aggregated to obtain a consensus classification. Usually, the consensus for an item is reached by selecting its most voted category. crowdnalysis allows computing the consensus with more advanced techniques beyond standard majority voting. In particular, it provides consensus methods that model the quality of each citizen scientist involved in the project. This more advanced consensus yields higher-quality information for the Crowdsourcing Citizen Science project, an essential requirement as citizens are increasingly willing and able to contribute to science.
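To make the baseline concrete, here is a minimal, self-contained sketch of majority-voting consensus in plain Python. It is an illustration of the idea only, not crowdnalysis code or its API:

```python
from collections import Counter

def majority_vote(annotations):
    """Return the most voted category per item.

    annotations: iterable of (item_id, category) pairs.
    """
    votes = {}
    for item, category in annotations:
        votes.setdefault(item, Counter())[category] += 1
    # most_common(1) returns [(category, count)] for the top category
    return {item: counter.most_common(1)[0][0] for item, counter in votes.items()}

annotations = [
    ("img1", "cat"), ("img1", "cat"), ("img1", "dog"),
    ("img2", "dog"), ("img2", "dog"),
]
print(majority_vote(annotations))  # {'img1': 'cat', 'img2': 'dog'}
```

The more advanced methods below replace this simple vote count with probabilistic models of annotator quality.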
Implemented consensus algorithms
- Majority Voting
- Probabilistic
- Multinomial
- Dawid-Skene
In addition to the pure Python implementations above, the following models are implemented in the probabilistic programming language Stan and used via the CmdStanPy interface:
- Multinomial
- Multinomial Eta
- Dawid-Skene
- Dawid-Skene Eta Hierarchical
Note: Eta models constrain the error-rate (a.k.a. confusion) matrix so that, for each real class, the probability of reporting that same class is the highest.
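The Eta constraint can be pictured as diagonal dominance per row of the error-rate matrix: each row (a real class) has its maximum on the diagonal (the same reported label). A small illustrative check, not part of the crowdnalysis API:

```python
import numpy as np

def satisfies_eta(error_rate):
    """Check that, for each real class (row), the probability of
    reporting that same class (the diagonal entry) is the highest."""
    error_rate = np.asarray(error_rate)
    return bool(np.all(error_rate.argmax(axis=1) == np.arange(error_rate.shape[0])))

good = [[0.8, 0.1, 0.1],
        [0.2, 0.7, 0.1],
        [0.1, 0.3, 0.6]]
bad = [[0.4, 0.5, 0.1],   # class 0 is reported as class 1 more often than as itself
       [0.2, 0.7, 0.1],
       [0.1, 0.3, 0.6]]
print(satisfies_eta(good), satisfies_eta(bad))  # True False
```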
Features
- Import annotation data from a CSV file, with a preprocessing option
- Set inter-dependencies between questions to filter out irrelevant annotations
- Distinguish real classes for answers from reported labels (e.g., "Not answered")
- Calculate inter-rater reliability with different measures
- Fit selected model to annotation data and compute the consensus
- Compute the consensus with a fixed pre-determined set of parameters
- Fit the model parameters provided that the consensus is already known
- Given the parameters of a generative model (Multinomial, Dawid-Skene), sample annotations, tasks, and workers (i.e., annotators)
- Conduct prospective analysis of the 'accuracy vs. number of annotations' for a given set of models
- Visualize the error-rate matrix for annotators
- Visualize the consensus on annotated images in HTML format
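To illustrate what sampling from a generative model means here, the sketch below draws annotations from a simple Dawid-Skene-style model with a single shared error-rate matrix. All names are illustrative assumptions, not the library's API:

```python
import numpy as np

def sample_annotations(n_tasks, n_workers, prior, error_rate, rng):
    """Sample (task, worker, label) triples: each task's real class is drawn
    from the class prior, and each annotation is drawn from the row of the
    error-rate matrix corresponding to that real class."""
    prior = np.asarray(prior)
    error_rate = np.asarray(error_rate)
    true_classes = rng.choice(len(prior), size=n_tasks, p=prior)
    annotations = [
        (t, w, rng.choice(error_rate.shape[1], p=error_rate[true_classes[t]]))
        for t in range(n_tasks)
        for w in range(n_workers)
    ]
    return true_classes, annotations

rng = np.random.default_rng(0)
prior = [0.6, 0.4]
error_rate = [[0.9, 0.1],   # real class 0 is reported correctly 90% of the time
              [0.2, 0.8]]
true, ann = sample_annotations(n_tasks=5, n_workers=3,
                               prior=prior, error_rate=error_rate, rng=rng)
```

Synthetic data like this is what enables the prospective 'accuracy vs. number of annotations' analysis: one can sweep the number of workers and measure how often each consensus method recovers the sampled real classes.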
Quick start
crowdnalysis is distributed via PyPI: https://pypi.org/project/crowdnalysis/
Install as a standard Python package:
pip install crowdnalysis
CmdStanPy will be installed automatically as a dependency. However, this package also requires the CmdStan command-line interface to be installed. This can be done by executing the install_cmdstan utility that comes with CmdStanPy. We recommend installing version 2.26.1, as this is the latest version we have tested crowdnalysis with. See the related docs for more information.
install_cmdstan -v 2.26.1
Use the package in code:
>>> import crowdnalysis
Check available consensus models:
>>> crowdnalysis.factory.Factory.list_registered_algorithms()
See the tutorial notebook for the usage of main features.
How to run unit tests
We use pytest as the testing framework. Tests can be run from the repository root with:
pytest
If you want to get the logs of the execution, run:
pytest --log-cli-level 0
Logging
We use the standard logging library.
Deployment to PyPI
Follow these simple steps to have a new release automatically deployed to PyPI by the CD workflow. The example is given for version v1.0.1:
- Update the version in src/crowdnalysis/_version.py:
__version__ = "1.0.1"  # Note no "v" prefix here.
- git push the changes to origin and make sure the remote master branch is up-to-date;
- Create a new tag, preferably annotated:
git tag -a v1.0.1 -m "New sections added to README"
- Push the tag to origin:
git push origin v1.0.1
And shortly, the new version will be available on PyPI.
License
This project is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details.
Citation
If you find our software useful for your research, kindly consider citing it using the following BibLaTeX entry with the DOI attached to all versions:
@software{crowdnalysis2022,
author = {Cerquides, Jesus and M{\"{u}}l{\^{a}}yim, Mehmet O{\u{g}}uz},
title = {crowdnalysis: A software library to help analyze crowdsourcing results},
month = jan,
year = 2022,
publisher = {Zenodo},
doi = {10.5281/zenodo.5898579},
url = {https://doi.org/10.5281/zenodo.5898579}
}
Acknowledgements
crowdnalysis is being developed within the Crowd4SDG project funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 872944.
Reference
For the details of the conceptual and mathematical model of crowdnalysis, see:
[1] Cerquides, J.; Mülâyim, M.O.; Hernández-González, J.; Ravi Shankar, A.; Fernandez-Marquez, J.L. A Conceptual Probabilistic Framework for Annotation Aggregation of Citizen Science Data. Mathematics 2021, 9, 875. https://doi.org/10.3390/math9080875