An easy-to-use library for skin tone classification
It can be used to detect faces or skin areas in the specified images. The detected skin tones are then classified into the specified color categories. Finally, the library generates a report of the detected faces (if any), their dominant skin tones, and color categories.
Table of Contents
- Changelogs
- Citation
- Installation
- HOW TO USE
  - Quick Start
  - Detailed Usage
  - Use Cases
    - 1. To process multiple images
    - 2. To specify color categories
    - 3. Specify output folder
    - 4. Store report images for debugging
    - 5. Specify the types of the input image(s)
    - 6. Convert the `color` images to black/white images and then do the classification using the `bw` palette
    - 7. Tune parameters of face detection
    - 8. Multiprocessing settings
    - 9. Used as a library by importing into other projects
- Contributing
Changelogs
v1.1.2
In this version, we have made the following changes:
- 🐛 FIX!: We fixed a bug where the app would crash when using the `-bw` option. Error message: `cannot reshape array of size 62500 into shape (3)`.
- 🐛 FIX!: We fixed a bug where the app could identify the image type as `color` when using the `-bw` option.
v1.1.1
In this version, we have made the following changes:
- ✨ NEW!: We add the `-v` (or `--version`) option to show the version number.
- ✨ NEW!: We add the `-r` (or `--recursive`) option to enable recursive search for images.
  - For example, `stone -i ./path/to/images/ -r` will search all images in the `./path/to/images/` directory and its subdirectories, while `stone -i ./path/to/images/` will only search images in the `./path/to/images/` directory itself.
- 🐛 FIX!: We fixed a bug where the app could not correctly identify the current folder if the `-i` option is not specified.
v1.1.0
In this version, we have made the following changes:
- ✨ NEW!: Now, `stone` can not only be run on the command line, but can also be imported into other projects for use. Check this for more details.
  - We expose the `process` and `show` functions in the `stone` package.
- ✨ NEW!: We add URL support for the input images.
  - Now, you can specify the input image as a URL, e.g., `https://example.com/images/pic.jpg`. Of course, you can mix URLs and local filenames.
- ✨ NEW!: We add recursive search support for the input images.
  - Now, when you specify the input image as a directory, e.g., `./path/to/images/`, the app will search all images in the directory recursively.
- 🧬 CHANGE!: We change the column headers in `result.csv`:
  - `prop` => `percent`
  - `PERLA` => `tone label`
- 🐛 FIX!: We fixed a bug where the app would not correctly sort files that did not contain numbers in their filenames.
v1.0.1
- 👋 BYE: We have removed the pop-up result window shown when processing a single image.
  - It could raise an error when running the app in a web-based environment, e.g., Jupyter Notebook or Google Colab.
  - If you want to see the processed image, please use the `-d` option to store the report image in the `./debug` folder.
v1.0.0
🎉We have officially released the 1.0.0 version of the library! In this version, we have made the following changes:
- ✨ NEW!: We add the `threshold` parameter to control the minimum percentage of required face area (defaults to 0.15).
  - In previous versions, the library could incorrectly identify non-face areas as faces, such as shirts, collars, necks, etc. To improve accuracy, the new version further calculates the proportion of skin in the recognized area after detecting a face; if it is less than the `threshold` value, the recognized area will be ignored. (While it's still not perfect, it's an improvement over what it was before.)
- ✨ NEW!: Now, we back up the previous results if they already exist. The backup file will be named `result_bak_<current_timestamp>.csv`.
- 🐛 FIX!: We fix the bug that the `image_type` option does not work in the previous version.
- 🐛 FIX!: We fix the bug that the library creates an empty `log` folder when checking the help information by running `stone -h`.
v0.2.0
In this version, we have made the following changes:
- ✨ NEW!: Now we support skin tone classification for black and white images.
  - In this case, the app will use different skin tone palettes for color images and black/white images.
  - We use a new parameter, `-t` or `--image_type`, to specify the type of the input image. It can be `color`, `bw` or `auto` (default). `auto` will let the app automatically detect whether the input is a color or black/white image.
  - We use a new parameter, `-bw` or `--black_white`, to specify whether to convert the input to a black/white image. If so, the app will convert the input to black/white and then classify the skin tones based on the black/white palette. For example:
- ✨ NEW!: Now we support multiprocessing for processing the images. It will largely speed up the processing.
  - The number of processes is set to the number of CPU cores by default.
  - You can specify the number of processes with the `--n_workers` parameter.
- 🧬 CHANGE!: We add more details in the report image to facilitate debugging.
  - We add the face id in the report image.
  - We add the effective face or skin area in the report image; the other areas are blurred.
- 🧬 CHANGE!: Now, we save the report images into different folders based on their `image_type` (color or black/white) and the number of detected faces.
  - For example, if the input image is color and 2 faces are detected, the report image will be saved in the `./debug/color/faces_2/` folder.
  - If the input image is black/white and no face has been detected, the report image will be saved in the `./debug/bw/faces_0/` folder.
  - You can easily tune the parameters and rerun the app based on the report images in the corresponding folder.
- 🐛 FIX!: We fix the bug that the app would crash when the input image has dimensionality errors.
  - Now, the app won't crash and will report the error message in `./result.csv`.
Citation
If you are interested in our work, please cite:
@article{https://doi.org/10.1111/ssqu.13242,
author = {Rej\'{o}n Pi\~{n}a, Ren\'{e} Alejandro and Ma, Chenglong},
title = {Classification Algorithm for Skin Color (CASCo): A new tool to measure skin color in social science research},
journal = {Social Science Quarterly},
volume = {n/a},
number = {n/a},
pages = {},
keywords = {colorism, measurement, photo elicitation, racism, skin color, spectrometers},
doi = {https://doi.org/10.1111/ssqu.13242},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/ssqu.13242},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/ssqu.13242},
abstract = {Abstract Objective A growing body of literature reveals that skin color has significant effects on people's income, health, education, and employment. However, the ways in which skin color has been measured in empirical research have been criticized for being inaccurate, if not subjective and biased. Objective Introduce an objective, automatic, accessible and customizable Classification Algorithm for Skin Color (CASCo). Methods We review the methods traditionally used to measure skin color (verbal scales, visual aids or color palettes, photo elicitation, spectrometers and image-based algorithms), noting their shortcomings. We highlight the need for a different tool to measure skin color. Results We present CASCo, a (social researcher-friendly) Python library that uses face detection, skin segmentation and k-means clustering algorithms to determine the skin tone category of portraits. Conclusion After assessing the merits and shortcomings of all the methods available, we argue CASCo is well equipped to overcome most challenges and objections posed against its alternatives. While acknowledging its limitations, we contend that CASCo should complement researchers' toolkit in this area.}
}
Installation
Install from pip
pip install skin-tone-classifier --upgrade
Install from source
git clone git@github.com:ChenglongMa/SkinToneClassifier.git
cd SkinToneClassifier
pip install -e . --verbose
HOW TO USE
Quick Start
Given the famous photo of Lenna, to detect her skin tone, run:
stone -i /path/to/lenna.jpg --debug
Then, you can find the processed image in the `./debug/color/faces_1` folder, e.g.,
In this image, from left to right you can find the following information:
- the detected face, with a label (Face 1), enclosed by a rectangle.
- the dominant colors.
  - The number of colors depends on settings (default is 2), and their sizes depend on their proportions.
- the specified color palette, with the target label enclosed by a rectangle.
- a summary text at the bottom.
Furthermore, there will be a report file named `result.csv` which contains more detailed information, e.g.,
| file | image type | face id | dominant 1 | percent 1 | dominant 2 | percent 2 | skin tone | tone label | accuracy(0-100) |
|---|---|---|---|---|---|---|---|---|---|
| lena_std | color | 1 | #CF7371 | 0.52 | #E4A89F | 0.48 | #E7C1B8 | CI | 85.39 |
Interpretation of the table
- `file`: the filename of the processed image.
  - NB: the filename pattern of a report image is `<file>-<face id>`.
- `image type`: the type of the processed image, i.e., `color` or `bw` (black/white).
- `face id`: the id of the detected face, which matches the report image. `NA` means no face has been detected.
- `dominant n`: the `n`-th dominant color of the detected face.
- `percent n`: the percentage of the `n`-th dominant color (0~1.0).
- `skin tone`: the skin tone category of the detected face.
- `tone label`: the label of the skin tone category of the detected face.
- `accuracy`: the accuracy of the skin tone classification (0~100). The larger, the better.
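Since `result.csv` is a plain CSV file, it is easy to post-process with Python's standard `csv` module. A minimal sketch, where the sample text simply reproduces the example row above:

```python
import csv
import io

# Sample mirroring the result.csv layout documented above.
SAMPLE = """file,image type,face id,dominant 1,percent 1,dominant 2,percent 2,skin tone,tone label,accuracy(0-100)
lena_std,color,1,#CF7371,0.52,#E4A89F,0.48,#E7C1B8,CI,85.39
"""

# For a real run, replace io.StringIO(SAMPLE) with open("result.csv").
rows = list(csv.DictReader(io.StringIO(SAMPLE)))
for row in rows:
    # Each row describes one detected face; "NA" in "face id" means none detected.
    print(row["file"], row["skin tone"], row["tone label"], float(row["accuracy(0-100)"]))
```

The dominant-color percentages of a face sum to 1.0, so they can be used directly as weights.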
Detailed Usage
To see the usage and parameters, run:
stone -h (or --help)
Output in console:
usage: stone [-h] [-i IMAGE FILENAME [IMAGE FILENAME ...]] [-r] [-t IMAGE TYPE] [-p PALETTE [PALETTE ...]]
[-l LABELS [LABELS ...]] [-d] [-bw] [-o DIRECTORY] [--n_workers WORKERS] [--n_colors COLORS]
[--new_width WIDTH] [--scale SCALE] [--min_nbrs NEIGHBORS] [--min_size WIDTH [HEIGHT ...]]
[--threshold THRESHOLD] [-v]
Skin Tone Classifier
options:
-h, --help show this help message and exit
-i IMAGE FILENAME [IMAGE FILENAME ...], --images IMAGE FILENAME [IMAGE FILENAME ...]
Image filename(s) or URLs to process;
Supports multiple values separated by space, e.g., "a.jpg b.png";
Supports directory or file name(s), e.g., "./path/to/images/ a.jpg";
Supports URL(s), e.g., "https://example.com/images/pic.jpg" since v1.1.0+.
The app will search all images in current directory in default.
-r, --recursive Whether to search images recursively in the specified directory.
-t IMAGE TYPE, --image_type IMAGE TYPE
Specify whether the input image(s) is/are colored or black/white.
Valid choices are: "auto", "color" or "bw",
Defaults to "auto", which will be detected automatically.
-p PALETTE [PALETTE ...], --palette PALETTE [PALETTE ...]
Skin tone palette;
Supports RGB hex value leading by "#" or RGB values separated by comma(,),
E.g., "-p #373028 #422811" or "-p 255,255,255 100,100,100"
-l LABELS [LABELS ...], --labels LABELS [LABELS ...]
Skin tone labels; default values are the uppercase alphabet list leading by the image type ('C' for 'color'; 'B' for 'Black&White'), e.g., ['CA', 'CB', ..., 'CZ'] or ['BA', 'BB', ..., 'BZ'].
-d, --debug Whether to generate report images, used for debugging and verification.The report images will be saved in the './debug' directory.
-bw, --black_white Whether to convert the input to black/white image(s).
If true, the app will use the black/white palette to classify the image.
-o DIRECTORY, --output DIRECTORY
The path of output file, defaults to current directory.
--n_workers WORKERS The number of workers to process the images, defaults to the number of CPUs in the system.
--n_colors COLORS CONFIG: the number of dominant colors to be extracted, defaults to 2.
--new_width WIDTH CONFIG: resize the images with the specified width. Negative value will be ignored, defaults to 250.
--scale SCALE CONFIG: how much the image size is reduced at each image scale, defaults to 1.1
--min_nbrs NEIGHBORS CONFIG: how many neighbors each candidate rectangle should have to retain it.
Higher value results in less detections but with higher quality, defaults to 5.
--min_size WIDTH [HEIGHT ...]
CONFIG: minimum possible face size. Faces smaller than that are ignored, defaults to "90 90".
--threshold THRESHOLD
CONFIG: what percentage of the skin area is required to identify the face, defaults to 0.15.
-v, --version Show the version number and exit.
Use Cases
1. To process multiple images
1.1 Multiple filenames
stone -i (or --images) a.jpg b.png
1.2 Images in some folder(s)
stone -i ./path/to/images/
NB: Supported image formats: `.jpg`, `.gif`, `.png`, `.jpeg`, `.webp`, `.tif`.
By default (i.e., running `stone` without the `-i` option), the app will search for images in the current folder.
2. To specify color categories
2.1 Use hex values
stone -p (or --palette) #373028 #422811 #fbf2f3
NB: Values start with '#' and are separated by space.
2.2 Use RGB tuple values
stone -p 55,48,40 66,40,17 251,242,243
NB: Values are separated by commas (','); multiple colors are still separated by spaces.
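The hex and RGB-tuple forms are two notations for the same palette colors. A small sketch to convert between them (these helper names are my own, not part of the library):

```python
def rgb_to_hex(rgb: str) -> str:
    """Convert an 'R,G,B' string (as passed to -p) to a '#RRGGBB' hex value."""
    r, g, b = (int(v) for v in rgb.split(","))
    return f"#{r:02X}{g:02X}{b:02X}"

def hex_to_rgb(h: str) -> str:
    """Convert a '#RRGGBB' hex value back to an 'R,G,B' string."""
    h = h.lstrip("#")
    return ",".join(str(int(h[i:i + 2], 16)) for i in (0, 2, 4))

print(rgb_to_hex("55,48,40"))   # -> #373028
print(hex_to_rgb("#FBF2F3"))    # -> 251,242,243
```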
3. Specify output folder
By default, the app puts the final report (`result.csv`) in the current folder.
To change the output folder:
stone -o (or --output) ./path/to/output/
The output folder will be created if it does not exist.
In `result.csv`, each row shows the color information of one detected face.
If more than one face is detected, there will be multiple rows for that image.
4. Store report images for debugging
stone -d (or --debug)
This option will store the report image (like the Lenna example above) in the
`./path/to/output/debug/<image type>/faces_<n>` folder,
where `<image type>` indicates whether the image is `color` or `bw` (black/white),
and `<n>` is the number of faces detected in the image.
By default, to save space, the app does not store report images.
As in the `result.csv` file, there will be more than one report image if 2 or more faces were detected.
5. Specify the types of the input image(s)
5.1 The inputs are color images
stone -t (or --image_type) color
5.2 The inputs are black/white images
stone -t (or --image_type) bw
5.3 By default, the app will detect the image type automatically, i.e.,
stone -t (or --image_type) auto
For `color` images, we use the following `color` palette to classify skin tones:
#373028 #422811 #513b2e #6f503c #81654f #9d7a54 #bea07e #e5c8a6 #e7c1b8 #f3dad6 #fbf2f3
(Please refer to our paper above for more details.)
For `bw` images, we use the following `bw` palette to classify skin tones:
#FFFFFF #F0F0F0 #E0E0E0 #D0D0D0 #C0C0C0 #B0B0B0 #A0A0A0 #909090 #808080 #707070 #606060 #505050 #404040 #303030 #202020 #101010 #000000
(Please refer to Leigh, A., & Susilo, T. (2009). Is voting skin-deep? Estimating the effect of candidate ballot photographs on election outcomes. Journal of Economic Psychology, 30(1), 61-70. for more details.)
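To make the idea of palette classification concrete, here is an illustrative sketch that maps a dominant color onto its nearest palette entry by Euclidean distance in RGB space. This is my own simplification for explanation, not necessarily the library's exact matching algorithm:

```python
# Default color palette from the section above.
COLOR_PALETTE = ["#373028", "#422811", "#513B2E", "#6F503C", "#81654F",
                 "#9D7A54", "#BEA07E", "#E5C8A6", "#E7C1B8", "#F3DAD6", "#FBF2F3"]

def hex_to_rgb(h: str) -> tuple:
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def nearest_tone(color: str, palette=COLOR_PALETTE) -> str:
    """Return the palette entry with minimal squared RGB distance to `color`."""
    c = hex_to_rgb(color)
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(c, hex_to_rgb(p))))

print(nearest_tone("#E7C1B8"))  # an exact palette member maps to itself
```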
6. Convert the `color` images to black/white images and then do the classification using the `bw` palette
stone -bw (or --black_white)
For example:
1. Input
2. Convert to black/white image
3. The final report image
NB: We did not implement the opposite, i.e., converting black/white images to `color` images,
because current AI models cannot accurately "guess" the skin color from a black/white image.
Doing so could further bias the analysis results.
7. Tune parameters of face detection
The remaining `CONFIG` parameters are used for face detection.
Please refer to https://stackoverflow.com/a/20805153/8860079 for detailed information.
8. Multiprocessing settings
stone --n_workers <Any Positive Integer>
Use `--n_workers` to specify the number of workers that process images in parallel; it defaults to the number of CPUs in your system.
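The worker-pool pattern behind `--n_workers` can be sketched as below. This is an illustration only (a thread pool stands in for the library's process pool for portability, and `process_image` is a hypothetical stand-in for the real per-image work):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def process_image(path: str) -> str:
    # Hypothetical stand-in for the real per-image classification work.
    return f"processed {path}"

# Default worker count mirrors --n_workers: the number of CPUs in the system.
n_workers = os.cpu_count() or 1

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    results = list(pool.map(process_image, ["a.jpg", "b.png"]))

print(results)
```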
9. Used as a library by importing into other projects
You can refer to the following code snippet:
import stone
from json import dumps
# process the image
result = stone.process(image_path, image_type, palette, *other_args, return_report_image=True)
# show the report image
report_images = result.pop("report_images") # obtain and remove the report image from the `result`
face_id = 1
stone.show(report_images[face_id])
# convert the result to json
result_json = dumps(result)
`stone.process` is the main function to process the image.
It has the same parameters as the command-line version.
It returns a `dict` containing the processing result and the report image(s) (if requested, i.e., `return_report_image=True`).
You can further use `stone.show` to display the report image(s), and convert the result to JSON format.
The `result_json` will look like:
{
"basename": "lena_std",
"extension": ".jpg",
"image_type": "color",
"faces": [
{
"face_id": 1,
"dominant_colors": [
{
"color": "#CF7371",
"percent": "0.52"
},
{
"color": "#E4A89F",
"percent": "0.48"
}
],
"skin_tone": "#E7C1B8",
"tone_label": "CI",
"accuracy": 85.39
}
]
}
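The result is a plain JSON-shaped structure, so it can be walked with ordinary dict/list access. A small sketch, reusing the example output above as a literal:

```python
# Dict literal reproducing the documented example output.
result = {
    "basename": "lena_std",
    "extension": ".jpg",
    "image_type": "color",
    "faces": [
        {
            "face_id": 1,
            "dominant_colors": [
                {"color": "#CF7371", "percent": "0.52"},
                {"color": "#E4A89F", "percent": "0.48"},
            ],
            "skin_tone": "#E7C1B8",
            "tone_label": "CI",
            "accuracy": 85.39,
        }
    ],
}

for face in result["faces"]:
    # Pick the dominant color with the largest percentage for each face.
    top = max(face["dominant_colors"], key=lambda c: float(c["percent"]))
    print(face["face_id"], face["tone_label"], face["skin_tone"], top["color"])
```

Note that `percent` values are strings in the JSON output, hence the `float(...)` conversion.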
Contributing
👋 Welcome to SkinToneClassifier! We're excited to have your contributions. Here's how you can get involved:
- 💡 Discuss New Ideas: Have a creative idea or suggestion? Start a discussion in the Discussions tab to share your thoughts and gather feedback from the community.
- ❓ Ask Questions: Got questions or need clarification on something in the repository? Feel free to open an Issue labeled as a "question" or participate in Discussions.
- 🐛 Report a Bug: If you've identified a bug or an issue with the code, please open a new Issue with a clear description of the problem, steps to reproduce it, and your environment details.
- ✨ Introduce New Features: Want to add a new feature or enhancement to the project? Fork the repository, create a new branch, and submit a Pull Request with your changes. Make sure to follow our contribution guidelines.
- 💖 Funding: If you'd like to financially support the project, you can do so by sponsoring the repository on GitHub. Your contributions help us maintain and improve the project.
Thank you for considering contributing to SkinToneClassifier. We value your input and look forward to collaborating with you!
Hashes for skin-tone-classifier-1.1.2.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | cb1139179dc106d54cd4aa7cd0fa8c01882a66941ecf2f2639aae36b7e60dc70 |
| MD5 | 2ec2fc2b0b2db61632c247169052abb6 |
| BLAKE2b-256 | f0bc29e86c61309e6cce05b818f2a678161d8da5e2c5bca98dae02f16228b0bb |
Hashes for skin_tone_classifier-1.1.2-py3-none-any.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | a517f1dc5b2f99942cc2965c91acef3f0336458ed53a1a5a71e647888fbd5ec6 |
| MD5 | 56102c8079020fcc16a95837f28f8bd8 |
| BLAKE2b-256 | c01d3f1617e68bb6b964443f31283c63576429f2d3bbf21602e5536afccc97da |