
Perception Model

Project description

perception-model-tool

We present a model for inferring which screen content is perceivable given the position and orientation of a mobile or wearable device relative to its user. The model is grounded in findings from vision science and predicts the effective resolution a user can perceive. It accounts for the distance and angle between the device and the observer's eyes, as well as the retinal eccentricity that results when the device is not fixated directly but observed in the periphery. To validate the model, we conducted a study with 12 participants. Based on the results, we outline implications for the design of mobile applications that adapt themselves to improve information throughput and usability.
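
As a point of reference for the geometry such a model builds on (a generic textbook computation, not the model from the paper), the angular pixel density of a display viewed head-on follows from its physical size, resolution, and viewing distance:

import math

def pixels_per_degree(size_m, resolution_px, distance_m):
    # Visual angle (in degrees) subtended by the display
    visual_angle = 2 * math.degrees(math.atan(size_m / (2 * distance_m)))
    # Angular pixel density; comparing this against the eye's acuity
    # limit (which drops with retinal eccentricity) indicates which
    # spatial frequencies remain perceivable
    return resolution_px / visual_angle

# Example: a 2 cm wide, 200 px smartwatch display viewed at 40 cm
print(pixels_per_degree(0.02, 200, 0.4))  # ~70 px/deg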

To visualize the model's predictions, we provide a tool that, given a display's position and orientation relative to the user's eyes, renders an image representing the effective display resolution, e.g. to assess text readability for different sizes or fonts. We distinguish whether a person is (a) looking directly at the display or (b) looking straight ahead and observing the display in the periphery. The tool takes an image, e.g. a screenshot of a smartwatch application, converts it to the CIE L*a*b* color space, and considers only the luminance channel further. A second-order Butterworth filter removes the spatial frequencies that, according to the model, would not be visible.
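
The core of this pipeline can be sketched as follows (a minimal illustration under simplifying assumptions, not the tool's actual implementation; in the real tool, filter_screenshot.py, the cutoff frequency comes from the perception model):

import numpy as np
import colour  # colour-science

def lowpass_luminance(rgb, cutoff):
    # sRGB (0..255) -> CIE L*a*b*; only the lightness channel L* is filtered
    lab = colour.XYZ_to_Lab(colour.sRGB_to_XYZ(rgb / 255.0))
    # Radial spatial-frequency grid in cycles per pixel
    fy = np.fft.fftfreq(lab.shape[0])[:, None]
    fx = np.fft.fftfreq(lab.shape[1])[None, :]
    f = np.hypot(fx, fy)
    # Magnitude response of a second-order Butterworth low-pass filter
    H = 1.0 / np.sqrt(1.0 + (f / cutoff) ** 4)
    lab[..., 0] = np.real(np.fft.ifft2(np.fft.fft2(lab[..., 0]) * H))
    # Convert back to sRGB for saving or display
    return np.clip(colour.XYZ_to_sRGB(colour.Lab_to_XYZ(lab)), 0.0, 1.0) * 255.0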

The full paper is available at: https://doi.org/10.1145/3173574.3174184

Tool usage

The tool requires Python 3 together with numpy, scipy, click, and colour-science. The GUI version additionally uses Tkinter.
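
Assuming a standard pip-based setup, the dependencies can be installed with the following command (Tkinter typically ships with the Python distribution itself):

pip install numpy scipy click colour-science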

GUI version

First, specify an input file (jpg, png) and an output file (jpg, png) to which the adjusted version will be saved. The parameters describe the characteristics of the device under investigation (display size and resolution) as well as its distance and orientation with respect to the observer. A selector determines whether the observer is looking directly at the device or looking straight ahead (peripheral observation). Pressing the "Process" button processes the image and saves the adjusted version to the specified output file.

Command-line interface (CLI)

python filter_screenshot.py ./image.png ./out.png -d 0.4 -s 0.02 0.02 -r 200 200 -ha 10 -va 20
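
In this example, the flags plausibly correspond to the parameters described for the GUI (meanings inferred, not taken from the tool's documentation): -d the viewing distance in metres (0.4 m), -s the physical display size (width and height, here 2 cm x 2 cm), -r the display resolution in pixels, and -ha/-va the horizontal and vertical viewing angles in degrees.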

For the full, authoritative list of options, see the help page:

python filter_screenshot.py --help

Project details


Release history

This version

1.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

perception-model-1.0.tar.gz (11.6 kB)


Built Distribution

perception_model-1.0-py3-none-any.whl (12.0 kB)


File details

Details for the file perception-model-1.0.tar.gz.

File metadata

File hashes

Hashes for perception-model-1.0.tar.gz
Algorithm Hash digest
SHA256 79e5288c54298bda75ae23b2a722ba9b8f5faaa1b9e6c2e62ab664a7cdd4c7fb
MD5 eafefe48e83f86f9d2242621d73d374b
BLAKE2b-256 77ffb217d6ae2106eb0a0c1991666913d3a9aa6b635e3ef2a14f8998be517800


File details

Details for the file perception_model-1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for perception_model-1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 884a98bdce68aed8d7601e777466b2a59c1a9e2e43c7642b8ef75137a575d5fa
MD5 f82acea8126f7eb6fc2076d5d002f165
BLAKE2b-256 250266270fbe12a87959b973f310a96b5b91101ab4751bbe2e92cb2d874c5ef9

