
pytest-visual

pytest-visual introduces the concept of visual testing to ML (with automated change detection!).

Problem statement

Most ML code is poorly tested. This is at least partially because a lot of ML code is very hard to test meaningfully.

For example, we can easily test whether an image data augmentation pipeline produces an image with the correct shape, but does the image also look realistic? And does augmentation produce enough variation? These tests might succeed, but that doesn't mean that your code works.

Solution - visual testing for ML

pytest-visual aims to streamline the organization of visualizations in ML code, enhancing debugging, verification, and code clarity. It automatically detects changes in visualization outputs, presents them in your browser, and prompts for approval. Accepted changes are memorized and not prompted for again. As a result, your visualizations

  • will actually catch bugs, as changes won't go unnoticed.
  • are always up to date, as they're run on every pytest-visual invocation.
  • won't clutter your production code, as they're separated into test cases.

Example

In this example, we demonstrate how to create a basic visual test case. First, create a Python file whose name matches the test_* pattern (e.g. test_visualization.py), and add this function:

# `pytest-visual` accepts `plotly` figures
import plotly.express as px

# `test_` prefix makes this a pytest test
# `visual` argument makes this a visual test
def test_show_square(visual):
    # Plot the test data
    x, y = [0, 1, 2, 3, 4, 5], [0, 1, 4, 9, 16, 25]
    fig = px.scatter(x=x, y=y)

    # You can have multiple print() and show() calls in a single case.
    visual.print("Square plot")
    visual.show(fig)

Then run pytest --visual, and open the URL displayed in the terminal window (most likely http://127.0.0.1:8050). If the visualization looks OK, click "Accept", and pytest should complete successfully.

[Image: a before-and-after plot displayed side by side]

Here's a sample output from examples/test_data_augmentation.py:

[Image: a before-and-after comparison showing the effect of data augmentation on a picture of a dog]
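
For reference, a minimal sketch of such a before/after test might look like the following. The augment function and the noise image are hypothetical stand-ins (examples/test_data_augmentation.py uses a real photo and real augmentations); only the visual fixture and plotly are taken from the example above.

import numpy as np
import plotly.express as px


def augment(image: np.ndarray) -> np.ndarray:
    # Placeholder augmentation: horizontal flip plus a brightness boost.
    return np.clip(image[:, ::-1].astype(np.float32) * 1.2, 0, 255).astype(np.uint8)


def test_show_augmentation(visual):
    # Random noise stands in for a real photo from your dataset.
    image = (np.random.default_rng(0).random((128, 128, 3)) * 255).astype(np.uint8)

    visual.print("Original image")
    visual.show(px.imshow(image))

    visual.print("Augmented image")
    visual.show(px.imshow(augment(image)))

As noted in the example above, a single case can contain multiple print() and show() calls, so the original and augmented images are reviewed together.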

Installation

pip install pytest-visual

CLI usage

The pytest-visual plugin adds the following options to pytest:

Command                      Description
pytest --visual              Run pytest with visual tests, and prompt the user for manual review of all detected changes.
pytest --visual-yes-all      Accept everything without prompting. Useful when you clone a repository or check out a new branch.
pytest --visual-reset-all    Mark all cases as declined.

FAQ

Can I mix unit testing with visual testing?

Yes. And this is exactly what you should be doing! Just omit the visual fixture, and it's a normal pytest test again.
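
For instance (a minimal sketch; normalize is a hypothetical helper under test, not part of pytest-visual):

import numpy as np
import plotly.express as px


def normalize(x: np.ndarray) -> np.ndarray:
    # Hypothetical helper under test.
    return (x - x.mean()) / (x.std() + 1e-8)


def test_normalize_stats():
    # Plain pytest test: no `visual` fixture, so no browser prompt.
    x = normalize(np.random.default_rng(0).random(1000))
    assert abs(x.mean()) < 1e-6
    assert abs(x.std() - 1.0) < 1e-3


def test_normalize_histogram(visual):
    # Visual test: the histogram's shape is reviewed (and memorized) in the browser.
    x = normalize(np.random.default_rng(0).random(1000))
    visual.print("Histogram of normalized values")
    visual.show(px.histogram(x=x))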

Which parts of a typical ML codebase can be visually tested?

Visual testing can be used to verify the correctness of

  • data loading and augmentation
  • synthetic data generation
  • NN model structures
  • loss functions
  • learning rate schedules
  • detection algorithms
  • various filtering algorithms
  • etc.
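
As one concrete instance from the list above, a learning rate schedule is easy to check at a glance. A minimal sketch, where the warmup-plus-cosine schedule is a made-up function under test rather than anything provided by pytest-visual:

import math

import plotly.express as px


def lr_schedule(step: int, warmup: int = 500, total: int = 10_000, base_lr: float = 1e-3) -> float:
    # Made-up warmup + cosine-decay schedule under test.
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))


def test_show_lr_schedule(visual):
    steps = list(range(0, 10_000, 50))
    lrs = [lr_schedule(s) for s in steps]
    visual.print("Warmup + cosine-decay learning rate schedule")
    visual.show(px.line(x=steps, y=lrs))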

My visual tests are flaky. Why?

This is typically because your output is non-deterministic. For example, plotting the loss over a short training run is probably not a good idea, as it will differ from run to run; if you need such a check, consider a normal unit test with a threshold for expected accuracy. Even tiny floating point differences can lead to inconsistent output.

That said, we're planning to make pytest-visual robust to small fluctuations in output.
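
Until then, the usual workaround is to pin every source of randomness inside the test, so the output only changes when the code does. A minimal sketch, assuming a hypothetical NumPy-based random_crop augmentation; if you use other frameworks (e.g. torch), their RNGs need seeding as well:

import numpy as np
import plotly.express as px


def random_crop(image: np.ndarray, size: int, rng: np.random.Generator) -> np.ndarray:
    # Hypothetical augmentation that takes an explicit RNG instead of using global state.
    top = rng.integers(0, image.shape[0] - size)
    left = rng.integers(0, image.shape[1] - size)
    return image[top : top + size, left : left + size]


def test_show_random_crops(visual):
    rng = np.random.default_rng(seed=42)  # fixed seed -> identical crops on every run
    image = (np.random.default_rng(0).random((128, 128, 3)) * 255).astype(np.uint8)
    visual.print("Three seeded random crops")
    for _ in range(3):
        visual.show(px.imshow(random_crop(image, 64, rng)))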

Why even bother testing ML code?

If you've implemented complex data processing logic, custom neural network architectures, or advanced post-processing steps, or tried to reproduce a paper, you know that bugs can consume a very large part of the development effort. Reduced accuracy is often the only symptom of a bug. When conducting an ML experiment, it's typical to waste a lot of time on

  • debugging when your ML code contains a bug,
  • debugging when your ML code doesn't contain a bug, but you don't know this,
  • revisiting an idea, because you're not sure whether your experiment was correct,
  • dropping a good idea because your experiment contained a bug, but you don't know this.

Proper testing, including visual testing, can avoid a large portion of these headaches.

Isn't automation the whole point of unit testing? Now we have a potentially lazy/irresponsible human in the loop, who can just blindly "accept" a test.

Full automation is of course great, but even tests that run automatically are written by people. Test quality varies a lot, and tests written by a lazy/irresponsible person probably won't catch many bugs. Just having unit tests won't magically fix your code, but they are a very efficient tool that enables responsible developers to write reliable code.

Contributing

Your help is greatly needed and appreciated! Given that the library is still in alpha, there will likely be a lot of changes. Thus the main focus is to get the core features right and fix bugs.

We encourage you to contribute by submitting PRs to improve the library. Usually, the PR should link to an existing issue, where the solution idea has been proposed and discussed. In case of simple fixes, where you can clearly see the problem, you can also directly submit a PR.

To get started, install all the required development dependencies:

pip install -r requirements.txt && pip install -r requirements-dev.txt && pip install -e .

The entry point for the plugin is visual/interface.py; start exploring from that file. The code should be reasonably documented, and it's quite short.

Before submitting a PR, please

  1. Run non-visual tests with pytest
  2. Lint your code with pre-commit run -a, and ensure no errors persist
  3. Verify that pytest-visual still works end-to-end with pytest --visual-reset-all && pytest --visual, and try accepting and declining changes.

Also, please verify that your function inputs and outputs are type-annotated unless there's a good reason not to do so.
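
A generic illustration of the expected style (not code from the repository):

from typing import List


def moving_average(values: List[float], window: int = 3) -> List[float]:
    # Inputs and output are annotated, as requested above.
    return [sum(values[i : i + window]) / window for i in range(len(values) - window + 1)]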

