
Run experiments in your Django project

Project description


Have you ever found yourself in a situation where you’ve made changes to some functionality, all the tests are passing, manual tests look OK, but you’re still not convinced that you’ve covered all of the edge-cases?

You know your new implementation is faster or more stable, but you still have the feeling you’re missing something. Wouldn’t it be great if you could run both implementations side-by-side and compare the results?

Maybe you’d want to try it out on a limited set of users for a certain period of time just to flesh out all the cases you’ve missed. Or you just want to run a couple of experiments and study the effects without severely impacting the users in a negative way.

Inspired by GitHub’s Scientist and RealGeeks’ lab_tech, this project brings Joe Alcorn’s laboratory to Django’s world not only to allow you to run experiments, but to dynamically modify their impact on users. This would give you the confirmations and the peace of mind you’re looking for and your users wouldn’t be inconvenienced by potential errors.

Installation

To use this library, install it with pip:

pip install django-studies

register the Django app in your settings.py:

# project/settings.py
INSTALLED_APPS = [
    # ...
    "studies",
]

and run the migrations:

python manage.py migrate studies

Features

  • To run an experiment, instantiate the Experiment class, define the control and the candidate, and conduct the experiment. For example, a simple Django class-based view with an experiment would look like this (taken from the demo project):

from django.http import JsonResponse
from django.views import View

from studies.experiments import Experiment


class ViewWithMatchingResults(View):
    def get(self, request, *args, **kwargs):
        with Experiment(
            name="ViewWithMatchingResults",
            context={"context_key": "context_value"},
            percent_enabled=100,
        ) as experiment:
            arg = "match"
            kwargs = {"extra": "value"}
            experiment.control(
                self._get_control,
                context={"strategy": "control"},
                args=[arg],
                kwargs=kwargs,
            )
            experiment.candidate(
                self._get_candidate,
                context={"strategy": "candidate"},
                args=[arg],
                kwargs=kwargs,
            )
            data = experiment.conduct()

        return JsonResponse(data)

    def _get_control(self, result, **kwargs):
        return {"result": result, **kwargs}

    def _get_candidate(self, result, **kwargs):
        return {"result": result, **kwargs}
  • Adjust the percentage of users who’ll be impacted by this experiment via the admin:

[Screenshot: the experiment's detail page in the admin]
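The percentage controls, per request, whether the experiment actually runs. The library's own gating mechanism isn't shown here, but the general idea behind a percentage rollout can be sketched like this (a hypothetical illustration, not this library's implementation):

```python
import random


def is_enabled(percent_enabled: float) -> bool:
    # Hypothetical sketch of percentage gating: run the experiment for
    # roughly `percent_enabled` percent of requests. random.random()
    # yields a value in [0, 1), so 0 disables and 100 always enables.
    return random.random() * 100 < percent_enabled
```

Because the value is read at request time, changing it in the admin takes effect without a deploy.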
  • To add support for your own reporting system, whether it’s logging, statsd or something else, override the Experiment class’ publish method and make the call (another example from the demo project):

import json
import logging

from studies.experiments import Experiment


logger = logging.getLogger(__name__)


class ExperimentWithLogging(Experiment):
    """
    An override that provides logging support for demonstration
    purposes.
    """

    def publish(self, result):
        if result.match:
            logger.info(
                "Experiment %(name)s is a match",
                {"name": result.experiment.name},
            )
        else:
            control_observation = result.control
            candidate_observation = result.candidates[0]
            logger.info(
                json.dumps(
                    control_observation.__dict__,
                    cls=ExceptionalJSONEncoder,  # defined in `demo.overrides`
                )
            )
            logger.info(
                json.dumps(
                    candidate_observation.__dict__,
                    cls=ExceptionalJSONEncoder,
                )
            )
            logger.error(
                "Experiment %(name)s is not a match",
                {"name": result.experiment.name},
            )
  • To customize how the control's and the candidates' results are compared, override the Experiment class’ compare method:

from studies.experiments import Experiment


class MyExperiment(Experiment):
    def compare(self, control, candidate):
        return control.value['id'] == candidate.value['id']
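A common reason to override the comparison is that results carry volatile fields (timestamps, random tokens) that would trigger false mismatches. The idea, as a standalone sketch independent of this library's API:

```python
def compare_by_id(control: dict, candidate: dict) -> bool:
    # Treat two results as equivalent when their stable identifier
    # matches, ignoring volatile fields such as timestamps.
    return control["id"] == candidate["id"]


control = {"id": 42, "generated_at": "2023-01-01T00:00:00Z"}
candidate = {"id": 42, "generated_at": "2023-01-01T00:00:05Z"}
# The timestamps differ, but the results still count as a match.
```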

Caveats

As always, there are certain caveats to keep in mind. As stated in laboratory’s Caveats, if the control or the candidate has a side effect, such as a write to the database or the cache, you could end up with erroneous data or similar bugs.

At the moment, this library doesn’t provide a safe write mechanism to mitigate this situation, but it may in the future.
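Until then, one way to stay safe is to keep the candidate side-effect free: let only the control persist data, and have the candidate compute the would-be result without writing it. A minimal sketch, with an in-memory dict standing in for the database (the names here are illustrative, not part of this library):

```python
db = {}  # stands in for the real database


def control(user_id):
    # The control does the real work, including the write.
    record = {"id": user_id, "visits": 1}
    db[user_id] = record
    return record


def candidate(user_id):
    # The candidate computes the same result but never writes,
    # so a buggy candidate can't corrupt stored data.
    return {"id": user_id, "visits": 1}
```

Both callables return comparable results, so the experiment can still report matches and mismatches, while only the control touches storage.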

Contributing

To contribute to this project, take a look at CONTRIBUTING.rst.
