Swab: Simple WSGI A/B testing
© 2010-2012 Oliver Cope. Released under a BSD-style license; see LICENSE.txt for details.
Introduction and sample usage
Swab helps you run A/B tests on your web applications.
When you run an A/B test experiment with Swab, visitors to your web application are randomly assigned to one of the variants you have defined. For example, you might run an experiment in which you test two color variants of a button.
You also need to define the goal you want your visitors to complete: for example, making a purchase or signing up for your service.
Swab contains WSGI middleware that tracks visitor sessions and randomly assigns every visitor to see one of the variants you have defined. Every time a variant is displayed to a visitor, a trial is recorded. Every time a goal conversion is made, that’s recorded too. Using this data, Swab calculates the conversion rate for each variant along with some basic statistics to help you decide whether there is a significant difference between the variants.
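The exact statistics Swab reports aren't specified above, but the core comparison can be sketched as a per-variant conversion rate plus a two-proportion z-test. This is an illustration of the idea, not necessarily the test Swab itself performs:

```python
import math

def conversion_rate(trials, goals):
    """Fraction of trials that converted."""
    return goals / trials if trials else 0.0

def z_score(trials_a, goals_a, trials_b, goals_b):
    """Two-proportion z-test statistic: how many standard errors apart
    the two conversion rates are (|z| > 1.96 ~ 95% confidence)."""
    p_a = conversion_rate(trials_a, goals_a)
    p_b = conversion_rate(trials_b, goals_b)
    # Pooled conversion rate under the null hypothesis of no difference
    p = (goals_a + goals_b) / (trials_a + trials_b)
    se = math.sqrt(p * (1 - p) * (1 / trials_a + 1 / trials_b))
    return (p_a - p_b) / se if se else 0.0
```

With 100 conversions from 1000 trials against 150 from 1000, the score comes out well beyond 1.96 standard errors, suggesting a real difference rather than noise.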
Setting up a Swab instance
Swab needs a directory where it can save the data files it uses for tracking trial and conversion data:
from swab import Swab
s = Swab('/tmp/.swab-test-data')
Then you need to tell swab about the experiments you want to run, the variants available and the name of the conversion goal:
s.add_experiment('button-color', ['red', 'blue'], 'signup')
Finally you need to wrap your WSGI app in swab’s middleware:
application = s.middleware(application)
Integrating swab in your app
Swab makes a number of functions available to you that you can put in your application code:
show_variant(experiment, environ)
    Return the variant name to show for the current request. In the above example, a call to show_variant('button-color', environ) would return either 'red' or 'blue'.

record_goal(experiment, environ)
    Record a goal conversion for the named experiment.
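To illustrate where these calls fit, here is a minimal WSGI app sketch. The URL paths and button markup are invented for the example, and show_variant/record_goal are stubbed out so the snippet stands alone; in a real app they would be imported from swab and the middleware would supply the session data they need:

```python
# Stubs so this sketch runs standalone; in a real app you would instead do:
#   from swab import show_variant, record_goal
def show_variant(experiment, environ):
    return 'red'   # swab picks per-visitor; fixed here for illustration
def record_goal(experiment, environ):
    pass

def application(environ, start_response):
    if environ.get('PATH_INFO') == '/signup-complete':
        # The visitor completed the goal: record the conversion
        record_goal('button-color', environ)
        body = b'Thanks for signing up!'
    else:
        # Records a trial and returns the variant for this visitor
        color = show_variant('button-color', environ)
        body = b'<button style="background:%s">Sign up</button>' % color.encode('ascii')
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [body]
```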
Test results are available at the URL /swab/results.
Swab automatically adds a Cache-Control: no-cache response header if show_variant or record_trial was called during the request. This prevents proxies from caching your test variants. It will also remove any other cache-related headers (e.g. 'ETag' or 'Last-Modified'). If you don't want this behaviour, pass cache_control=False when creating the Swab instance.
Viewing the variants
To test your competing pages, append '?swab.<experiment-name>=<variant-name>' to any URL to force a given variant to be shown.
Each visitor is assigned an identity which is persisted by means of a cookie. The identity is a base64-encoded, randomly generated byte sequence. This identity is used as the seed for an RNG, which switches visitors into test groups.
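The mechanism can be sketched as follows. The helper names are illustrative rather than swab's API, and mixing the experiment name into the seed is an assumption made here so that group assignment is independent across experiments:

```python
import base64
import os
import random

def generate_identity():
    # A base64-encoded, randomly generated byte sequence,
    # suitable for storing in a cookie
    return base64.b64encode(os.urandom(12)).decode('ascii')

def choose_variant(identity, experiment, variants):
    # Seed an RNG from the identity (plus the experiment name), so
    # the same visitor is always switched into the same test group
    rng = random.Random(identity + experiment)
    return rng.choice(variants)
```

Because the seed is derived from the persisted identity, no per-visitor assignment needs to be stored server-side: replaying the same identity always reproduces the same group.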
Every time a test is shown, a line is appended to a file at <datadir>/<experiment>/<variant>/__all__. This is triggered by calling record_trial.

Every time a goal is recorded (triggered by calling record_goal), a line is appended to a file at <datadir>/<experiment>/<variant>/<goal>.
Each log line has the format <timestamp>:<identity>\n.
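The line format can be produced and parsed with a couple of one-liners. The helper names are illustrative, and the integer-second timestamp precision is an assumption:

```python
import time

def format_log_line(identity, timestamp=None):
    # <timestamp>:<identity>\n
    if timestamp is None:
        timestamp = int(time.time())
    return '%s:%s\n' % (timestamp, identity)

def parse_log_line(line):
    # base64 identities never contain ':', so split on the first one
    timestamp, _, identity = line.rstrip('\n').partition(':')
    return int(timestamp), identity
```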
No file locking is used: it is assumed that Swab runs on a system where each line is smaller than the filesystem block size, allowing this overhead to be avoided. The lines may become interleaved, but there should be no risk of corruption even with multiple simultaneous writes. See http://www.perlmonks.org/?node_id=486488 for a discussion of the issue.
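The lock-free append described above can be sketched with os.O_APPEND, under which each write() lands atomically at the current end of file on POSIX systems (helper name illustrative):

```python
import os

def append_line(path, line):
    # O_APPEND: the kernel positions each write() at end-of-file
    # atomically, so concurrent writers can interleave whole lines
    # but not corrupt each other, provided each line is written in a
    # single write() call smaller than the filesystem block size.
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, line.encode('utf-8'))
    finally:
        os.close(fd)
```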
Changelog

- Better exclusion of bots on the server side too
- Record trial app won’t raise an error if the experiment name doesn’t exist
- Removed debug flag, the ability to force a variant is now always present
- Strip HTTP caching headers if an experiment has been invoked during the request
- Improved accuracy of conversion tracking
- Cookie path can be specified in middleware configuration
- Minor bugfixes
- Bugfix for ZeroDivisionErrors when no data has been collected
- Initial release