Growth Book Python
Powerful A/B testing for Python apps.
- No external dependencies
- Lightweight and fast
- No HTTP requests; everything is defined and evaluated locally
- Python 3.6+
- 100% test coverage
- Flexible experiment targeting
- Use your existing event tracking (GA, Segment, Mixpanel, custom)
- Adjust variation weights and targeting without deploying new code
Note: This library is just for running A/B tests in Python. To analyze results, use the Growth Book App (https://github.com/growthbook/growthbook).
Installation
pip install growthbook (recommended), or copy growthbook.py into your project.
Quick Usage
```python
from growthbook import GrowthBook, Experiment

userId = "123"

def on_experiment_viewed(experiment, result):
    # Use whatever event tracking system you want
    analytics.track(userId, "Experiment Viewed", {
        'experimentId': experiment.key,
        'variationId': result.variationId
    })

gb = GrowthBook(
    user = {"id": userId},
    trackingCallback = on_experiment_viewed
)

result = gb.run(Experiment(
    key = "my-experiment",
    variations = ["A", "B"]
))

print(result.value) # "A" or "B"
```
GrowthBook class
The GrowthBook constructor has the following parameters:
- enabled (boolean) - Flag to globally disable all experiments. Default true.
- user (dict) - Dictionary of user attributes that are used to assign variations
- groups (dict) - A dictionary of which groups the user belongs to (key is the group name, value is boolean)
- url (string) - The URL of the current request (if applicable)
- overrides (dict) - Nested dictionary of experiment property overrides (used for Remote Config)
- forcedVariations (dict) - Dictionary of forced experiment variations (used for QA)
- qaMode (boolean) - If true, random assignment is disabled and only explicitly forced variations are used
- trackingCallback (callable) - A function that takes experiment and result as arguments
Experiment class
Below are all of the possible properties you can set for an Experiment:
- key (string) - The globally unique tracking key for the experiment
- variations (any[]) - The different variations to choose between
- weights (float[]) - How to weight traffic between variations. Must add to 1.
- status (string) - "running" is the default and always active. "draft" is only active during QA and development. "stopped" is only active when forcing a winning variation to 100% of users.
- coverage (float) - What percent of users should be included in the experiment (between 0 and 1, inclusive)
- url (string) - Users can only be included in this experiment if the current URL matches this regex
- include (callable) - A function that returns true if the user should be part of the experiment and false if they should not
- groups (string[]) - Limits the experiment to specific user groups
- force (int) - All users included in the experiment will be forced into the specified variation index
- hashAttribute (string) - What user attribute should be used to assign variations (defaults to "id")
Running Experiments
Run experiments by calling gb.run(experiment), which returns an object with a few useful properties:
```python
result = gb.run(Experiment(
    key = "my-experiment",
    variations = ["A", "B"]
))

# Whether the user is part of the experiment
print(result.inExperiment) # True or False

# The index of the assigned variation
print(result.variationId) # 0 or 1

# The value of the assigned variation
print(result.value) # "A" or "B"

# The user attribute used to assign a variation
print(result.hashAttribute) # "id"

# The value of that attribute
print(result.hashValue) # e.g. "123"
```
The inExperiment flag is only set to true if the user was randomly assigned a variation. If the user failed any targeting rules or was forced into a specific variation, this flag will be false.
Example Experiments
3-way experiment with uneven variation weights:
```python
gb.run(Experiment(
    key = "3-way-uneven",
    variations = ["A", "B", "C"],
    weights = [0.5, 0.25, 0.25]
))
```
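Under the hood, weights like these can be thought of as cumulative ranges over the unit interval: a user's hash value in [0, 1) selects the variation whose range it lands in. A minimal sketch of that idea (not the library's actual implementation):

```python
# Conceptual sketch: map a uniform value n in [0, 1) to a variation index
# using cumulative weight ranges. With weights [0.5, 0.25, 0.25], the
# ranges are [0, 0.5), [0.5, 0.75), and [0.75, 1.0).

def choose_variation(n: float, weights: list) -> int:
    """Return the index of the variation whose cumulative range contains n."""
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if n < cumulative:
            return i
    return -1  # n fell outside all ranges (weights did not sum to 1)

weights = [0.5, 0.25, 0.25]
print(choose_variation(0.3, weights))  # 0 (falls in [0, 0.5))
print(choose_variation(0.6, weights))  # 1 (falls in [0.5, 0.75))
print(choose_variation(0.9, weights))  # 2 (falls in [0.75, 1.0))
```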
Slow rollout (10% of users who opted into "beta" features):
```python
# User is in the "qa" and "beta" groups
gb = GrowthBook(
    user = {"id": "123"},
    groups = {
        "qa": isQATester(),
        "beta": betaFeaturesEnabled()
    }
)

gb.run(Experiment(
    key = "slow-rollout",
    variations = ["A", "B"],
    coverage = 0.1,
    groups = ["beta"]
))
```
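Conceptually, coverage and groups act as gates before bucketing: the user must belong to at least one of the listed groups, and (assuming a uniform hash value in [0, 1)) only the first coverage fraction of users pass. A simplified sketch, not the library's actual logic:

```python
# Illustration only: the two inclusion gates from the example above.

def is_included(hash_value: float, coverage: float,
                user_groups: dict, experiment_groups: list) -> bool:
    # Group gate: user must belong to at least one of the experiment's groups
    in_group = any(user_groups.get(g, False) for g in experiment_groups)
    # Coverage gate: only the first `coverage` fraction of the hash space passes
    return in_group and hash_value < coverage

user_groups = {"qa": True, "beta": True}
print(is_included(0.05, 0.1, user_groups, ["beta"]))  # True (in group, inside 10%)
print(is_included(0.50, 0.1, user_groups, ["beta"]))  # False (outside 10% coverage)
print(is_included(0.05, 0.1, {"qa": True}, ["beta"])) # False (not in "beta" group)
```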
Complex variations and custom targeting:

```python
result = gb.run(Experiment(
    key = "complex-variations",
    variations = [
        {'color': "blue", 'size': "large"},
        {'color': "green", 'size': "small"}
    ],
    url = "^/post/[0-9]+$",
    include = lambda: isPremium or creditsRemaining > 50
))

# Either "blue,large" OR "green,small"
print(result.value["color"] + "," + result.value["size"])
```
Assign variations based on something other than user id:

```python
gb = GrowthBook(
    user = {
        "id": "123",
        "company": "growthbook"
    }
)

gb.run(Experiment(
    key = "by-company-id",
    variations = ["A", "B"],
    hashAttribute = "company"
))

# Users in the same company will always get the same variation
```
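The reason this works is that assignment is deterministic: the chosen attribute (combined with the experiment key) is hashed to a number, and identical inputs always land in the same bucket. Here is a conceptual sketch using SHA-256; the library's actual hash function differs:

```python
import hashlib

# Conceptual sketch of deterministic bucketing. Hashing the attribute value
# together with the experiment key yields a stable number in [0, 1], so the
# same company always gets the same variation for a given experiment.

def hash_to_unit_interval(value: str, experiment_key: str) -> float:
    digest = hashlib.sha256(f"{value}{experiment_key}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF

n1 = hash_to_unit_interval("growthbook", "by-company-id")
n2 = hash_to_unit_interval("growthbook", "by-company-id")
assert n1 == n2       # same company + same experiment -> same bucket
assert 0.0 <= n1 <= 1.0
```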
Overriding Experiment Configuration
It's common practice to adjust experiment settings after a test is live: slowly ramping up traffic, stopping a test automatically if guardrail metrics go down, or rolling out a winning variation to 100% of users.
For example, to roll out a winning variation to 100% of users:
```python
gb = GrowthBook(
    user = {"id": "123"},
    overrides = {
        "experiment-key": {
            "status": "stopped",
            "force": 1
        }
    }
)

result = gb.run(Experiment(
    key = "experiment-key",
    variations = ["A", "B"]
))

print(result.value) # Always "B"
```
The full list of experiment properties you can override is:
- status
- force
- weights
- coverage
- groups
- url
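Conceptually, applying an override is just a per-experiment dictionary merge keyed by the experiment's key. A simplified illustration (the real library handles this internally):

```python
# Illustration only: look up the overrides for an experiment by key and
# merge them over the experiment's own settings. Overridden keys win.

def apply_overrides(experiment: dict, overrides: dict) -> dict:
    merged = dict(experiment)
    merged.update(overrides.get(experiment["key"], {}))
    return merged

experiment = {"key": "experiment-key", "variations": ["A", "B"], "status": "running"}
overrides = {"experiment-key": {"status": "stopped", "force": 1}}

merged = apply_overrides(experiment, overrides)
print(merged["status"])  # "stopped"
print(merged["force"])   # 1
```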
If you use the Growth Book App (https://github.com/growthbook/growthbook) to manage experiments, there's a built-in API endpoint you can hit that returns overrides in this exact format. It's a great way to make sure your experiments are always up-to-date.
Django
For Django (and other web frameworks), we recommend adding a simple middleware where you instantiate the GrowthBook object:

```python
from growthbook import GrowthBook

def growthbook_middleware(get_response):
    def middleware(request):
        request.gb = GrowthBook(
            # ...
        )
        response = get_response(request)

        request.gb.destroy() # Cleanup
        return response

    return middleware
```
Then, you can easily run an experiment in any of your views:
```python
from growthbook import Experiment

def index(request):
    result = request.gb.run(Experiment(
        # ...
    ))

    # ...
```
Event Tracking and Analyzing Results
This library only handles assigning variations to users. The two other parts required for an A/B testing platform are Tracking and Analysis.
Tracking
It's likely you already have some event tracking on your site with the metrics you want to optimize (Segment, Mixpanel, etc.).
For A/B tests, you just need to track one additional event - when someone views a variation.
```python
# Specify a tracking callback when instantiating the client
gb = GrowthBook(
    user = {"id": "123"},
    trackingCallback = on_experiment_viewed
)
```
Below are examples for a few popular event tracking tools:
Segment
```python
def on_experiment_viewed(experiment, result):
    analytics.track(userId, "Experiment Viewed", {
        'experimentId': experiment.key,
        'variationId': result.variationId
    })
```
Mixpanel
```python
def on_experiment_viewed(experiment, result):
    mp.track(userId, "$experiment_started", {
        'Experiment name': experiment.key,
        'Variant name': result.variationId
    })
```
Analysis
For analysis, there are a few options:
- Online A/B testing calculators
- Built-in A/B test analysis in Mixpanel/Amplitude
- Python or R libraries and a Jupyter Notebook
- The Growth Book App (https://github.com/growthbook/growthbook)
The Growth Book App
Managing experiments and analyzing results at scale can be complicated, which is why we built the Growth Book App.
- Query multiple data sources (Snowflake, Redshift, BigQuery, Mixpanel, Postgres, Athena, and Google Analytics)
- Bayesian statistics engine with support for binomial, count, duration, and revenue metrics
- Drill down into A/B test results (e.g. by browser, country, etc.)
- Lightweight idea board and prioritization framework
- Document everything! (upload screenshots, add markdown comments, and more)
- Automated email alerts when tests become significant
Integration is super easy:
- Create a Growth Book API key
- Periodically fetch the latest experiment overrides from the API and cache in Redis, Mongo, etc.
- At the start of your app, pass in the overrides to the GrowthBook constructor
Now you can start/stop tests, adjust coverage and variation weights, and apply a winning variation to 100% of traffic, all within the Growth Book App without deploying code changes to your site.
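Step 2 of the integration above can be sketched as a small TTL cache around the fetch. The fetch_overrides function below is a stand-in for the real API call; the endpoint and response shape are assumptions here:

```python
import time

# Hypothetical sketch: cache the overrides dictionary and only re-fetch
# once the TTL has expired, so most requests are served from the cache.

class OverridesCache:
    def __init__(self, fetch, ttl_seconds=60):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._cached = None
        self._fetched_at = 0.0

    def get(self):
        now = time.time()
        if self._cached is None or now - self._fetched_at > self._ttl:
            self._cached = self._fetch()
            self._fetched_at = now
        return self._cached

def fetch_overrides():
    # In production this would be an HTTP GET against the Growth Book API,
    # with the JSON response parsed into the overrides format shown earlier.
    return {"experiment-key": {"status": "stopped", "force": 1}}

cache = OverridesCache(fetch_overrides, ttl_seconds=60)
overrides = cache.get()
print(overrides["experiment-key"]["force"])  # 1
```

The cached dictionary is what you would pass as the overrides parameter when constructing GrowthBook for each request.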