A Python Performance Testing Framework

Project description

Mite

Load Testing Framework

Mite is a load testing framework with distributed components written in Python. Requests are executed asynchronously, allowing large throughput from relatively small infrastructure.

Installation

python3 -m venv ~/.virtualenvs/mite
source ~/.virtualenvs/mite/bin/activate

pip install -r requirements.txt

This requires that you have libcurl installed on your system (including C header files for development, which are often distributed separately from the shared libraries). On Ubuntu, this can be accomplished with the command:

sudo apt install libcurl4 libcurl4-openssl-dev

(NB we recommend using a version of libcurl linked against openssl rather than gnutls, since the latter has memory leak problems)

You can also use the Dockerfile included in this repository to run mite. To get a shell in a container with mite installed, run these commands (assuming you have Docker installed on your machine):

docker build -t mite .
docker run --rm -it mite sh

Run mite --help for a full list of commands.

Your first scenario

Scenarios are a combination of three things: a set of journeys to run, a datapool to provide test data for the journeys (if applicable), and a volume model.

Journeys

Journeys are async Python functions, and are where you put the logic of what you're trying to achieve. Below is a simple example:

import asyncio

async def journey(ctx, arg1, arg2):
    async with ctx.transaction('test1'):
        ctx.send('test_message', content=ctx.config.get('test_msg', 'Not set'), sum=arg1 + arg2)
        await asyncio.sleep(0.5)

This journey just sends a log message to be collected by the framework and waits for half a second. It takes three arguments: a context and two numbers (which will be supplied by the datapool; see below).

Context

The context is important for every journey. It provides a number of useful methods for timing/naming transactions and sending messages, and by default includes HTTP functionality from acurl. Additional functionality can be made available to a journey by attaching it to the context.

The above example names a transaction with async with ctx.transaction('test1'):, which captures timing for the indented block.

We also see that the config is attached to the context and read with ctx.config.get('test_msg', 'Not set'); in this case, that config value is sent to the log collector with ctx.send('test_message', content=ctx.config.get('test_msg', 'Not set')).

Datapools

To feed data into your test journey, you'll need a datapool. Several of these are already provided in mite and usually read data in from an iterable. To specify a custom datapool implementation, you simply need a class that implements a checkin method which adds data to the pool and a checkout method which removes data from the pool to be used by journeys.
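As an illustration of that checkin/checkout contract, a minimal custom pool might look like the sketch below. This is a simplified, synchronous illustration of the idea, not mite's actual base class or exact method signatures; consult the mite.datapools source for the real interface.

```python
from collections import deque


class SimpleRecyclablePool:
    """Illustrative datapool: checkout hands data out, checkin recycles it."""

    def __init__(self, items):
        self._queue = deque(items)

    def checkout(self):
        # Remove a piece of data from the pool for a journey to use.
        return self._queue.popleft()

    def checkin(self, item):
        # Return the data to the pool so another journey can reuse it.
        self._queue.append(item)


pool = SimpleRecyclablePool([(i, i + 2) for i in range(3)])
first = pool.checkout()  # (0, 2)
pool.checkin(first)      # recycled to the back of the queue
```

Because checked-in items go to the back of the queue, the pool cycles through its data indefinitely, which is the behaviour the built-in RecyclableIterableDataPool (shown next) provides.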

For the above journey, which expects two arguments, we will show an example of the RecyclableIterableDataPool.

from mite.datapools import RecyclableIterableDataPool

datapool = RecyclableIterableDataPool([(i, i+2) for i in range(5000)])

This pool will share data out to runners and check it back in when the runners are done with that block of data. In this case it cycles through a list of tuples which each contain two integers.

Volume Model

A volume model defines how many instances of the journey should be run within a window of time. The window is defined by a start and an end time (measured in seconds since the beginning of the test), which the framework feeds to the model. This allows complex calculations that set the current volume based on the current time. The volume model can also raise StopScenario to tell mite that load injection should stop for a given journey. For this example we'll use a simple volume model that merely runs ten journeys simultaneously, forever.

volumemodel = lambda start, end: 10
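A volume model can also use the window's start time to shape load over the run. The sketch below ramps from 0 to 50 concurrent journeys over the first 60 seconds, holds, then raises StopScenario after 300 seconds. Note that StopScenario is defined locally here as a stand-in so the example is self-contained; in a real test you would import mite's own StopScenario exception (check the mite documentation for the exact import path).

```python
class StopScenario(Exception):
    """Stand-in for mite's StopScenario exception; import the real one from mite."""


def ramping_volume_model(start, end):
    # Stop injecting load for this journey after 300 seconds.
    if start >= 300:
        raise StopScenario
    # Ramp linearly from 0 to 50 journeys over the first 60 seconds, then hold.
    return min(50, int(start * 50 / 60))
```

Because the framework supplies the window boundaries, any function of start and end that returns an integer (or raises StopScenario) can serve as a volume model.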

Scenario

We now need to package the journey, datapool and volume model into a scenario. This is as simple as defining a function that returns a list of triplets of (journey name, datapool, volume model).

def scenario():
    return [
        ['mite.example:journey', datapool, volumemodel],
    ]

The journey name should be a string with two parts separated by a colon. The first part is the name of an importable Python module; the second is the name of a journey (an async function) within that module. It must be specified as a string, rather than as a Python object, because the journey will be executed in a different Python process from the one where the scenario function runs. We therefore need a name for the journey that any Python process can use to find it.

The volume model and data pool, on the other hand, are only used in the python process where the scenario function runs. They are both python objects.
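The "module:name" convention can be resolved in any process with the standard library's importlib, roughly as follows. This is an illustration of why a string name works across processes, not mite's actual loader code:

```python
import importlib


def resolve(name):
    # Split "package.module:attribute" and import the object it names.
    module_name, attribute = name.split(":")
    module = importlib.import_module(module_name)
    return getattr(module, attribute)


sleep = resolve("asyncio:sleep")  # resolves to asyncio.sleep
```

Any worker process that can import the module can therefore recover the journey function from its string name.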

Testing the journey

Before running the scenario, we should test the journey in isolation, since a scenario can be made up of multiple journeys. This can be done with the mite journey test command, passing the name of the journey and the datapool it requires:

MITE_CONF_test_msg="Hello from mite" mite journey test mite.example:journey mite.example:datapool

If something goes wrong, adding the --debugging flag to this command will drop execution into a debug session. The choice of debugger can be managed by setting the PYTHONBREAKPOINT environment variable before running mite. Python's built-in pdb debugger is invoked by default, but this can be changed to use, say, the ipdb debugger:

pip install ipdb
export PYTHONBREAKPOINT=ipdb.set_trace
export PYTHONPOSTMORTEM=ipdb.post_mortem

PYTHONPOSTMORTEM is a mite-specific extension to PEP 553, which defines the PYTHONBREAKPOINT functionality.

Run the scenario

To run the finished scenario locally, including all the necessary fixtures, run the following command:

MITE_CONF_test_msg="Hello from mite" mite scenario test mite.example:scenario

Distributed deployments

In order to scale up miteʼs capability to inject load, you will need to run it as a distributed suite of components. You can learn more about how to accomplish this in the documentation.

Deploy distributed mite with docker compose

Build mite image:

docker build -t mite .

Run mite deployments:

Use make from the mite/local directory:

make up # start mite containers
make status # check status of mite containers
make clean # remove all mite containers

or

docker-compose -f docker_compose.yml up

For more information on distributed mite usage, see the documentation.

Maintainers

If you run into any trouble or need support getting to grips with Mite, reach out on Slack if you work at Sky, or contact one of the maintainers if you're an external contributor:

Jordan Brennan
Aaron Ecay
Davide Annunziata
Arron Canham

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mite-0.1.16.tar.gz (4.5 MB)

Uploaded Source

Built Distribution

mite-0.1.16-py3-none-any.whl (69.6 kB)

Uploaded Python 3

File details

Details for the file mite-0.1.16.tar.gz.

File metadata

  • Download URL: mite-0.1.16.tar.gz
  • Upload date:
  • Size: 4.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.6

File hashes

Hashes for mite-0.1.16.tar.gz
Algorithm Hash digest
SHA256 e68a58a5cdc227979000408dbaf78dbec10dd5f40e9f78cab7dea715947e01fe
MD5 8ae74047a68fe9a9d7a1a3d65dbd1e76
BLAKE2b-256 32c6875d3a5095edcc7223d80e1c7e2456c560580900dc740df3346f6435d1ff


File details

Details for the file mite-0.1.16-py3-none-any.whl.

File metadata

  • Download URL: mite-0.1.16-py3-none-any.whl
  • Upload date:
  • Size: 69.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.6

File hashes

Hashes for mite-0.1.16-py3-none-any.whl
Algorithm Hash digest
SHA256 348d215601937fafb54dd642d21a30266c44b2ac0293781ddd6c2040badf3a5b
MD5 fe48f60f6952740bbabccb51605962f5
BLAKE2b-256 8a112c3ee46d2c9bc68a91bd4bceb86bf7e1396865b99b2d895b7d87fe2ee041

