
A Python Performance Testing Framework

Project description

Mite


Load Testing Framework

Mite is a load testing framework with distributed components written in Python. Requests are executed asynchronously, allowing large throughput from relatively small infrastructure.

Installation

pip install mite

This requires that you have libcurl installed on your system (including C header files for development, which are often distributed separately from the shared libraries). On Ubuntu, this can be accomplished with the command:

sudo apt install libcurl4 libcurl4-openssl-dev

(NB we recommend using a version of libcurl linked against openssl rather than gnutls, since the latter has memory leak problems)
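
If you are unsure whether the development headers are present, the curl-config tool (installed alongside libcurl's dev package on most distributions) should print a version:

curl-config --version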

You can also use the dockerfile included in this repository to run mite. In order to get a shell in a container with mite installed, run these commands (assuming you have docker installed on your machine):

docker build -t mite .
docker run --rm -it mite sh

Run mite --help for a full list of commands

Your first scenario

Scenarios are a combination of three things: a set of journeys to run, a datapool to provide test data for the journeys (if applicable), and a volume model.

Journeys

Journeys are async Python functions, and they are where you put the logic of what you're trying to achieve. Below is a simple example:

import asyncio

async def journey(ctx, arg1, arg2):
    async with ctx.transaction('test1'):
        ctx.send('test_message', content=ctx.config.get('test_msg', 'Not set'), sum=arg1 + arg2)
        await asyncio.sleep(0.5)

This journey just sends a log message to be collected by the framework and waits for half a second. It takes three arguments: a context and two numbers (which will be supplied by the datapool; see below).

Context

The context is important for every journey. It provides a number of useful methods for timing and naming transactions and for sending messages, and it includes HTTP functionality from acurl by default. Additional functionality can be made available to a journey by attaching it to the context.

In the above example we see a transaction being named with async with ctx.transaction('test1'):, which captures timing for the indented block.

We also see that the config is attached to the context: ctx.config.get('test_msg', 'Not set') reads a config value, and in this case that value is sent to the log collector with ctx.send('test_message', content=ctx.config.get('test_msg', 'Not set')).
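
As a slightly fuller sketch (the journey, transaction and config names here are hypothetical), a journey can time several independently named transactions and report intermediate results using only the calls shown above:

import asyncio

async def checkout_journey(ctx, user_id):
    # Each named transaction is timed separately in the collected results.
    async with ctx.transaction('add_to_basket'):
        await asyncio.sleep(0.1)  # stand-in for real work, e.g. an HTTP call
    async with ctx.transaction('pay'):
        ctx.send('payment_attempt', user=user_id,
                 gateway=ctx.config.get('payment_gateway', 'sandbox'))
        await asyncio.sleep(0.3)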

Datapools

To feed data into your test journey, you'll need a datapool. Several of these are already provided in mite and usually read data in from an iterable. To specify a custom datapool implementation, you simply need a class that implements a checkin method which adds data to the pool and a checkout method which removes data from the pool to be used by journeys.

For the above journey, which expects two arguments, we will show an example of the RecyclableIterableDataPool.

from mite.datapools import RecyclableIterableDataPool

datapool = RecyclableIterableDataPool([(i, i+2) for i in range(5000)])

This pool will share data out to runners and check it back in when the runners are done with that block of data. In this case it cycles through a list of tuples which each contain two integers.
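
If none of the built-in pools fit, a custom implementation only needs the checkin and checkout methods described above. The sketch below mirrors the recyclable pool; the item type and exhaustion exception are assumed to live in mite.datapools, so verify the names and exact signatures (including whether the methods are async) against your installed mite version:

from collections import deque

# Assumed imports -- check mite.datapools in your installed version for the exact names.
from mite.datapools import DataPoolItem, DataPoolExhausted


class RecyclablePairsPool:
    """Hypothetical custom pool: hands out (arg1, arg2) tuples and recycles them on checkin."""

    def __init__(self, pairs):
        self._available = deque(enumerate(pairs))
        self._checked_out = {}

    async def checkout(self, config):
        # The framework calls this when a runner needs data for a journey instance.
        if not self._available:
            raise DataPoolExhausted()  # nothing left to hand out right now
        id, data = self._available.popleft()
        self._checked_out[id] = data
        return DataPoolItem(id, data)

    async def checkin(self, id):
        # The framework calls this when the runner is done; put the data back in rotation.
        self._available.append((id, self._checked_out.pop(id)))

A pool like this could then be constructed the same way as the built-in one, e.g. RecyclablePairsPool([(i, i + 2) for i in range(5000)]).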

Volume Model

A volume model defines how many instances of the journey should be run within a window of time. The window is defined as a start and an end time (measured in seconds since the beginning of the test), which will be fed to the model by the framework. This allows complex calculations to specify the current volume based on the current time. The volume model can also raise StopVolumeModel to inform mite that the load injection should stop for a given journey. For this example we'll use a simple volume model which merely runs ten journeys simultaneously, forever.

volumemodel = lambda start, end: 10
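
Because the model receives the window's start and end times, it can shape the load over the course of the test. A slightly richer sketch ramps linearly up to 50 concurrent journeys during the first minute and stops after ten minutes; the import path for StopVolumeModel is an assumption here, so adjust it to wherever your mite version exposes it:

from mite.scenario import StopVolumeModel  # assumed import path -- adjust for your mite version


def ramping_volumemodel(start, end):
    if start >= 600:
        # Ten minutes in: tell mite to stop injecting load for this journey.
        raise StopVolumeModel
    # Linear ramp from 1 to 50 journeys over the first 60 seconds, then hold at 50.
    return min(50, int(50 * start / 60) + 1)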

Scenario

We now need to package the journey, datapool and volume model into a scenario. This is as simple as defining a function which returns a list of triplets of (journey name, datapool, volumemodel).

def scenario():
    return [
        ['mite.example:journey', datapool, volumemodel],
    ]

The journey name should be a string with two parts separated by a colon. The first part is the name of a python module that is importable; the second is the name of a journey (an async function) within that module. It is necessary to specify this as a string, rather than as a python object, because the journey will be executed in a different python process than the scenario function is. Thus, we need a name for the journey that allows any python process to find it.

The volume model and data pool, on the other hand, are only used in the python process where the scenario function runs. They are both python objects.
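
Putting the pieces together, everything above fits in one importable module. The sketch below uses a hypothetical module name (my_test.py, importable as my_test); the journey name string in the scenario must match wherever you actually put the code:

# my_test.py -- hypothetical module name; any module importable by every mite process will do
import asyncio

from mite.datapools import RecyclableIterableDataPool


async def journey(ctx, arg1, arg2):
    async with ctx.transaction('test1'):
        ctx.send('test_message',
                 content=ctx.config.get('test_msg', 'Not set'),
                 sum=arg1 + arg2)
        await asyncio.sleep(0.5)


datapool = RecyclableIterableDataPool([(i, i + 2) for i in range(5000)])

volumemodel = lambda start, end: 10


def scenario():
    return [
        ['my_test:journey', datapool, volumemodel],
    ]

With such a module on the Python path, the commands in the following sections work with my_test in place of mite.example.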

Testing the journey

Before running the scenario, we should test the journey in isolation as a scenario can be made up of multiple journeys. This can be done with the mite journey test command. We just need to pass the name of the journey and the datapool it requires:

MITE_CONF_test_msg="Hello from mite" mite journey test mite.example:journey mite.example:datapool

If something goes wrong, adding the --debugging flag to this command will drop execution into a debug session. The choice of debugger can be managed by setting the PYTHONBREAKPOINT environment variable before running mite. Python's built-in pdb debugger is invoked by default, but this can be changed to use, say, the ipdb debugger:

pip install ipdb
export PYTHONBREAKPOINT=ipdb.set_trace
export PYTHONPOSTMORTEM=ipdb.post_mortem

PYTHONPOSTMORTEM is a mite-specific extension to PEP 553, the PEP that defines the PYTHONBREAKPOINT functionality.
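
You can also drop into the same debugger at a specific point in a journey by calling Python's built-in breakpoint(), which honours PYTHONBREAKPOINT as described in PEP 553; this is most useful when running a single journey locally with mite journey test (the journey and message names below are hypothetical):

async def journey(ctx):
    async with ctx.transaction('debug_me'):
        breakpoint()  # opens pdb/ipdb here, depending on PYTHONBREAKPOINT
        ctx.send('made_it_past_the_breakpoint')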

Run the scenario

In order to run the finished scenario locally, which will include all the necessary fixtures, run the following command:

MITE_CONF_test_msg="Hello from mite" mite scenario test mite.example:scenario

Distributed deployments

In order to scale up mite's capability to inject load, you will need to run it as a distributed suite of components. You can learn more about how to accomplish this in the documentation.

Deploy distributed mite with docker compose

Build mite image:

docker build -t mite .

Run mite deployments:

Use make from the mite/local directory:

make up # start mite containers
make status # check status of mite containers
make clean # remove all mite containers

or

docker-compose -f docker_compose.yml up

For more information on distributed mite usage, see the documentation.

Maintainers

If you run into any trouble or need support getting to grips with Mite, reach out on Slack if you work at Sky, or contact one of the maintainers if you're an external contributor:

Jordan Brennan
Aaron Ecay
Davide Annunziata
Ryan Linnit
Cosmin Purcherea

Special thanks to the following contributors:

