
Oslo messaging benchmarking orchestrator

Message bus evaluation framework

Context

This is a framework to benchmark the communication middleware supported by oslo.messaging. Its primary goal is to address the evaluation described in https://docs.openstack.org/performance-docs/latest/test_plans/massively_distribute_rpc/plan.html.

It is built on top of:

  • EnOSlib. This library helps describe the experimental workflow and enforce it, from the deployment to the performance metrics analysis.
  • ombt. This will coordinate the benchmark once all the agents are up and running.

From a high-level point of view, the framework is able to:

  • deploy a communication bus (e.g. RabbitMQ, qdr aka qpid-dispatch-router),
  • deploy a set of clients and servers that will communicate through it,
  • start a benchmark while gathering metrics.

A typical test consists of the following components:

Client 1---------+      +----------------------+     +-----> Server 1
                 |      |                      |     |
                 +----> |  Communication       | ----+-----> Server 2
Client 2--------------> |  Middleware          |     |
                 +----> |  (e.g qdr, rabbitmq) |     |
...              |      |                      |     |
                 |      +----------------------+     +------> Server n
Client n---------+              |                             /
  \                                                         /
    \                           |                         / 
      \  --  --  --  --  -- Monitoring --  --  --  --  --

Installation

  • Clone the repository:
git clone https://github.com/msimonin/ombt-orchestrator
cd ombt-orchestrator
  • Install the dependencies:
pip install -U pip
pip install -e .

On Grid'5000, you can run these commands from any frontend.

Configuration

The default configurations are currently defined in the conf.yaml file.

Command line interface

> oo
Usage: oo [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  backup       Backup environment logs [after test_case_*].
  campaign     Perform a TEST according to the (swept)...
  deploy       Claim resources from a PROVIDER and configure...
  destroy      Destroy all the running dockers (keeping...
  g5k          Claim resources on Grid'5000 (frontend).
  inventory    Generate the Ansible inventory [after g5k,...
  prepare      Configure available resources [after g5k,...
  test_case_1  Run the test case 1: one single large...
  test_case_2  Run the test case 2: multiple distributed...
  test_case_3  Run the test case 3: one single large...
  test_case_4  Run the test case 4: multiple distributed...
  vagrant      Claim resources on vagrant (localhost).

Workflow to run a test case

  • Deploying and launching the benchmark (the default driver, broker, is defined in the configuration file)
# the default conf.yaml in $PWD will be read
> oo deploy --driver=broker vagrant

# Launch the benchmark
> oo test_case_1 --nbr_clients 10 --nbr_servers 2

Adapt the provider to your target environment (e.g. g5k).

  • Real-time metrics visualisation

Grafana is available on port 3000 of the control node (check the inventory file).

  • Backing up the environment
> oo backup

The files retrieved by this action are stored in the current/backup directory by default.

  • Cleaning up and preparing for the next run
# Preparing the next run by cleaning the environment
> oo destroy
> oo deploy vagrant

# Next run
> oo test_case_1 --nbr_clients 20 --nbr_servers 2

It is possible to force an experimentation directory with --env mydir.

Note that scripting from Python is also possible using the functions defined in task.py (see the sketch below).
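As a sketch only, such a script might look like the following, assuming task.py exposes functions mirroring the CLI commands (deploy, test_case_1, backup, destroy); the import path, function names and parameters are illustrative and should be checked against the actual module.

# Hypothetical sketch: names and signatures are assumed to mirror the CLI commands.
from ombt_orchestrator import task  # assumed import path; see task.py in the repository

task.deploy(provider="vagrant", driver="broker")   # claim and configure the resources
task.test_case_1(nbr_clients=10, nbr_servers=2)    # run the first benchmark
task.backup()                                      # retrieve logs and metrics
task.destroy()                                     # clean up before the next run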

Workflow to run a campaign

  • A campaign is a batch execution of several configurations for a given test case. The deployment and execution of a benchmark are read from a configuration file. For example, to run the first test case enabled in the framework:
> oo campaign --provider g5k test_case_1
  • Alternatively, a campaign can be executed in an incremental mode, in which deployments are performed only when a different driver or call_type is defined. Incremental campaigns apply a different semantic to the parameters defined in the configuration: the parameter lists are combined by a dot product (i.e., a zip operation between the lists of parameters), as illustrated by the sketch below. These parameters are defined per test case as follows:

    • Test case 1: nbr_clients, nbr_servers and pause
    • Test case 2: nbr_topics and pause
    • Test case 3: nbr_clients, nbr_servers and pause (only rpc-cast calls)
    • Test case 4: nbr_topics and pause (only rpc-cast calls)
  • To execute an incremental campaign, be sure to use the ombt image msimonin/ombt:singleton instead of the default one, and run:

> oo campaign --incremental --provider g5k test_case_1
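As an illustration of the zip semantics, here is a minimal, standalone Python sketch; the parameter names come from test case 1, the values are arbitrary, and the cross-product line is only shown for contrast, assuming the non-incremental mode sweeps all combinations.

# Standalone illustration of how the incremental mode combines parameter lists.
from itertools import product

nbr_clients = [10, 20, 40]   # arbitrary example values
nbr_servers = [2, 4, 8]
pause = [0.0, 0.1, 0.5]

# Assumed default behaviour: sweep every combination (cross product) -> 27 runs.
full_sweep = list(product(nbr_clients, nbr_servers, pause))

# Incremental mode: combine the lists position by position (zip) -> 3 runs:
# (10, 2, 0.0), (20, 4, 0.1), (40, 8, 0.5)
incremental_sweep = list(zip(nbr_clients, nbr_servers, pause))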

Misc.

  • Bind clients or servers to specific bus agents:

To bind ombt-clients to a specific bus instance, you can declare the following roles: [bus, bus-client].

Following the same idea, ombt-servers can be bound to a specific bus instance using the roles [bus, bus-server] (see the sketch below).
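As a sketch only, a resources description using such role lists might look like the following, expressed here as the equivalent Python structure; the surrounding keys (machines, control) are assumptions and the actual layout of conf.yaml may differ.

# Hypothetical sketch of a resources description with bound roles; only the
# role lists come from the text above, the surrounding structure is assumed.
resources = {
    "machines": [
        {"roles": ["bus", "bus-client"]},   # bus instance serving the ombt-clients
        {"roles": ["bus", "bus-server"]},   # bus instance serving the ombt-servers
        {"roles": ["control"]},             # assumed role for the monitoring/control node
    ]
}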
