

Test network virtual appliance using Docker containers

General information

The project

This project is a Python library for testing network virtual appliances.

Author

Alexey Bogdanenko

License

Alpy is licensed under the GNU General Public License, version 3 or later (SPDX-License-Identifier: GPL-3.0-or-later). See COPYING for more details.

Description

Alpy manages containers via the Docker Python API.

Alpy interacts with QEMU using the Python API of the QEMU Monitor Protocol (QMP). QMP is a JSON-based protocol that allows applications to communicate with a QEMU instance.
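
For illustration, a minimal QMP exchange looks like this (the commands come from the QMP specification; the exact messages alpy sends may differ). The client first negotiates capabilities, then issues commands such as "cont", which resumes the virtual CPU:

{ "execute": "qmp_capabilities" }
{ "return": {} }
{ "execute": "cont" }
{ "return": {} }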

Alpy gives the user a Pexpect object for interacting with the serial console. The Pexpect object is configured to log console input and output via the standard logging module.
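
The sketch below shows the general idea using plain pexpect and logging. It is not alpy's API; the TCP address, adapter class and prompt are assumptions made for this illustration.

import logging
import socket
import pexpect.fdpexpect

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("console")

class ConsoleLogAdapter:
    """Minimal file-like object forwarding console bytes to the logging module."""
    def write(self, data):
        text = data.decode("utf-8", errors="backslashreplace").rstrip()
        if text:
            logger.info(text)
    def flush(self):
        pass

# Assume QEMU exposes the guest serial console on a local TCP port.
sock = socket.create_connection(("127.0.0.1", 5000))
console = pexpect.fdpexpect.fdspawn(sock, timeout=60)
console.logfile = ConsoleLogAdapter()
console.expect(b"login: ")
console.sendline(b"root")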

Alpy is packaged and published to PyPI. The package can be installed using pip.
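
For example:

pip install alpy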

There are unit tests (pytest) and integration tests in the GitLab CI pipeline. Alpy is tested and works on the latest Ubuntu release and the latest Ubuntu LTS release.

Examples

The alpy library repository includes scripts and modules to build a simple appliance called Rabbit. Rabbit is Alpine Linux with a few packages pre-installed. Having this simple DUT makes it possible to demonstrate the library's features and capabilities. The tests verify a few features of the network appliance, for example:

  • IPv4 routing (see rabbit/tests/forward-ipv4/main.py)

  • rate-limiting network traffic (see rabbit/tests/rate-limit/main.py)

  • load-balancing HTTP requests (see rabbit/tests/load-balancing/main.py)

The tests are executed automatically in the GitLab CI pipeline.

Example network (test rate-limit):

+-------------------------------------+
|                                     |
|          Device under test          |
|          rate limit = 1mbps         |
+-------+--------------------+--------+
        |                    |
        |                    |
        |                    |
+-------+--------+   +-------+--------+
|                |   |                |
| 192.168.1.1/24 |   | 192.168.1.2/24 |
|                |   |                |
| node0          |   | node1          |
| iperf3 client  |   | iperf3 server  |
+----------------+   +----------------+

Example test output:

INFO     __main__               Test description: Check that rabbit rate-limits traffic.
INFO     alpy.node              Create tap interfaces...
INFO     alpy.node              Create tap interfaces... done
INFO     alpy.qemu              Initialize QMP monitor...
INFO     alpy.qemu              Initialize QMP monitor... done
INFO     alpy.qemu              Start QEMU...
INFO     alpy.qemu              Start QEMU... done
INFO     alpy.qemu              Accept connection from QEMU to QMP monitor...
INFO     alpy.qemu              Accept connection from QEMU to QMP monitor... done
INFO     alpy.node              Create nodes...
INFO     alpy.node              Create nodes... done
INFO     alpy.console           Connect to console...
INFO     alpy.console           Connect to console... done
INFO     alpy.utils             Enter test environment
INFO     __main__               Start iperf3 server on node 1...
INFO     __main__               Start iperf3 server on node 1... done
INFO     alpy.qemu              Start virtual CPU...
INFO     alpy.qemu              Start virtual CPU... done
INFO     alpine                 Wait for the system to boot...
INFO     alpine                 Wait for the system to boot... done
INFO     alpine                 Login to the system...
INFO     alpine                 Login to the system... done
INFO     alpy.remote_shell      Type in script configure-rabbit...
INFO     alpy.remote_shell      Type in script configure-rabbit... done
INFO     alpy.remote_shell      Run script configure-rabbit...
INFO     alpy.remote_shell      Run script configure-rabbit... done
INFO     __main__               Start iperf3 client on node 0...
INFO     __main__               Measure rate...
INFO     __main__               Measure rate... done
INFO     __main__               Parse iperf3 report...
INFO     __main__               Parse iperf3 report... done
INFO     __main__               Start iperf3 client on node 0... done
INFO     alpine                 Initiate system shutdown...
INFO     alpine                 Initiate system shutdown... done
INFO     alpy.qemu              Wait until the VM is powered down...
INFO     alpy.qemu              Wait until the VM is powered down... done
INFO     alpy.qemu              Wait until the VM is stopped...
INFO     alpy.qemu              Wait until the VM is stopped... done
INFO     __main__               Rate received, bits per second: 976321
INFO     __main__               Check rate...
INFO     __main__               Check rate... done
INFO     alpy.utils             Exit test environment with success
INFO     alpy.console           Close console...
INFO     alpy.console           Close console... done
INFO     alpy.qemu              Quit QEMU...
INFO     alpy.qemu              Quit QEMU... done
INFO     alpy.utils             Test passed

The tests for the Rabbit device share a lot of code, so the shared code is organized as a library called carrot.

Features

The simplest Docker-to-QEMU network connection

Nothing in the middle: no bridges, no veth pairs, no NAT, and so on.

Each layer 2 frame emitted is delivered unmodified, reliably.

Reliable packet capture

Each frame is captured reliably thanks to the QEMU filter-dump feature.
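
For reference, filter-dump is attached to a guest network device on the QEMU command line, roughly as follows (the ids, interface name and file name are placeholders):

-netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
-device virtio-net-pci,netdev=net0 \
-object filter-dump,id=dump0,netdev=net0,file=link0.pcap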

First-class Docker container support

Alpy follows and encourages a single-process-per-container design.

Logging

Test logs are easy to configure and customize. Alpy consistently uses the Python logging module.

Alpy collects the serial console log in binary form as well as in escaped text form.
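
Because alpy logs through the standard logging module, a test script configures output in the usual way. For example, a configuration similar to the sample output above (the exact format string is an assumption, not something alpy requires):

import logging

logging.basicConfig(
    format="%(levelname)-8s %(name)-22s %(message)s",
    level=logging.INFO,
)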

No trash left behind

Alpy cleans up after itself:

  • processes stopped, with exit codes and logs collected,

  • files and directories unmounted,

  • temporary files removed,

  • sockets closed,

  • interfaces removed…

… reliably.

No root required

Run as a regular user.

API documentation

The documentation is published on the GitLab Pages of your GitLab project (if GitLab Pages is enabled on your GitLab instance). For example, the upstream project documentation lives at https://abogdanenko.gitlab.io/alpy.

Alpy API documentation is generated using Sphinx. To generate the HTML API documentation locally, install the Sphinx package and run the following command:

PYTHONPATH=. sphinx-build docs public

To view the generated documentation, open public/index.html in a browser.

Network design

The appliance being tested is referred to as a device under test or DUT.

The DUT communicates with containers attached to each of its network links.

Guest network adapters are connected to the host via tap devices (Figure 1):

+-----QEMU hypervisor------+
|                          |   +-------------+
| +-----Guest OS-----+     |   |             |
| |                  |     |   |  docker     |
| | +--------------+ |     |   |  container  |
| | |              | |     |   |  network    |
| | |  NIC driver  | |     |   |  namespace  |
| | |              | |     |   |             |
| +------------------+     |   |   +-----+   |
|   |              |       |   |   |     |   |
|   | NIC hardware +---+-----------+ tap |   |
|   |              |   |   |   |   |     |   |
|   +--------------+   |   |   |   +-----+   |
|                      |   |   |             |
+--------------------------+   +-------------+
                       |
                       |
                       v
                 +-----------+
                 |           |
                 | pcap file |
                 |           |
                 +-----------+

Figure 1. Network link between QEMU guest and a docker container.

Each tap device lives in its own network namespace. This namespace belongs to a dedicated container, a node. The node’s purpose is to keep the namespace alive for the lifetime of a test.

For an application to communicate with the DUT, the application is containerized. The application container must be created in a special way: it must share a network namespace with one of the nodes.

Figure 2 shows an example where application containers app0 and app1 share a network namespace with node container node0. Application container app2 shares another network namespace with node2.

This sharing is supported by Docker. All we have to do is create the application container with the --network=container:NODE_NAME Docker option. For example, if we want to send traffic to the DUT via its first link, we create a traffic generator container with the Docker option --network=container:node0 (see the sketch after Figure 2 below).

+----QEMU---+   +------shared network namespace-----+
|           |   |                                   |
|           |   |    eth0                           |
|   +---+   |   |   +---+   +-----+ +----+ +----+   |
|   |NIC+-----------+tap|   |node0| |app0| |app1|   |
|   +---+   |   |   +---+   +-----+ +----+ +----+   |
|           |   |                                   |
|           |   +-----------------------------------+
|           |
|           |
|           |
|           |   +------shared network namespace-----+
|           |   |                                   |
|           |   |    eth0                           |
|   +---+   |   |   +---+   +-----+                 |
|   |NIC+-----------+tap|   |node1|                 |
|   +---+   |   |   +---+   +-----+                 |
|           |   |                                   |
|           |   +-----------------------------------+
|           |
|           |
|           |
|           |   +------shared network namespace-----+
|           |   |                                   |
|           |   |    eth0                           |
|   +---+   |   |   +---+   +-----+ +----+          |
|   |NIC+-----------+tap|   |node2| |app2|          |
|   +---+   |   |   +---+   +-----+ +----+          |
|           |   |                                   |
+-----------+   +-----------------------------------+

Figure 2. Application containers attached to the DUT links.
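
For example, with the Docker SDK for Python (the Docker Python API mentioned in the Description), creating such an application container might look like the sketch below. This is an illustration, not alpy's own API; the image, command and target address are placeholders.

import docker

client = docker.from_env()

# Run a traffic generator that shares node0's network namespace,
# the SDK equivalent of "docker run --network=container:node0 ...".
container = client.containers.run(
    "alpine:latest",
    ["ping", "-c", "3", "192.168.1.1"],
    network_mode="container:node0",
    detach=True,
)
container.wait()
print(container.logs().decode())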

Building a network of nodes

Network configuration operations are performed by temporary one-off Docker containers. The containers are created from a busybox image provided by the caller. For example, set:

busybox_image = "busybox:latest"
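
As an illustration only (not alpy's exact commands), a one-off configuration container might assign an address to a node's interface and then exit. The interface name and address below are placeholders.

import docker

client = docker.from_env()
busybox_image = "busybox:latest"

# One-off container that runs in node0's network namespace, configures
# eth0 and is removed when it exits.
client.containers.run(
    busybox_image,
    ["ip", "addr", "add", "192.168.1.1/24", "dev", "eth0"],
    network_mode="container:node0",
    cap_add=["NET_ADMIN"],
    remove=True,
)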

FAQ

How do I watch the serial console?

Use tail:

tail --follow name --retry console.log

The same command, but shorter:

tail -F console.log

How do I watch traffic on an interface?

Use tcpdump:

tail --bytes +0 --follow name --retry link0.pcap | tcpdump -n -r -

The same command, but shorter:

tail -Fc +0 link0.pcap | tcpdump -nr-

Can I use Wireshark to watch traffic on an interface?

Yes, you can:

tail --bytes +0 --follow name --retry link0.pcap | wireshark -k -i -

The same command, but shorter:

tail -Fc +0 link0.pcap | wireshark -ki-

How do I debug my program?

Use the Python debugger, pdb.
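
For example, run one of the example tests under pdb, or place a breakpoint() call in your test script:

python3 -m pdb rabbit/tests/rate-limit/main.py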

How do I enter a node's network namespace?

  1. Get the node's PID:

    pid="$(docker inspect --format '{{.State.Pid}}' node0)"
  2. Enter the node's network namespace using that PID:

    nsenter --net --target "$pid"

One-liner:

nsenter --net --target "$(docker inspect --format '{{.State.Pid}}' node0)"

A note about GitLab Container Registry

Many CI jobs use one of the custom images built in the “build-docker-images” stage. The images are stored in the GitLab Container Registry.

The images are pulled from locations specified by GitLab variables. By default, the variables point to the registry of the current GitLab project.

If you forked this project and the GitLab Container Registry is disabled in your project, override the variables at the project level so that the images are pulled from some other registry.

For example, set IMAGE_UBUNTU_LTS=registry.gitlab.com/abogdanenko/alpy/ubuntu-lts:latest.

