A Service Function Chain (SFC) Traffic Scheduling Simulator

SFC TSS - Traffic Scheduling Simulator

SFC TSS - Service Function Chain (SFC) traffic scheduling simulator - is an Apache2-licensed, packet-level discrete-event simulator. SFC TSS simulates the SFC traffic scheduling problem as described in our paper "Letting off STEAM: Distributed Runtime Traffic Scheduling for Service Function Chaining"; see the paper for more information on RFC 7665 and the features of the simulator. SFC TSS simulates scenarios in compliance with RFC 7665, including link latencies and packet handling at the various SFC components (SFFs, SFIs, and servers).
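At its core, a packet-level discrete-event simulator keeps a time-ordered queue of events and processes them one at a time. The following is a generic, minimal sketch of that idea (our own toy code, not SFC TSS's internals):

```python
import heapq

# Minimal discrete-event loop (illustrative only -- not the sfctss implementation):
# events are (timestamp, sequence, callback) tuples ordered by time.
class MiniSim:
    def __init__(self):
        self.now = 0
        self._seq = 0       # tie-breaker so callbacks are never compared
        self._queue = []

    def schedule(self, delay, callback):
        self._seq += 1
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._queue)
            callback(self)

# Example: a packet arrives at t=10 and finishes processing 5 ticks later.
log = []
sim = MiniSim()
sim.schedule(10, lambda s: (log.append(("arrive", s.now)),
                            s.schedule(5, lambda s2: log.append(("done", s2.now)))))
sim.run(until=100)
# log == [("arrive", 10), ("done", 15)]
```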

If you use SFC TSS in your research, please cite our paper:

@inproceedings{bloecher2020steam,
  title={Letting off STEAM: Distributed Runtime Traffic Scheduling for Service Function Chaining},
  author={Blöcher, Marcel and Khalili, Ramin and Wang, Lin and Eugster, Patrick},
  booktitle={IEEE INFOCOM 2020 - IEEE Conference on Computer Communications},
  pages={824-833},
  doi = {10.1109/INFOCOM41043.2020.9155404},
  year={2020}
}

Getting Started

Install

To set up your environment, use either PyPy3 or Python 3. We highly recommend PyPy3.

pip install sfctss

Tested on Ubuntu 19.10 and macOS 10.15.

Experiment setup

SFC TSS provides the essential parts of an SFC traffic simulation and many options for configuring an experiment.

A minimal configuration requires the following steps:

import numpy as np
import random

import sfctss
from sfctss.scheduler.examples import LoadUnawareRoundRobinScheduler

rand = random.Random()
rand.seed(42) # seed the experiment

sim = sfctss.simulator.Sim(seed=rand.randint(0,1000000))


# create a link latency distribution that is used to connect between SFFs-SFIs
LATENCY_SFF_SFI = 1
sfctss.model.SFF.setup_latency_distribution(sim=sim, 
                                            id=LATENCY_SFF_SFI, 
                                            values=np.random.poisson(500, 5000)) # mean 500µs

# create a link latency distribution that is used to connect between SFFs-SFFs
LATENCY_SFF_SFF = 2
sfctss.model.SFF.setup_latency_distribution(sim=sim, 
                                            id=LATENCY_SFF_SFF,
                                            values=np.random.poisson(3000, 5000)) # mean 3000µs
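The idea behind a latency "provider" is that a distribution is registered once under an integer id as a list of pre-sampled values, and each transmission draws one value from it. A self-contained sketch of that concept (class and method names are ours, not the sfctss API):

```python
import random

# Illustrative latency-provider sketch: distributions are registered once
# under an integer id; every packet transmission draws one sample.
class LatencyProviders:
    def __init__(self, rng):
        self._rng = rng
        self._providers = {}

    def register(self, id, values):
        self._providers[id] = list(values)

    def draw(self, id):
        # one latency sample (in µs) for a single packet transmission
        return self._rng.choice(self._providers[id])

rng = random.Random(42)
providers = LatencyProviders(rng)
providers.register(1, [480, 500, 520])   # stand-in for np.random.poisson(500, 5000)
sample = providers.draw(1)
assert sample in (480, 500, 520)
```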

# initialize data structures, configure number of sf types
sfctss.model.SFI.init_data_structure(sim=sim, 
                                     number_of_sf_types=1, 
                                     latency_provider_sff_sfi=LATENCY_SFF_SFI)

# at least one SFF with a scheduler instance
scheduler_a = LoadUnawareRoundRobinScheduler(sim=sim,
                                             incremental=True, # schedule one step of a chain per scheduling attempt
                                             oracle=True) # scheduler has a global view (all sites)
sff_a = sfctss.model.SFF(sim=sim, 
                         scheduler=scheduler_a)

# at least one Server with a SFI that is connected to the SFF
server = sfctss.model.Server(sim=sim, 
                             processing_cap=120, 
                             cpu_policy=sfctss.model.ServerCpuPolicy.one_at_a_time)
server.add_sfi(of_type=1, 
               with_sff_id=sff_a.id)

# do the same for sff_b ...
scheduler_b = None # ...
sff_b = None # ...

# configure connections between SFFs
sfctss.model.SFF.setup_connection(sim=sim, 
                                  source_id=sff_a.id, 
                                  destination_id=sff_b.id,
                                  bw_cap=100000,
                                  latency_provider=LATENCY_SFF_SFF,
                                  bidirectional=True)           

# configure processing speed of sf types
# the rate gives the number of packets a sfi of this sf type can process in 1 s when using 1 cpu share
sfctss.model.SFI.setup_sf_processing_rate_per_1s(sim=sim, 
                                                 of_type=1, 
                                                 with_mu=100)

# create at least one packet generator (which could also replay a pcap)
wl_config = sfctss.workload.SyntheticWorkloadGenerator.get_default_config()
wl_gen = sfctss.workload.SyntheticWorkloadGenerator(sim=sim,
                                                    workload_rand=rand,
                                                    config=wl_config)
sim.register_packet_generator(packet_generator=wl_gen,
                              fetch_all=False)

# finally, start simulation
sim.run_sim(show_progress=True, # print progress on bash
            interactive=False, # no interactive mode
            max_sim_time=1500000, # we stop after 1.5s
            ui=False, # do not show bash ui
            stop_simulation_when_workload_is_over=True) # stop when the workload is done (or at max_sim_time, whichever comes first)

See example/main.py for a full example of how to use SFC TSS.

You may also want to create your own scheduler: simply subclass BaseScheduler. See the example schedulers in sfctss.scheduler.examples for reference.
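BaseScheduler's exact hooks are defined in the package itself; independent of that API, the core of a load-unaware round-robin policy (as in LoadUnawareRoundRobinScheduler) can be sketched like this (toy types and names are ours):

```python
from itertools import cycle

# Toy round-robin assignment over SFI instances of the requested SF type.
# This only illustrates the policy; a real scheduler subclasses
# sfctss's BaseScheduler and works on the simulator's data structures.
class ToyRoundRobin:
    def __init__(self, sfis_by_type):
        # one independent rotation per SF type
        self._cycles = {t: cycle(sfis) for t, sfis in sfis_by_type.items()}

    def pick_sfi(self, sf_type):
        # load-unaware: next instance in rotation, regardless of queue length
        return next(self._cycles[sf_type])

sched = ToyRoundRobin({1: ["sfi-a", "sfi-b"]})
picks = [sched.pick_sfi(1) for _ in range(4)]
# picks == ["sfi-a", "sfi-b", "sfi-a", "sfi-b"]
```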

You may also want to create a custom workload provider, such as a pcap replay. Your workload provider must implement WorkloadGenerator.
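Conceptually, a workload provider produces timestamped packets for some chain. A self-contained sketch with Poisson arrivals (the tuple shape and function name are ours, not the WorkloadGenerator interface):

```python
import random

def toy_workload(rng, rate_per_us, chain, count):
    """Yield (arrival_time_us, sfc_chain) tuples with exponential
    inter-arrival times, i.e., a Poisson arrival process."""
    t = 0.0
    for _ in range(count):
        t += rng.expovariate(rate_per_us)  # positive inter-arrival gap
        yield (t, chain)

rng = random.Random(7)
packets = list(toy_workload(rng, rate_per_us=0.01, chain=(1,), count=3))
assert len(packets) == 3
# arrival times are strictly increasing
assert packets[0][0] < packets[1][0] < packets[2][0]
```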

Full Example

Run the example experiment

./example/main.py --show-progress

or with more debugging output and statistics dumps activated

# show progress, write csv logs, activate some of the statistics
./example/main.py --show-progress --write-statistics output --statistics-server --statistics-polling-sfi --statistics-latency-cdf-buckets 50

# with more verbose bash ui
./example/main.py --show-ui

# debugging mode
./example/main.py -v --interactive 

The example provides more options...

usage: main.py [-h] [-v] [--sim-time SIM_TIME] [--no-workload-reloading] [--dry] [--show-progress] [--interactive] [--show-ui]
               [--write-statistics STATISTICS_FILENAME] [--statistics-overview] [--statistics-packets] [--statistics-server] [--statistics-polling-sfi]
               [--statistics-polling-sff] [--statistics-polling-server] [--statistics-polling-overview]
               [--statistics-polling-interval STATISTICS_POLLING_INTERVAL] [--statistics-latency-cdf-buckets STATISTICS_PACKETS_CDF_BUCKETS]
               [--dump-full-workload]

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         verbose output
  --sim-time SIM_TIME   set simulation time (in ns)
  --no-workload-reloading
                        if set, load full workload before simulation starts
  --dry                 do not run the simulation, but check that everything is functional
  --show-progress       shows the progress during running the simulation
  --interactive         run each simulation tick one after another
  --show-ui             shows a simple bash-ui when running the simulation
  --write-statistics STATISTICS_FILENAME
                        if filename is set, activate statistics
  --statistics-overview
                        activate overview statistics
  --statistics-packets  activate packet statistics
  --statistics-server   activate server statistics
  --statistics-polling-sfi
                        activate sfi polling statistics
  --statistics-polling-sff
                        activate sff polling statistics
  --statistics-polling-server
                        activate server polling statistics
  --statistics-polling-overview
                        activate overview polling statistics
  --statistics-polling-interval STATISTICS_POLLING_INTERVAL
                        set statistics polling interval (in ns)
  --statistics-latency-cdf-buckets STATISTICS_PACKETS_CDF_BUCKETS
                        activate cdf of packet latencies; set # of buckets for cdf, e.g., 50
  --dump-full-workload  dumps full workload (full packet dump)

Manual Installation / Contribute

Run one of the following lines

./bootstrap-deps-pypy.sh # recommended option
./bootstrap-deps.sh # fallback with standard Python

to set up your environment with either PyPy3 or Python 3. We highly recommend PyPy3.

Then activate your Python environment with one of the following lines

source env-pypy/bin/activate
source env/bin/activate

We welcome all kinds of contributions, including bug fixes and additional features.

Acknowledgement

This work has been co-funded by the Federal Ministry of Education and Research (BMBF) Software Campus grant 01IS17050, the German Research Foundation (DFG) as part of the projects B2 and C7 in the Collaborative Research Center (CRC) 1053 “MAKI” and DFG grant 392046569 (61761136014 for NSFC), and the EU H2020 program under grant ICT-815279 “5G-VINNI” and ERC grant FP7-617805 “LiveSoft”.
