# SFKO

A utility for long-running, semi-random stress tests of Zenko.
```
usage: sfko [-h] [-c] [-w]

I am become death, destroyer of clusters.

optional arguments:
  -h, --help        show this help message and exit
  -c, --controller  Start sfko in controller mode
  -w, --worker      Start sfko in worker mode

If no options are provided, sfko will start in standalone mode
```
# Testing model
SFKO implements a randomized testing strategy. Test configuration is loosely defined in `Scenarios`, and the specifics are chosen at random at runtime.
### Scenarios
`Scenarios` consist of a list of `tests` and `checks` to execute, along with options defining the required resources.

A simple `Scenario`:
```yaml
- name: Write 0B
  required:
    buckets: 1
    objects:
      size: 0B
  tests:
    - put
  checks:
    - check-backend
```
A `Scenario` with all the bells and whistles:
```yaml
- name: Replicate 10M
  required:
    buckets:
      - replication:
          - *AWS
          - *GCP
    clouds:
      - *AWS
    objects:
      count: 10000
      size: 10M
  tests:
    - put-replication
  checks:
    - check-replication-mpu
```
#### Scenario options
```yaml
- name: Example Scenario
  required:
    buckets: 1             # buckets can be an integer specifying the number of buckets to create
    buckets:               # or a list, with each element describing a bucket
      - replication: True  # replication can be set as a boolean
      - replication:       # or as a list specifying possible backend clouds
          - *AWS
          - *GCP
    clouds:                # clouds specifies a list of backends to be used for created buckets
      - *AWS
```
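The `*AWS` and `*GCP` entries above are plain YAML aliases, so they need matching anchors defined earlier in the same YAML document. A minimal sketch of what those definitions might look like (the `backends` key and field names here are assumptions for illustration, not SFKO's documented schema):

```yaml
# Hypothetical anchor definitions -- SFKO's actual config layout may differ
backends:
  - &AWS
    name: aws-backend
    type: AWS
  - &GCP
    name: gcp-backend
    type: GCP
```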
### Tests and Checks
Functions that either drive the cluster or check its behavior are split into two groups: `Tests` and `Checks`.
#### Anatomy of a `Test` or `Check`
At its core, a `Test` or `Check` is simply a function that returns `True` or `False` based on success.
```python
@register_test('put')
def put_objects(bucket, objs):
    # Upload each generated object's content to its key
    for obj, data in objs:
        obj.upload_fileobj(data)
    return True
```
```python
@register_check('check-backend')
def check_backend(bucket_conf, objs):
    # Report success or failure of the backend verification
    if that_op_worked():
        return True
    return False
```
New `Tests` and `Checks` can be registered with the `register_test` and `register_check` decorators respectively. When called, each test or check is passed two objects: a `Bucket` instance and an `ObjectProxy` instance, which describe the generated bucket and the objects chosen for the test. Their attributes are listed below, along with illustrative usage sketches.
A `Bucket` instance has the following attributes:
```
name         # Name of the bucket
backend      # Instance of Backend, describes the chosen bucket backend
replication  # Instance of Backend, describes the chosen replication target
transient    # bool, whether transient source is enabled
expiration   # bool, whether lifecycle expiration is enabled
versioning   # bool, whether versioning is enabled
clouds       # List of cloud constants describing possible backend choices
client       # A high-level boto3 Bucket client
```
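Tests can branch on these attributes to skip configurations they don't apply to. A minimal sketch (the test name and body are hypothetical, not part of SFKO):

```python
@register_test('put-versioned')  # hypothetical test name, for illustration only
def put_versioned(bucket, objs):
    # Only exercise buckets generated with versioning enabled
    if not bucket.versioning:
        return True
    # Same iteration pattern as the built-in 'put' test above
    for obj, data in objs:
        obj.upload_fileobj(data)
    return True
```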
A `Backend` has the following attributes:
```
name    # Human-friendly name for this backend
type    # A constant describing this backend's type
bucket  # The cloud-side bucket for this backend
```
An `ObjectProxy` has the following attributes:
```
objects   # An iterator yielding a boto3 Object and an open file descriptor of its content
raw       # An iterator yielding a bucket name, key name, and file size
client    # A low-level boto3 client
resource  # A high-level boto3 resource
```
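Putting the pieces together, a check can verify objects through the low-level client. A minimal sketch, assuming `raw` yields `(bucket name, key name, size)` tuples as described above; the check name and pass/fail logic are hypothetical, though `head_object` is a real boto3 S3 client call:

```python
import botocore.exceptions

@register_check('check-objects-exist')  # hypothetical check name
def check_objects_exist(bucket, objs):
    for bucket_name, key, size in objs.raw:
        try:
            # Fetch object metadata without downloading the body
            resp = objs.client.head_object(Bucket=bucket_name, Key=key)
        except botocore.exceptions.ClientError:
            return False  # object is missing from the backend
        if resp['ContentLength'] != size:
            return False  # object exists but its size doesn't match
    return True
```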