# SPL2 Testing Framework

## Overview
The SPL2 Testing Framework enables running SPL2 tests locally (on any Splunk instance with the SPL2 orchestrator), remotely (using external cloud environments), or via the CLI.

- Cloud: uses the Search Service API.
- Splunk: uses the Splunk Search API (with SPL2 support).
- CLI: uses the internal `spl2-processor-cli` library. This option is not available for public use, as `spl2-processor-cli` is a Splunk-internal tool.
## Prerequisites
1. Install Python and Poetry
   - Ensure `python3.x` is available.
   - Install the testing framework in one of two ways:
     - Install `poetry` and execute the command `poetry install` to create a virtual environment and install the required dependencies.
     - OR install the library using pip:

       ```
       pip install spl2-testing-framework
       ```
2. Set the configuration. This can be done in a `spl2_test_config.json` file or via environment variables.

   Note: setting the configuration is necessary for running tests against a Splunk or cloud environment. For running tests using the CLI, no configuration is required; the only requirement is to have this library installed, as described below.

   `spl2_test_config.json` is a local file in the current working directory (the directory from which the tests are executed).
### Configuration for running tests using the cloud search client (Ingest processor)

- `cloud_instance` - address of the cloud host where the tests can be executed, e.g. `staging.scs.splunk.com`
- `tenant` - tenant to use for testing, e.g. `spl2-content`
- `bearer_token` - token used for authentication. To obtain the token, go to: `https://console.[cloud_instance]/[tenant]/settings`
### Configuration for running tests using the splunk search client (Splunk instance)

- `host` - address of the Splunk host where the tests can be executed, e.g. `localhost` or `https://10.202.35.219`
- `port` - port of the Splunk host where the tests can be executed, usually `8089`, but it can be different
- `user` - user to authenticate
- `password` - password to authenticate
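For example, a `spl2_test_config.json` for the Splunk search client might look like this. The values are placeholders, and the flat key layout is inferred from the option names above rather than taken from the framework's documentation:

```json
{
  "host": "localhost",
  "port": "8089",
  "user": "admin",
  "password": "<password>"
}
```

For the cloud search client, the file would instead carry the `cloud_instance`, `tenant`, and `bearer_token` keys.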
The same configuration can be set using environment variables (however, `spl2_test_config.json` has higher priority):

Cloud search client:

- `SPL2_TF_CLOUD_INSTANCE` => `cloud_instance`
- `SPL2_TF_TENANT` => `tenant`
- `SPL2_TF_BEARER_TOKEN` => `bearer_token`

Splunk search client:

- `SPL2_TF_HOST` => `host`
- `SPL2_TF_PORT` => `port`
- `SPL2_TF_USER` => `user`
- `SPL2_TF_PASSWORD` => `password`
3. Install `spl2-processor-cli` (Splunk internal tool)

   `spl2-processor-cli` can be installed using brew:

   ```
   brew install spl2-processor-cli
   ```

   Before installation, it may be necessary to authenticate to Artifactory by running:

   ```
   okta-artifactory-login -t generic
   ```
## Running tests

To run tests, execute the following command in the directory where the tests are located:

```
spl2_tests_run [cli|splunk|cloud]
```

Test discovery is recursive, so it is possible to run tests even from the root directory of the project.
It is possible to pass further options to the command; anything supported by pytest also works, e.g.:

- `-k "filter"` - run only tests whose name contains "filter"
- `-v[vv]` - show more verbose output
- `-n [auto|<number>]` - run tests in parallel; `auto` uses all available cores, `<number>` uses a specific number of cores. However, it is recommended to run tests in parallel mostly on the CLI, as parallelism on Splunk or cloud does not give a significant performance improvement.
- `-x` - stop on the first failure
- `--pdb` - enter the debugger on failure
- ... and much more, whatever is supported by pytest
Additionally, the following options are supported:

- `--test_dir` - directory where test files (`.test.json`, `.test.spl2`, `module.json`) are located. Defaults to the current working directory. Discovery is recursive.
- `--code_dir` - directory where SPL2 code modules (`.spl2`) are located. Defaults to the current working directory. Falls back to `--test_dir` if the module is not found. See "Separate code and test directories" for details.
- `--ignore_empty_strings` - ignore empty strings in the results
- `--ignore_additional_fields_in_actual` - ignore fields present in actual results but not in expected results (useful when actual results contain extra fields that should not affect the comparison)
- `--create_comparison_sheet` - create a comparison sheet in the `comparison_box_test` folder using actual and expected outputs (works only when running box tests)
- `--cli_bench` - when running tests with the CLI, this option enables benchmarking mode by adding the `-b` and `-n` flags to the `spl2-processor-cli` command. The value specifies the number of events to test (e.g., `--cli_bench=1000` adds `-b -n 1000` to the CLI command). In bench mode, all tests always succeed and the CLI output is printed to the logs, which is useful for performance testing and benchmarking. Only applicable when using `--type cli`.
- `--check_splunk_results` - validate box test output by sending events to a Splunk instance (using HEC) and querying them back, with support for CIM (Common Information Model) and TA (Technology Add-on) checks.

Note: The `pytest.ini.sample` file allows you to define command parameters. Just update the configuration, rename the file by removing the `.sample` extension, and execute the command.
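For instance, after renaming `pytest.ini.sample` to `pytest.ini`, its contents might resemble the following. This is a hypothetical example; the `addopts` values shown are placeholders, not defaults shipped with the framework:

```ini
[pytest]
addopts = --type cli --test_dir tests -o log_cli=true --log-cli-level=INFO
```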
## Run tests in an IDE [PyCharm]

It is also possible to run tests in PyCharm. To do this, it is necessary to set up a Run Configuration.

A sample configuration which may be used:

- Run configuration
  - Type: `Python test`
  - Module: `spl2_testing_framework.test_runner`
  - Parameters: `--type [cli | splunk | cloud] --test_dir /tests/resources -o log_cli=true --log-cli-level=INFO --verbose`
    - If the test dir is not specified, the current working directory will be used
    - Use `--code_dir` if SPL2 code modules are in a separate directory from the tests
    - If necessary, other pytest options can be added

Note: It is necessary to set "pytest" as the default test runner in the PyCharm settings.
## Separate code and test directories

By default, the framework expects SPL2 code modules (`.spl2` files) to live alongside their test files. The `--code_dir` option allows you to keep code and tests in separate directory trees.
### How it works

| What is searched | Primary directory | Fallback directory |
|---|---|---|
| Code modules (`.spl2`) | `--code_dir` | `--test_dir` |
| Test files (`.test.json`, `.test.spl2`) | `--test_dir` | `--code_dir` |

Both searches are recursive. If the primary directory has no results, the framework falls back to the other directory and logs the action.

When multiple files with the same name are found within a directory, a warning is logged and the lexicographically first match is used.
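The lookup behavior described above can be sketched roughly as follows. This is a simplified illustration under the stated rules (recursive search, fallback directory, lexicographically first match), not the framework's actual code; the function and parameter names are invented:

```python
from pathlib import Path
from typing import Optional


def find_module(name: str, primary: Path, fallback: Path) -> Optional[Path]:
    """Recursively look for `name` under `primary`, then under `fallback`.

    When several files share the same name within a directory tree, the
    lexicographically first path is chosen (the framework logs a warning
    in that situation).
    """
    for directory in (primary, fallback):
        # rglob is recursive; sorting gives the lexicographically first match
        matches = sorted(directory.rglob(name))
        if matches:
            return matches[0]
    return None
```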
### Example project layout

```
my-project/
├── src/                      # --code_dir
│   ├── network_traffic.spl2
│   └── dns_lookup.spl2
└── tests/                    # --test_dir
    ├── network_traffic.test.json
    ├── dns_lookup.test.json
    └── dns_lookup.test.spl2
```

Run with:

```
spl2_tests_run cli --test_dir tests --code_dir src
```
### Backward compatibility

When `--code_dir` is not specified (or both flags point to the same directory), the framework behaves exactly as before: code modules are found next to their test files via the same recursive search.

### Applies to all test types

The `--code_dir` option works for box tests, unit tests, and single SPL2 file execution.
## Executing a single SPL2 file

The framework also supports executing a single SPL2 file, printing the results to the command line as well as to a log file. This helps developers see the results of an SPL2 pipeline while they are developing it.

It accepts three additional parameters:

- `--template_file`
- `--sample_file`
- `--sample_delimiter`

The command executes the file given in the `--template_file` parameter. If `--sample_file` is provided, it reads samples from that file and separates them using `--sample_delimiter`. If `--sample_file` is not provided, it looks for the samples in the `module.json` file corresponding to the template file.

To run a single SPL2 file, execute the command:

```
single_spl2_file_run [cli|splunk|cloud]
```
It is possible to pass further options to the command; anything supported by pytest also works, e.g.:

- `--test_dir` - path in which the template file and `module.json` are available. If not provided, the current directory is searched for the template file and `module.json`.
- `--code_dir` - path where SPL2 code modules are located, if different from `--test_dir`. Falls back to `--test_dir` when not set. See "Separate code and test directories".
- `--template_file` - the SPL2 template file to execute
- `--sample_file` - a file containing all the samples required for the template file. If not provided, the samples are looked up in the `module.json` file corresponding to the template file.
- `--sample_delimiter` - separator for the samples provided in the sample file. If not provided, newline is used as the default separator.
- `--limit_tests` - limit the number of tests to execute from the unit, sample file, or `module.json`. If not provided, all are executed. Useful for quick testing of SPL2 files during development.
- ... and much more, whatever is supported by pytest
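The sample handling described above can be sketched as follows. This is a simplified illustration with invented names, not the framework's implementation; details such as dropping empty trailing entries are assumptions:

```python
from typing import List, Optional


def split_samples(text: str, delimiter: Optional[str] = None) -> List[str]:
    """Split the contents of a sample file into individual samples.

    If no delimiter is given, newline is used as the default separator,
    mirroring the documented behavior of --sample_delimiter. Entries that
    are empty or whitespace-only (e.g. from a trailing separator) are
    dropped here; whether the real tool does this is an assumption.
    """
    sep = delimiter if delimiter is not None else "\n"
    return [s for s in text.split(sep) if s.strip()]
```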
Note: The `pytest.ini.sample` file allows you to define command parameters. Just update the configuration, rename the file by removing the `.sample` extension, and execute the command.
## Performance check

It is possible to measure the execution time of an SPL2 pipeline, or to do more advanced time checks, using the flag:

- `--performance_check=time` - run basic time measurements; the execution time of the SPL2 pipeline is printed to stdout
- `--performance_check=detailed_time` - do more advanced time checks, which inject additional timestamps into the SPL2 pipeline

Running the `detailed_time` check creates a text file containing the SPL2 pipeline code with timestamps injected after every command (`|`). The content of this file is also printed to stdout.

These checks can be applied only to box tests, as the assertions used in unit tests may impact SPL2 pipeline performance.
## Check splunk results

When set (`--check_splunk_results=<option>`), box tests (after running the SPL2 pipeline) send the pipeline output to the configured Splunk instance and validate the results there, instead of only comparing in-memory expected vs. actual.

| Value | Behavior |
|---|---|
| CIM | For each output event: ingest via HEC, load back from Splunk, then check CIM fields from `expected_cim_fields.cim_fields` with `validate_compatibility(actual, expected)`. Skipped if the box test has no `cim_fields`. |
| TA | For each output event: ingest via HEC, load back from Splunk, then assert equality with the expected row for that index. |

For TA only, you can name top-level fields to drop from both the queried event and each `expected_destination_result` row by adding `ignore_fields_in_splunk_check`: an object mapping field name to a short reason. Use the same key under `test` in each entry in `*.test.json` (the same file as `expected_destination_result`):
```
"test": {
  "source": "...",
  "expected_destination_result": [ ... ],
  "ignore_fields_in_splunk_check": {
    "_raw": "Raw event differs after Splunk indexing",
    "_time": "Timestamp may vary due to ingestion latency"
  }
}
```

If the key (or the file) is missing, no fields are ignored. The key is used only when `--check_splunk_results=TA`; it is ignored for other modes and for the default in-memory box test assert.
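Conceptually, the TA comparison with ignored fields might look like the following sketch. The helper names are invented for illustration and are not the framework's actual code:

```python
from typing import Dict


def drop_ignored(event: Dict, ignore: Dict[str, str]) -> Dict:
    """Return a copy of `event` without the top-level fields listed in
    `ignore` (a mapping of field name -> reason, as in
    ignore_fields_in_splunk_check)."""
    return {k: v for k, v in event.items() if k not in ignore}


def ta_rows_match(actual: Dict, expected: Dict, ignore: Dict[str, str]) -> bool:
    """Equality check applied after dropping the ignored fields from both
    the event queried back from Splunk and the expected row."""
    return drop_ignored(actual, ignore) == drop_ignored(expected, ignore)
```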
### Configuration: `SPLUNK_INSTANCE` in `spl2_test_config.json`

A new top-level key configures the Splunk instance used by `--check_splunk_results`:
```json
{
  "SPLUNK_INSTANCE": {
    "ip": "<hostname-or-ip>",
    "port": 8088,
    "api_port": 8089,
    "username": "<splunk-user>",
    "password": "<splunk-password>",
    "index": "<index-name>",
    "hec_token": "<hec-token>"
  }
}
```
- `ip` - Splunk server host (e.g. `localhost` or a hostname).
- `port` - HEC port (e.g. `8088`).
- `api_port` - management/API port (e.g. `8089`) for the Search API and index creation.
- `username` / `password` - used for the Search API and index creation (HTTP Basic).
- `index` - index where HEC events are sent and then searched.
- `hec_token` - token for HEC (`Authorization: Splunk <token>`).

Do not commit real credentials. Prefer environment variables (below) or a local config file that is not in version control.
### Environment variables

Any `SPLUNK_INSTANCE` field can be set via the environment; config file values override these only when non-empty.

| Variable | Maps to |
|---|---|
| `SPL2_TF_SPLUNK_INSTANCE_IP` | `ip` |
| `SPL2_TF_SPLUNK_INSTANCE_PORT` | `port` |
| `SPL2_TF_SPLUNK_INSTANCE_API_PORT` | `api_port` |
| `SPL2_TF_SPLUNK_INSTANCE_USERNAME` | `username` |
| `SPL2_TF_SPLUNK_INSTANCE_PASSWORD` | `password` |
| `SPL2_TF_SPLUNK_INSTANCE_INDEX` | `index` |
| `SPL2_TF_SPLUNK_INSTANCE_HEC_TOKEN` | `hec_token` |
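The stated precedence (a non-empty config file value wins over the environment variable) could be sketched like this. The helper name is invented for illustration and is not part of the framework:

```python
import os


def resolve_setting(config: dict, key: str, env_var: str):
    """Return the effective value for a SPLUNK_INSTANCE setting.

    A non-empty value from the config file overrides the environment
    variable; otherwise the environment variable (if set) is used.
    """
    file_value = config.get(key)
    if file_value not in (None, ""):
        return file_value
    return os.environ.get(env_var)
```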
## Format all files

To format all files, run:

```
black spl2_testing_framework tests
```