
A simple Python tool for validating the output of pseudo-random generators.


Random Test Tool (RTT)


Random Test Tool (also referred to as RTT) is a Python script designed for testing the randomness of sequences of integers or bits. RTT serves the purpose of evaluating random number generators.

This project was primarily motivated by the following objectives:

  • Evaluate series of integers: RTT allows the assessment of series of integers produced either by programs built on top of generators or by the generators themselves.

  • Facilitate the control of random outputs during security audits: RTT provides a structured set of instructions and assessment outputs to enhance the interpretation of statistical results obtained during security audits.

  • Simplify manipulation of test inputs, tests themselves and test outputs: The tool offers an easy-to-use Python implementation, enabling users to manipulate test inputs, outputs, and tests effortlessly.

Installation

Compatibility

Random Test Tool runs in Python 3!

It has minimal dependencies, all of which can be installed with the commands below.

Install through package manager

Available here: https://pypi.org/project/random-test-tool/

pip install random-test-tool

Install through repository

git clone https://github.com/xmco/random-test-tool
cd random-test-tool
pip install -r requirements.txt

Usage

Running Random Test Tool

Random Test Tool can be executed using the following commands:

python random_test_tool.py -i <file_path>
python random_test_tool.py -d <dir_path>

For example:

python random_test_tool.py -i random_generator_samples/python_random_integer/20230816-105301_RANDOM_NUMBERS.txt

Random Test Tool supports three input formats (for the contents of files such as 20230816-105301_RANDOM_NUMBERS.txt):

  1. "bitstring" series, i.e., a sequence of 0s and 1s:

     01010111010011001...

  2. list of integers starting at 0 or 1, separated by commas, spaces, semicolons or line breaks:

     25
     12
     1
     4

  3. "bytestring" series, i.e., a sequence of raw bytes.
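A quick way to produce test inputs in the first two formats is to generate them with Python's standard random module. This is a sketch for experimentation; the file names are illustrative, not required by the tool:

```python
import random

rng = random.Random()  # pass a seed here for reproducible samples

# Format 1: a "bitstring" file, one long run of 0s and 1s.
with open("sample_bits.txt", "w") as f:
    f.write("".join(rng.choice("01") for _ in range(10_000)))

# Format 2: integers separated by line breaks (one of the accepted separators).
with open("sample_ints.txt", "w") as f:
    f.write("\n".join(str(rng.randrange(100)) for _ in range(10_000)))
```

The resulting files can then be passed to the tool, e.g. `python random_test_tool.py -i sample_bits.txt -dt bits`.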

Random Test Tool is capable of testing multiple files in a row:

python random_test_tool.py -i test_file_1.txt test_file_2.txt

Additionally, it can test all files within a directory:

python random_test_tool.py -d test_files

Outputs

By default, Random Test Tool returns results in the terminal.

You can use the -o option to specify a return file or generate graphs.

Other options

For a comprehensive understanding of available options, use the following command:

python random_test_tool.py -h 

Script testing the randomness of a series of integers or bits via statistical tests.

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_FILES [INPUT_FILES ...], --input_files INPUT_FILES [INPUT_FILES ...]
                        List of files to test.
  -d INPUT_DIR, --input_dir INPUT_DIR
                        Input directory, statistical_tests will be launched on each file.
  -o {terminal,file,graph,all}, --output {terminal,file,graph,all}
                        Output report options.
  -j {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31}, --n_cores {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31}
                        Number of processes used, 1 by default, maximum 31
  -t [STATISTICAL_TESTS ...], --test [STATISTICAL_TESTS ...]
                        Specifies which statistical_tests to launch. By default all statistical_tests are launched.
  -dt {int,bits,bytes}, --data_type {int,bits,bytes}
                        Used to select data type of sample, by default integer (int)
  -s {\n, ,,,;}, --separator {\n, ,,,;}
                        Separator used for integer files.
  -ll {ALL,DEBUG,INFO,WARN,ERROR,FATAL,OFF,TRACE}, --log_level {ALL,DEBUG,INFO,WARN,ERROR,FATAL,OFF,TRACE}
                        Log level (default: INFO).

Interpretation of the generated results

The tests in this tool calculate a p-value.

The p-value represents the probability of obtaining a distribution at least as extreme as that observed.

We compare this p-value to a threshold (0.01). If the p-value is lower than this threshold, the probability that a truly random source would produce a distribution this extreme is below 0.01; in other words, we reject the hypothesis of randomness with 1 - 0.01 = 99% confidence.

Note, however, that with this threshold even a perfectly random sample is expected to fail the test about 1% of the time.

To account for this behavior, the recommended use of Random Test Tool is:

  1. Run it on multiple samples from the same source;
  2. consider the test failed if it fails on significantly more samples than the expected number (1% of samples in the example).

For example, if we run the tests on 100 samples and the binary rank test fails 12 times, as 12% > 1% (threshold of 0.01), we can consider that the source fails the test in question.

Conversely, if there are only 2 failures out of 100, the test would be considered a success.
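This reasoning can be made precise with the binomial distribution: with 100 samples and a 1% per-sample false-failure rate, seeing 12 or more failures by luck alone is vanishingly unlikely, while 2 failures are entirely plausible. A minimal sketch (not part of RTT) of that computation:

```python
from math import comb

def tail_prob(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k
    of n independent samples fail, if each fails with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 100 samples, each with a 1% chance of failing a given test by pure chance:
print(tail_prob(100, 12, 0.01))  # essentially zero -> the source is bad
print(tail_prob(100, 2, 0.01))   # ~0.26 -> 2 failures are unremarkable
```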

Comparison with Dieharder, NIST Test Suite and TestU01

The primary motivation for implementing Random Test Tool was to provide the community with a highly user-friendly tool that is easy to manipulate, delivers clear and explicit result data, and requires minimal configuration.

There are, of course, other tools that implement comparable statistical tests; the tables below compare them with Random Test Tool.

Implemented statistical tests comparison

The table below lists the statistical tests implemented by each tool.

Statistical test Diehard NIST Test Suite Dieharder TestU01 Random Test Tool
Monobit (Chi2) XX XX XX XX XX
Frequency in block XX XX XX XX
Run Test XX XX XX XX XX
Longest run of Ones XX XX XX
Binary Rank XX XX XX XX XX
DFT XX XX XX XX
Non-overlapping template matching XX XX XX XX
Overlapping template matching XX XX XX XX
Maurer test XX XX XX XX
Lempel-Ziv XX XX XX
Linear Complexity XX XX XX XX
Serial XX XX XX XX
Approximate entropy XX XX XX
Cumulative Sums XX XX XX
Random excursions XX XX XX XX
Birthday Spacing XX XX XX
5-Permutation XX XX XX
OPSO/OQSO XX XX XX
DNA XX XX XX
Parking Lot XX XX XX
Minimum Distance XX XX XX
3-D Spheres XX XX XX
Craps XX XX XX
Squeeze XX XX XX
Other "Crush Tests" (multiple tests) XX
Other "BigCrush" tests (multiple tests) XX

It is important to note that TestU01 implements three test batteries: SmallCrush, Crush, and BigCrush. Due to the substantial number of tests within these batteries, not all of them are listed in the above table.

From a purely statistical perspective, TestU01 is currently the most comprehensive test suite available.
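To give a flavour of what these tests compute, here is a minimal sketch of the simplest one, the frequency (monobit) test defined in NIST SP 800-22. This is an illustration written for this comparison, not RTT's implementation:

```python
from math import erfc, sqrt

def monobit_p_value(bits: str) -> float:
    """Frequency (monobit) test: checks whether the counts of 0s and 1s
    in the sequence are close enough to equal for a random source."""
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)  # +1 per one, -1 per zero
    return erfc(abs(s) / sqrt(2 * n))

# A perfectly balanced sequence passes; a constant sequence fails badly.
print(monobit_p_value("01" * 500))  # 1.0
print(monobit_p_value("1" * 1000))  # far below the 0.01 threshold
```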

Usage comparison

The table below compares the previously discussed tools against criteria and features chosen by auditors (from an audit perspective rather than a statistical one).

The aim of this comparison is to help users identify the most relevant tool for their needs.

| Theme | Control | Dieharder | NIST Test Suite | TestU01 | Random Test Tool |
|---|---|---|---|---|---|
| Ease of installation | Via native operating system package manager | XX | O | O | O |
| " | Via non-native package manager | O | O | O | XX (pip) |
| " | Via distributed executable | O | O | O | O |
| " | Via code compilation | XX | XX | XX | XX |
| " | Simple configuration (no/few non-standard installation tasks) | XX | XX | O | XX |
| Sharing / Transparency | Open-source tool | XX | XX | XX | XX |
| Documentation | Quality of installation documentation | XX | O | X | XX |
| " | Quality of "quick start" / "out-of-the-box" usage documentation | X | O | O | XX |
| " | Documentation usability quality (ease of search and presentation of topics) | X | O | X | X |
| " | Quality of mathematical documentation (precise description of statistical tests) | X | XX | X | O |
| Ease of use | Overall ease of use / intuitiveness | X | XX | X | XX |
| " | Automatable (e.g., no interactivity required) | XX | O | XX | XX |
| Inputs/Outputs | Inputs: ASCII binary | XX | XX | X | XX |
| " | Inputs: Raw binary (files) | XX | XX | X | XX |
| " | Inputs: Floating numbers [0; 1] | O | O | XX | O |
| " | Inputs: Integers / Range of integers | O | O | O | XX |
| " | Outputs: Results within the terminal | XX | XX | XX | XX |
| " | Outputs: Structured outputs | XX | XX | X | XX |
| " | Outputs: Interpreted statistical results | XX | O | XX | XX |
| " | Outputs: Measurement and numerical results returned | XX | XX | X | XX |
| Relevance of statistical tests | Completeness of test algorithms and precision of the configuration | X | X | XX | X |

Table caption

| Mark | Qualification |
|---|---|
| O | Nonexistent / Difficult to identify / Complex |
| X | Partially addressed |
| XX | Complete / Adequate |

Please note that the provided qualification represents a subjective perspective, based on a few hours of usage and research for each tool, within the context of random-related technical audits.

It is important to highlight that no performance comparison was conducted between the tools.

Contribute to this project!

Feedback, contributions and ideas are very welcome!

Need some feature or encountering a bug?

Please open an issue describing the bug you encountered, and/or share any awesome ideas you may have related to the Random Test Tool project.

Steps for submitting code

  1. Fork the current repository.

  2. Write your feature. Please follow the PEP 8 coding style.

  3. Send a GitHub Pull Request against the develop branch. Contributions will be merged after a code review. Branches will be merged into main when required.

High level Todolist

  • Implement, enhance and complete unit-tests (based on the standard)
  • Implement an export feature including results and figures (HTML and/or Markdown)
  • Integrate additional statistical tests (starting with NIST)

Related work

French blog posts

English blog posts

We intend to translate both of the above blog posts into English once our English blog is available.
