
A simple Python tool for validating the output of pseudo-random generators.

Project description

Random Test Tool (RTT)


Random Test Tool (also referred to as RTT) is a Python script designed for testing the randomness of sequences of integers or bits, with the aim of evaluating random number generators.

This project was primarily motivated by the following objectives:

  • Evaluate series of integers: RTT allows the assessment of series of integers produced either directly by a generator or by programs built on top of one.

  • Facilitate random outputs controls during security audits: RTT provides a structured set of instructions and assessment outputs to enhance the interpretation of statistical results obtained during security audits.

  • Simplify manipulation of test inputs, tests themselves and test outputs: The tool offers an easy-to-use Python implementation, enabling users to manipulate test inputs, outputs, and tests effortlessly.

:building_construction: Installation

Compatibility

Random Test Tool runs on Python 3!

It has minimal dependencies, all of which can be installed with the commands below.

Install through repository

git clone https://github.com/xmco/random-test-tool
cd random-test-tool
pip install -r requirements.txt

Install through package manager

We are working on updating the project to offer a seamless and straightforward installation through the pip package manager (https://pypi.org), stay tuned :slightly_smiling_face:

:arrow_forward: Usage

Running Random Test Tool

Random Test Tool can be executed using the following commands:

python random_test_tool.py -i <file_path>
python random_test_tool.py -d <dir_path>

For example:

python random_test_tool.py -i random_generator_samples/python_random_integer/20230816-105301_RANDOM_NUMBERS.txt

Random Test Tool supports three input formats:

  1. "bitstring" series, i.e., a sequence of 0s and 1s:
01010111010011001...
  2. list of integers starting at 0 or 1, separated by commas, spaces, semicolons or line breaks:
25
12
1
4
  3. "bytestring" series, i.e., a sequence of raw bytes.
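As an illustration, all three formats can be produced with a short Python sketch. The file names and the use of Python's standard `random` module are purely illustrative, not part of the tool:

```python
import random

random.seed(0)  # fixed seed, for reproducibility of this example only
n = 1000

# 1. "bitstring": a single line of 0s and 1s
with open("sample_bits.txt", "w") as f:
    f.write("".join(random.choice("01") for _ in range(n)))

# 2. integers separated by line breaks (commas, spaces or semicolons also work)
with open("sample_ints.txt", "w") as f:
    f.write("\n".join(str(random.randint(0, 255)) for _ in range(n)))

# 3. "bytestring": raw bytes (random.randbytes requires Python 3.9+)
with open("sample_bytes.bin", "wb") as f:
    f.write(random.randbytes(n))
```

Each resulting file can then be passed to the tool with `-i`.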

Random Test Tool is capable of testing multiple files in a row:

python random_test_tool.py -i test_file_1.txt test_file_2.txt

Additionally, it can test all files within a directory:

python random_test_tool.py -d test_files

Outputs

By default, Random Test Tool returns results in the terminal.

You can use the -o option to specify a return file or generate graphs.

Other options

For a comprehensive understanding of available options, use the following command:

python random_test_tool.py -h 

Script testing the randomness of a series of integers or bits via statistical tests.

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT_FILES [INPUT_FILES ...], --input_files INPUT_FILES [INPUT_FILES ...]
                        List of files to test.
  -d INPUT_DIR, --input_dir INPUT_DIR
                        Input directory, statistical_tests will be launched on each file.
  -o {terminal,file,graph,all}, --output {terminal,file,graph,all}
                        Output report options.
  -j {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31}, --n_cores {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31}
                        Number of processes used, 1 by default, maximum 31
  -t [STATISTICAL_TESTS ...], --test [STATISTICAL_TESTS ...]
                        Specifies which statistical_tests to launch. By default all statistical_tests are launched.
  -dt {int,bits,bytes}, --data_type {int,bits,bytes}
                        Used to select data type of sample, by default integer (int)
  -s {\n, ,,,;}, --separator {\n, ,,,;}
                        Separator used for integer files.
  -ll {ALL,DEBUG,INFO,WARN,ERROR,FATAL,OFF,TRACE}, --log_level {ALL,DEBUG,INFO,WARN,ERROR,FATAL,OFF,TRACE}
                        Log level (default: INFO).


:school_satchel: Interpretation of the generated results

The tests in this tool each compute a p-value.

The p-value represents the probability of obtaining a distribution at least as extreme as the one observed, assuming the source is truly random.
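As an illustration, here is a minimal, self-contained sketch of how such a p-value can be computed for the monobit (frequency) test. It follows the standard NIST SP 800-22 formula and is independent of the tool's actual implementation:

```python
import math

def monobit_p_value(bits: str) -> float:
    """NIST frequency (monobit) test: p-value for the balance of 0s and 1s."""
    n = len(bits)
    # Map 0 -> -1, 1 -> +1 and sum; a balanced sequence sums to ~0
    s = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A perfectly balanced sequence yields p = 1.0 ...
print(monobit_p_value("01" * 500))   # 1.0
# ... while a constant sequence yields p ~ 0, failing the 0.01 threshold
print(monobit_p_value("1" * 1000))
```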

We compare this p-value to a threshold (0.01). If the p-value is lower than this threshold, the observed behavior had less than a 1% probability of occurring with a truly random source, and the test is considered a failure.

With this threshold (p-value < 0.01), even a perfectly random sample is expected to fail the test about 1% of the time.

To account for this behavior, the recommended use of Random Test Tool is:

  1. Run it on multiple samples from the same source;
  2. Consider a test to fail if it fails on a sufficient number of samples compared to the expected number (1% of samples in the example).

For example, if we run the tests on 100 samples and the binary rank test fails 12 times, as 12% > 1% (threshold of 0.01), we can consider that the source fails the test in question.

Conversely, if there are only 2 failures out of 100, the test would be considered a success.
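The reasoning above can be checked with a simple binomial computation (a standalone sketch, not part of the tool): with a 0.01 threshold, the number of failures among n independent samples of a truly random source follows a Binomial(n, 0.01) distribution.

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): chance of seeing k or more
    failures among n samples if the source were perfectly random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 12 failures out of 100 is essentially impossible by chance alone
print(prob_at_least(12, 100, 0.01))  # ~5e-10 -> reject the source
# ... whereas 2 failures out of 100 is entirely plausible
print(prob_at_least(2, 100, 0.01))   # ~0.26 -> no evidence against the source
```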

:arrow_upper_right: :arrow_lower_right: Comparison with Dieharder, NIST Test Suite and TestU01

The primary motivation for implementing Random Test Tool was to provide the community with a highly user-friendly tool that is easy to manipulate, delivers clear and explicit results, and requires minimal configuration.

There are, of course, other tools that implement statistical tests:

Implemented statistical tests comparison

The table below lists the implemented statistical tests by Tool.

Statistical test Diehard NIST Test Suite Dieharder TestU01 Random Test Tool
Monobit (Chi2) :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Frequency in block :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Run Test :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Longest run of Ones :white_check_mark: :white_check_mark: :white_check_mark:
Binary Rank :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
DFT :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Non-overlapping template matching :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Overlapping template matching :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Maurer test :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Lempel-Ziv :white_check_mark: :white_check_mark: :white_check_mark:
Linear Complexity :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Serial :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Approximate entropy :white_check_mark: :white_check_mark: :white_check_mark:
Cumulative Sums :white_check_mark: :white_check_mark: :white_check_mark:
Random excursions :white_check_mark: :white_check_mark: :white_check_mark: :white_check_mark:
Birthday Spacing :white_check_mark: :white_check_mark: :white_check_mark:
5-Permutation :white_check_mark: :white_check_mark: :white_check_mark:
OPSO/OQSO :white_check_mark: :white_check_mark: :white_check_mark:
DNA :white_check_mark: :white_check_mark: :white_check_mark:
Parking Lot :white_check_mark: :white_check_mark: :white_check_mark:
Minimum Distance :white_check_mark: :white_check_mark: :white_check_mark:
3-D Spheres :white_check_mark: :white_check_mark: :white_check_mark:
Craps :white_check_mark: :white_check_mark: :white_check_mark:
Squeeze :white_check_mark: :white_check_mark: :white_check_mark:
Other "Crush Tests" (multiple tests) :white_check_mark:
Other "BigCrush" tests (multiple tests) :white_check_mark:

It is important to note that TestU01 implements three test batteries: SmallCrush, Crush, and BigCrush. Due to the substantial number of tests within these batteries, not all of them are listed in the above table.

From a purely statistical perspective, TestU01 is currently the most comprehensive test suite available.

Usage comparison

The table below compares the previously discussed tools on criteria and features relevant to auditors (from an audit perspective rather than a statistical one).

The aim of this comparison is to help users identify the most relevant tool for their needs.

| Theme | Control | Dieharder | NIST Test Suite | TestU01 | Random Test Tool |
|---|---|---|---|---|---|
| Ease of installation | Via native operating system package manager | :white_check_mark: | :x: | :x: | :x: |
| " | Via non-native package manager | :x: | :x: | :x: | :white_check_mark: (pip) |
| " | Via distributed executable | :x: | :x: | :x: | :x: |
| " | Via code compilation | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| " | Simple configuration (no/few non-standard installation tasks) | :white_check_mark: | :white_check_mark: | :x: | :white_check_mark: |
| Sharing / Transparency | Open-source tool | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Documentation | Quality of installation documentation | :white_check_mark: | :x: | :heavy_check_mark: | :white_check_mark: |
| " | Quality of "quick start" / "out-of-the-box" usage documentation | :heavy_check_mark: | :x: | :x: | :white_check_mark: |
| " | Documentation usability quality (ease of search and presentation of topics) | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| " | Quality of mathematical documentation (precise description of statistical tests) | :heavy_check_mark: | :white_check_mark: | :heavy_check_mark: | :x: |
| Ease of use | Overall ease of use / intuitiveness | :heavy_check_mark: | :white_check_mark: | :heavy_check_mark: | :white_check_mark: |
| " | Automatable (e.g., no interactivity required) | :white_check_mark: | :x: | :white_check_mark: | :white_check_mark: |
| Inputs/Outputs | Inputs: ASCII binary | :white_check_mark: | :white_check_mark: | :heavy_check_mark: | :white_check_mark: |
| " | Inputs: Raw binary (files) | :white_check_mark: | :white_check_mark: | :heavy_check_mark: | :white_check_mark: |
| " | Inputs: Floating numbers [0; 1] | :x: | :x: | :white_check_mark: | :x: |
| " | Inputs: Integers / Range of integers | :x: | :x: | :x: | :white_check_mark: |
| " | Outputs: Results within the terminal | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| " | Outputs: Structured outputs | :white_check_mark: | :white_check_mark: | :heavy_check_mark: | :white_check_mark: |
| " | Outputs: Interpreted statistical results | :white_check_mark: | :x: | :white_check_mark: | :white_check_mark: |
| " | Outputs: Measurement and numerical results returned | :white_check_mark: | :white_check_mark: | :heavy_check_mark: | :white_check_mark: |
| Relevance of statistical tests | Completeness of tests algorithms and precision of the configuration | :heavy_check_mark: | :heavy_check_mark: | :white_check_mark: | :heavy_check_mark: |

Table caption

| Color | Qualification |
|---|---|
| :x: | Nonexistent / Difficult to identify / Complex |
| :heavy_check_mark: | Partially addressed |
| :white_check_mark: | Complete / Adequate |

Please note that the provided qualification represents a subjective perspective based on a few hours of usage/research for each tool, within the context of random-related technical audits.

It's important to highlight that no performance comparison was conducted between the tools.

:busts_in_silhouette: Contribute to this project!

Feedback, contributions and ideas are very welcome :slightly_smiling_face:!

Need some feature or encountering a bug?

Please open an issue describing the bug you encountered, and/or share any awesome ideas you may have related to the Random Test Tool project.

Steps for submitting code

  1. Fork the current repository.

  2. Write your feature. Please follow the PEP 8 coding style.

  3. Send a GitHub Pull Request on the develop branch. Contributions will be merged after a code review. Branches will be moved to main when required.

High level Todolist

  • Publish the project on pypi
  • Implement, enhance and complete unit-tests (based on the standard)
  • Implement an export feature including results and figures (HTML and/or Markdown)
  • Integrate additional statistical tests (starting with NIST)

:books: Related work

French blog posts

English blog posts

We intend to translate both of the above blog posts into English in the near future once our English blog is available :slightly_smiling_face:.

