
CuRT generates a specific number of requests per second (RPS) against a web service.

Project description

CuRT

Table of contents

  1. CuRT
    1. Introduction
    2. Requirements
    3. Get CuRT
    4. Configuration
    5. Using source code
    6. Maintenance

Introduction

Custom RPS Tester (CuRT) is a tool for sending a specified number of requests per second to a web service.

Requirements

  • Python version

    • Python 3
  • Modules

    • Pandas 1.5.2
    • Plotly 5.11.0
    • Requests 2.28.1

Get CuRT

pip install curt

Configuration

The CuRT.py file in the Git repository is only an example main file.

Example

There are two files in the ejemplos directory. The properties.py file defines example variables for CuRT.py (e.g. a host pointing to https://jsonplaceholder.typicode.com/posts), and the post.json file is a sample JSON payload for that host.

In this example, the main file is CuRT.py, which imports the properties.py file:

try:
    from resources import properties as p
except (ModuleNotFoundError, ImportError):
    print('Error: properties not found.')
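For reference, a minimal properties.py could look like the sketch below. The names HOST, HEADERS and DIRECTORY are taken from how CuRT.py uses them; the actual values in the repository's example may differ, and the DIRECTORY path here is a hypothetical placeholder:

```python
# resources/properties.py -- sketch of the example configuration.
HOST = 'https://jsonplaceholder.typicode.com'   # base URL of the service under test
HEADERS = {'Content-Type': 'application/json'}  # headers sent with every request
DIRECTORY = '/path/to/workdir'                  # hypothetical base path for reloading reports
```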

CuRT can be imported too:

from curt import dictionary_creator
from curt import threads_manager
from curt import reports
from curt import simple_rest

The HTTP methods to be used must be defined; requests are then made with the imported post_request function for POST requests and get_request for GET requests.
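For context, these two helpers are presumably thin wrappers over the requests library. The following is a sketch under that assumption, not CuRT's actual implementation:

```python
import requests


def post_request(url, payload, headers):
    # Send a POST with a JSON body and return the requests.Response object.
    return requests.post(url, json=payload, headers=headers)


def get_request(url, headers):
    # Send a plain GET and return the requests.Response object.
    return requests.get(url, headers=headers)
```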

Since the responses to each request are generated independently, they must be stored somewhere. In the example, a df_list is created and each completed request is appended to it. The dictionary_creator module contains a single function, which builds a dictionary from the start time, the end time, the JSON method used, and the response obtained. Putting this together, the declarations of the methods to be consumed would look something like this in CuRT.py:

import datetime as dt
import json

df_list = []


def post_ex():
    start = dt.datetime.now()
    # Method:
    json_method = "/posts"
    # Location of the request payload:
    request_file = 'resources/post.json'

    with open(request_file) as json_file:
        payload = json.load(json_file)

        # POST request
        response = simple_rest.post_request(p.HOST + json_method, payload, p.HEADERS)

        end = dt.datetime.now()

        df_list.append(dictionary_creator.new_dict(start, end, json_method, response))


def get_ex():
    start = dt.datetime.now()
    # Method:
    json_method = "/posts/1"

    # GET request
    response = simple_rest.get_request(p.HOST + json_method, p.HEADERS)

    end = dt.datetime.now()

    df_list.append(dictionary_creator.new_dict(start, end, json_method, response))
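For orientation, new_dict plausibly builds one result row per request along these lines. The Start, End and MethodName keys are inferred from the CSV columns the report generator reads back later; the Status field and its name are a guess, not taken from CuRT's source:

```python
def new_dict(start, end, json_method, response):
    # One row per request; 'Start', 'End' and 'MethodName' match the
    # columns the report generator reads back from the saved CSV.
    return {
        'Start': start,
        'End': end,
        'MethodName': json_method,
        'Status': response.status_code,  # hypothetical field name
    }
```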

CuRT consumes the methods of the indicated web service and then generates a report with the results. Since all the tooling for this is included in CuRT itself, it can also be used to generate reports from a previously saved .csv file. The example therefore has two report-generating functions: hacer_pruebas and generar_html.

The reports module contains only one function, called loadtest, which is the one that contains the main functionality of the tool: making the reports. So regardless of what the data source is, it is this method that must be consumed to generate the report.

The module in charge of the load testing itself is threads_manager. It contains a single function, start_threads, which needs three values: the duration of the test in seconds, the functions that consume the web service, and the number of requests per second.
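The actual scheduling logic lives inside the library, but one plausible shape for start_threads is sketched below, assuming one thread per request and a one-second pacing window. This is an illustration of the contract (duration, functions, RPS), not CuRT's real code:

```python
import threading
import time


def start_threads(length, functions, rps):
    """Fire `rps` calls per second, cycling through `functions`, for `length` seconds."""
    threads = []
    for _second in range(length):
        tick = time.monotonic()
        for i in range(rps):
            # Round-robin over the provided request functions.
            t = threading.Thread(target=functions[i % len(functions)])
            t.start()
            threads.append(t)
        # Sleep off the remainder of this one-second window.
        remaining = 1.0 - (time.monotonic() - tick)
        if remaining > 0:
            time.sleep(remaining)
    # Wait for all in-flight requests before returning.
    for t in threads:
        t.join()
```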

Putting these together, the functions look like this:

import os

import pandas as pd


def hacer_pruebas(length, test_name, dark_mode, base_dir, functions, rps):
    print('## Custom RPS Tester ##')
    print('# Tester #')
    start_time = dt.datetime.now()
    threads_manager.start_threads(int(length), functions, int(rps))
    end_time = dt.datetime.now()
    df = pd.DataFrame(df_list)
    report_dir = test_name + '_' + str(dt.datetime.now().timestamp())
    dir_name = base_dir + '/' + report_dir
    if not os.path.exists(dir_name):
        os.makedirs(dir_name)
    df.to_csv(dir_name + '/data.csv', index=False, encoding='utf-8-sig')
    df.sort_values(by='End', inplace=True)
    reports.loadtest(dark_mode=dark_mode, dir_name=dir_name, functions=functions, df=df,
                     start_time=start_time, end_time=end_time, testing=True)


def generar_html(dark_mode, dir_name, csv_location):
    print('## Custom RPS Tester ##')
    print('# Report generator #')
    print('##########################################################################')
    if dir_name == 'NULL':
        dir_name = os.getcwd()
    print('Loading file: ' + csv_location)
    df = pd.read_csv(csv_location, encoding='utf-8')
    print('Done.')
    print('##########################################################################')
    function_names = df['MethodName'].drop_duplicates().tolist()
    functions = []
    for function in function_names:
        functions.append(globals()[function])
    datetime_format = '%Y-%m-%d %H:%M:%S.%f'
    start_time = dt.datetime.strptime(df['Start'].min(), datetime_format)
    end_time = dt.datetime.strptime(df['End'].max(), datetime_format)
    df.sort_values(by='End', inplace=True)
    reports.loadtest(dark_mode=dark_mode, dir_name=dir_name, functions=functions, df=df,
                     start_time=start_time, end_time=end_time, testing=False)

The testing flag determines the name of the report file. For a new test run, the output file is index.html; for a report generated from previous data, it is index-[timestamp].html.

Finally, the main function defines which of the two is called, so there are two possibilities:

  • The loadtest is performed and then the report is generated:
def main():
    length = 20
    rps = 20
    test_name = 'Ejemplo'
    base_dir = 'Reports'
    functions = [post_ex, get_ex]
    dark_mode = True
    hacer_pruebas(length=length, test_name=test_name, dark_mode=dark_mode, base_dir=base_dir, functions=functions, rps=rps)
  • The report is made based on existing data:
def main():
    dir_name = p.DIRECTORY + '/CuRT/Reports/Ejemplo_1670629990.446974'
    csv_location = dir_name + '/data.csv'
    dark_mode = True
    generar_html(dark_mode=dark_mode, dir_name=dir_name, csv_location=csv_location)

Finally, the CuRT.py file is executed and the report is generated.

Using source code

Installation

Prerequisites

  • Install Python 3.
  • Add Python installation path to path environment variable.

Get started

  • Create a virtual environment (pipenv, virtualenv, etc.).
  • Activate the virtual environment.
  • Install required modules (requirements.txt):
pip install -r requirements.txt
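Given the module versions listed under Requirements, the requirements.txt file presumably pins something like the following (sketch; the repository's actual file is authoritative):

```
pandas==1.5.2
plotly==5.11.0
requests==2.28.1
```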

Maintenance

Current developers who maintain the code:

Project details


Download files


Source Distribution

curt-0.0.4.tar.gz (16.5 kB)

Uploaded Source

Built Distribution

curt-0.0.4-py3-none-any.whl (16.9 kB)

Uploaded Python 3

File details

Details for the file curt-0.0.4.tar.gz.

File metadata

  • Download URL: curt-0.0.4.tar.gz
  • Upload date:
  • Size: 16.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.5

File hashes

Hashes for curt-0.0.4.tar.gz:

  • SHA256: 1372f0d157c4485f817bc32b74a6b3a1ab40d5c2b0a9d560092d3dab684bb7b1
  • MD5: bd707619bd1e24b12f60630c752b2ff0
  • BLAKE2b-256: 2cb6dfda04743d783637843e7222596c324dce85de1ca8c8ce6887121ebaff71


File details

Details for the file curt-0.0.4-py3-none-any.whl.

File metadata

  • Download URL: curt-0.0.4-py3-none-any.whl
  • Upload date:
  • Size: 16.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.5

File hashes

Hashes for curt-0.0.4-py3-none-any.whl:

  • SHA256: b4327be606557ad2ac53ea5f3ec8b976c025452d9727c765efdb20ecceee5025
  • MD5: 30d08e83ba91bd1f27659cc9a1234c90
  • BLAKE2b-256: 0f0de03c274f1bb3e723ba6817270409fec76ed4674a7239184cfd9eb9505803

