Measure Python performance on a local machine.

Project description

Local Python performance measurement

This project aims to provide an easy-to-use package for measuring the performance of Python on any machine, in terms of CPU, multiprocessing and GPU (PyTorch with CUDA), and for verifying that the GPU is actually used.

Installation

Install the requirements in requirements.txt, then install the package:

pip install -r requirements.txt
pip install localperf

Usage

You can measure the performance of your machine in terms of CPU, multiprocessing and GPU (PyTorch only for now) by running the commands below.

Relevant arguments for visualization are:

  • --plot : plot the results (default: False)
  • --log_dir [log directory] : directory where the results are saved (default: no logging)
  • --image_dir [image directory] : directory where the images are saved (default: no image saving)
  • --no-progress : do not show the progress bar (shown by default)

CPU

[Image: CPU performance plot]

To measure the performance of your machine in terms of CPU, run the following command:

python -m localperf.cpu

Relevant arguments for the benchmark are:

  • --log_n_data [log n data] : maximum amount of data to benchmark (in log10 scale). Processing one unit of data is defined as summing the integers from 1 to 1000 with a for loop; this serves as the base unit of computation.
  • --n_measures [n measures] : number of measurements to run for each data size
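As an illustration of what such a benchmark measures (a minimal sketch, not the package's actual implementation), the base unit of computation described above can be timed over several measurements like this:

```python
import time
import statistics

def base_unit():
    """One unit of data: sum the integers from 1 to 1000 with a for loop."""
    total = 0
    for i in range(1, 1001):
        total += i
    return total

def measure(n_data, n_measures):
    """Mean wall-clock time to process n_data base units, over n_measures runs."""
    times = []
    for _ in range(n_measures):
        t0 = time.perf_counter()
        for _ in range(n_data):
            base_unit()
        times.append(time.perf_counter() - t0)
    return statistics.mean(times)

mean_time = measure(n_data=100, n_measures=5)
```

Repeating the measurement and averaging smooths out timing noise from the OS scheduler; the real tool additionally sweeps the data size on a log10 scale.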

Parallelization

[Image: parallelization performance plot]

To measure the performance of your machine in terms of parallelization, run the following command:

python -m localperf.parallel

Relevant arguments for the benchmark are:

  • --log_n_data [log n data] : maximum amount of data to benchmark (in log10 scale)
  • --log2_n_process [log2 n process] : maximum number of processes to benchmark with (in log2 scale)
  • --n_measures [n measures] : number of measurements to run for each data size
  • --lib [lib] : library to use for parallelization (default: joblib). Currently supported libraries are multiprocessing (mp), joblib (joblib) and ray (ray). For ray, install it with pip before running the benchmark.

Compare parallelization libraries

To compare the performances of the different libraries, run the following command:

python -m localperf.parallel_benchmark

This compares the performance of multiprocessing, joblib and ray. Relevant arguments are:

  • --log_n_data [log n data] : maximum amount of data to benchmark (in log10 scale)
  • --n_process [n process] : number of processes to use for the benchmark. By default, the number of CPUs reported by multiprocessing.cpu_count() is used.
  • --n_measures [n measures] : number of measurements to run for each data size

GPU (pytorch)

Install CUDA for pytorch

[Image: GPU (PyTorch) performance plot]

First, install PyTorch with CUDA following the instructions on the PyTorch website. If the code below prints True, PyTorch and CUDA are installed; that is a good sign your GPU can be used, but it is not a guarantee:

import torch
print(torch.cuda.is_available())

You may use the nvidia-smi command to check whether your GPU is recognized by the system:

nvidia-smi

[Image: nvidia-smi screenshot]

This gives you information about each GPU recognized by the system: its name and the VRAM in use. You can run the following command in a separate terminal to watch GPU usage while your code runs:

watch -n 0.1 nvidia-smi

Measure performance

To measure the performance of your machine in terms of GPU, run the following command:

python -m localperf.gpu_torch

Relevant arguments for the benchmark are:

  • --log_n_data [log n data] : maximum amount of data to benchmark (in log10 scale)
  • --n_measures [n measures] : number of measurements to run for each data size
  • --n_measures_gpu [n measures gpu] : number of measurements to run for each data size on the GPU. If not specified, the same number of measurements as on the CPU is used.
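A minimal sketch of a GPU timing loop, assuming PyTorch is installed (this is not the package's actual benchmark). One pitfall it illustrates: CUDA kernels launch asynchronously, so torch.cuda.synchronize() is needed before and after the timed region for the measurement to be meaningful:

```python
import time

try:
    import torch
except ImportError:
    torch = None  # PyTorch not installed; the sketch below still shows the pattern

def time_matmul(device, n=512, n_measures=5):
    """Mean wall-clock time of an n x n matrix multiplication on the given device."""
    x = torch.randn(n, n, device=device)
    if device != "cpu":
        torch.cuda.synchronize()  # wait for pending kernels before starting the clock
    t0 = time.perf_counter()
    for _ in range(n_measures):
        x @ x
    if device != "cpu":
        torch.cuda.synchronize()  # wait for the timed kernels to actually finish
    return (time.perf_counter() - t0) / n_measures

if torch is not None:
    print("cpu:", time_matmul("cpu"))
    if torch.cuda.is_available():
        print("cuda:", time_matmul("cuda"))
```

Without the synchronize calls, the timer would only measure how long it takes to enqueue the kernels, not to execute them.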

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

localperf-1.4.1.tar.gz (9.9 kB)

Uploaded Source

Built Distribution

localperf-1.4.1-py3-none-any.whl (13.8 kB)

Uploaded Python 3

File details

Details for the file localperf-1.4.1.tar.gz.

File metadata

  • Download URL: localperf-1.4.1.tar.gz
  • Size: 9.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.7

File hashes

Hashes for localperf-1.4.1.tar.gz
  • SHA256: ccaafc4576edb91c1fd65245b432ba5187353575f0b7b2e0c65101797bc85bc3
  • MD5: 8bbe35ef97be2bbd4b268b783d0882cc
  • BLAKE2b-256: e49dc3a961760710a45fe1a59aaa55832f36e3ff15c1974f4db4b8c0a4e98ce3


File details

Details for the file localperf-1.4.1-py3-none-any.whl.

File metadata

  • Download URL: localperf-1.4.1-py3-none-any.whl
  • Size: 13.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.7

File hashes

Hashes for localperf-1.4.1-py3-none-any.whl
  • SHA256: 8327d13e23c95011f5d080961c762fd458d657690a2ec47cc1a0611f9147aab1
  • MD5: 4c600fa25f9b9c78416d54ee12aef7c0
  • BLAKE2b-256: 42448cb9448db02442555e78a2a1645715c39244e88e9ba1e4ba0efc2e2340ef

