scalene: a high-performance CPU and memory profiler for Python

by Emery Berger


中文版本 (Chinese version)

About Scalene

Scalene is a high-performance CPU and memory profiler for Python that does a number of things that other Python profilers do not and cannot do. It runs orders of magnitude faster than other profilers while delivering far more detailed information.

  1. Scalene is fast. It uses sampling instead of instrumentation or relying on Python's tracing facilities. Its overhead is typically no more than 10-20% (and often less).
  2. Scalene is precise. Unlike most other Python profilers, Scalene performs CPU profiling at the line level, pointing to the specific lines of code that are responsible for the execution time in your program. This level of detail can be much more useful than the function-level profiles returned by most profilers.
  3. Scalene separates out time spent running in Python from time spent in native code (including libraries). Most Python programmers aren't going to optimize the performance of native code (which is usually either in the Python implementation or external libraries), so this helps developers focus their optimization efforts on the code they can actually improve. (A short illustration follows this list.)
  4. Scalene profiles memory usage. In addition to tracking CPU usage, Scalene also points to the specific lines of code responsible for memory growth. It accomplishes this via an included specialized memory allocator.
  5. Scalene produces per-line memory profiles, making it easier to track down leaks.
  6. Scalene profiles copying volume, making it easy to spot inadvertent copying, especially due to crossing Python/library boundaries (e.g., accidentally converting numpy arrays into Python arrays, and vice versa).
  7. NEW! Scalene now reports the percentage of memory consumed by Python code vs. native code.
  8. NEW! Scalene now highlights hotspots (code accounting for significant percentages of CPU time or memory allocation) in red, making them even easier to spot.
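
To make the Python-vs-native split in item 3 concrete, here is a small illustrative script (the file and function names are invented for this example and are not part of Scalene). Profiling it with Scalene would attribute most of the hand-written loop's time to the "Python" column and most of the vectorized NumPy call's time to the "native" column:

    import numpy as np

    def python_sum(values):
        # A pure-Python loop: nearly all of its time is spent in the interpreter (Python time).
        total = 0.0
        for v in values:
            total += v
        return total

    def native_sum(values):
        # np.sum executes inside NumPy's compiled code (native time).
        return np.sum(values)

    if __name__ == "__main__":
        data = np.random.uniform(0, 100, size=10**7)
        python_sum(data)
        native_sum(data)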

Installation

Homebrew (Mac OS X)

You can use Homebrew to install the full version of Scalene (with memory profiling). Instead of using pip as described below, just do this:

  % brew tap emeryberger/scalene
  % brew install --head libscalene

This will install a scalene script you can use (see below).

Linux (Ubuntu and others)

Scalene is also distributed as a pip package and works on Mac OS X and Linux platforms (including Ubuntu in Windows WSL2).

You can install it as follows:

  % pip install scalene

or

  % python -m pip install scalene

ArchLinux

NEW: You can now install the full Scalene library and script on Arch Linux via the AUR package. Use your favorite AUR helper, or manually download the PKGBUILD and run makepkg -cirs to build it. Note that this installation places libscalene.so in /usr/lib; adjust the usage instructions below accordingly.

Usage

The following command will run Scalene on a provided example program.

  % scalene test/testme.py
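
Here, test/testme.py is an example program in the Scalene repository. If you installed Scalene with pip and just want something to profile, any small script will do; the following is an illustrative stand-in (hypothetical, not the repository's actual test program), which you can run the same way (scalene busywork.py):

    # busywork.py -- a hypothetical script to profile, not part of Scalene
    def busy(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        print(busy(10**7))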

To see all the options, run with --help.

  % scalene --help
  usage: scalene [-h] [-o OUTFILE] [--profile-interval PROFILE_INTERVAL]
                 [--wallclock]
                 prog

  Scalene: a high-precision CPU and memory profiler.
              https://github.com/emeryberger/Scalene

  positional arguments:
    prog                  program to be profiled

  optional arguments:
    -h, --help            show this help message and exit
    -o OUTFILE, --outfile OUTFILE
                          file to hold profiler output (default: stdout)
    --profile-interval PROFILE_INTERVAL
                          output profiles every so many seconds.
    --wallclock           use wall clock time (default: virtual time)
    --cpu-only            only profile CPU time (default: profile CPU, memory, and copying)

Comparison to Other Profilers

Performance and Features

Below is a table comparing the performance of various profilers to scalene, running on an example Python program (benchmarks/julia1_nopil.py) from the book High Performance Python, by Gorelick and Ozsvald. All of these were run on a 2016 MacBook Pro.

    Profiler                  Time        Slowdown
    original program          6.71s       1.0x
    cProfile                  11.04s      1.65x
    Profile                   202.26s     30.14x
    pyinstrument              9.83s       1.46x
    line_profiler             78.0s       11.62x
    pprofile (deterministic)  403.67s     60.16x
    pprofile (statistical)    7.47s       1.11x
    yappi (CPU)               127.53s     19.01x
    yappi (wallclock)         21.45s      3.2x
    py-spy                    7.25s       1.08x
    memory_profiler           > 2 hours   >1000x
    scalene (CPU only)        6.98s       1.04x
    scalene (CPU + memory)    7.68s       1.14x

Beyond performance, the profilers differ in the features they offer: line-level profiling, CPU profiling (and whether time is measured as wall-clock time or CPU time), separating Python time from native time, memory profiling, working on unmodified code, and support for threads. How each profiler measures time:

    Profiler                  Wall clock vs. CPU time
    cProfile                  wall clock
    Profile                   CPU time
    pyinstrument              wall clock
    line_profiler             wall clock
    pprofile (deterministic)  wall clock
    pprofile (statistical)    wall clock
    yappi (CPU)               CPU time
    yappi (wallclock)         wall clock
    py-spy                    both
    memory_profiler           -
    scalene (CPU only)        both
    scalene (CPU + memory)    both

Output

Scalene prints annotated source code for the program being profiled and any modules it uses in the same directory or subdirectories. Here is a snippet from pystone.py, just using CPU profiling:

    benchmarks/pystone.py: % of CPU time = 100.00% out of   3.66s.
          	 |     CPU % |     CPU % |   
      Line	 |  (Python) |  (native) |  [benchmarks/pystone.py]
    --------------------------------------------------------------------------------
    [... lines omitted ...]
       137	 |     0.27% |     0.14% | def Proc1(PtrParIn):
       138	 |     1.37% |     0.11% |     PtrParIn.PtrComp = NextRecord = PtrGlb.copy()
       139	 |     0.27% |     0.22% |     PtrParIn.IntComp = 5
       140	 |     1.37% |     0.77% |     NextRecord.IntComp = PtrParIn.IntComp
       141	 |     2.47% |     0.93% |     NextRecord.PtrComp = PtrParIn.PtrComp
       142	 |     1.92% |     0.78% |     NextRecord.PtrComp = Proc3(NextRecord.PtrComp)
       143	 |     0.27% |     0.17% |     if NextRecord.Discr == Ident1:
       144	 |     0.82% |     0.30% |         NextRecord.IntComp = 6
       145	 |     2.19% |     0.79% |         NextRecord.EnumComp = Proc6(PtrParIn.EnumComp)
       146	 |     1.10% |     0.39% |         NextRecord.PtrComp = PtrGlb.PtrComp
       147	 |     0.82% |     0.06% |         NextRecord.IntComp = Proc7(NextRecord.IntComp, 10)
       148	 |           |           |     else:
       149	 |           |           |         PtrParIn = NextRecord.copy()
       150	 |     0.82% |     0.32% |     NextRecord.PtrComp = None
       151	 |           |           |     return PtrParIn

And here is an example with memory profiling enabled. Memory consumption over time is summarized by "sparklines", both for the program as a whole (at the top) and for each line.

    Memory usage: ▂▂▁▁▁▁▁▁▁▁▁▅█▅ (max: 1617.98MB)
    phylliade/test2-2.py: % of CPU time =  40.68% out of   4.60s.
           |    CPU % |    CPU % |  Net  | Memory usage   | Copy  |
      Line | (Python) | (native) |  (MB) | over time /  % | (MB/s)| [phylliade/test2-2.py]
    --------------------------------------------------------------------------------
         1 |          |          |       |                |       | import numpy as np
         2 |          |          |       |                |       | 
         3 |          |          |       |                |       | @profile
         4 |          |          |       |                |       | def main():
         5 |          |          |    92 | ▁▁▁▁▁▁▁▁▁  11% |       |     x = np.array(range(10**7))
         6 |    0.43% |   40.24% |   762 | ▁▁▄█▄      89% |   168 |     y = np.array(np.random.uniform(0, 100, size=(10**8)))
         7 |          |          |       |                |       | 
         8 |          |          |       |                |       | main()

Positive net memory numbers indicate total memory allocation in megabytes; negative net memory numbers indicate memory reclamation.

The memory usage sparkline and copy volume make it easy to spot unnecessary copying in line 6.
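
The copy arises because np.random.uniform already returns a NumPy array, so wrapping it in np.array allocates a second array of the same size and copies the data into it. A sketch of the fix (this paraphrases the profiled script, whose full source is not shown above):

    import numpy as np

    # Before: np.array(...) copies the large array that np.random.uniform just created.
    y = np.array(np.random.uniform(0, 100, size=(10**8)))

    # After: use the array returned by np.random.uniform directly -- no second allocation, no copy.
    y = np.random.uniform(0, 100, size=(10**8))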

Success Stories

If you use Scalene to successfully debug a performance problem, please add a comment to this issue!

Acknowledgements

Logo created by Sophia Berger.

