A low-overhead sampling profiler for PySpark that outputs Flame Graphs
pyspark-flame
A low-overhead profiler for Spark on Python
Pyspark-flame hooks into Pyspark's existing profiling capabilities to provide a low-overhead stack-sampling profiler that outputs performance data in a format compatible with Brendan Gregg's FlameGraph Visualizer.
Because pyspark-flame hooks into Pyspark's profiling capabilities, it can profile the entire execution of an RDD across the whole cluster, and it provides RDD-level visibility into performance.
Unlike the cProfile-based profiler included with Pyspark, pyspark-flame uses stack sampling. It takes stack traces at regular (configurable) intervals, which allows its overhead to be low and tunable, and doesn't skew results, making it suitable for use in performance test environments at high volumes.
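To illustrate the stack-sampling technique (this is a minimal sketch, not pyspark-flame's actual implementation; the `sample_stacks` helper and its parameters are hypothetical), a sampler periodically captures each thread's call stack and counts how often each stack appears, rather than instrumenting every function call the way cProfile does:

```python
import sys
import time
from collections import defaultdict

def sample_stacks(duration=1.0, interval=0.2):
    """Illustrative stack sampler: every `interval` seconds, record the
    call stack of each running thread and count occurrences. The counts
    approximate where time is spent, with overhead proportional to the
    sampling rate rather than to the number of function calls."""
    counts = defaultdict(int)
    end = time.time() + duration
    while time.time() < end:
        # Snapshot the current frame of every thread in this process.
        for frame in sys._current_frames().values():
            stack = []
            while frame is not None:
                stack.append(frame.f_code.co_name)
                frame = frame.f_back
            # Root-first, semicolon-joined: the shape flame graphs expect.
            counts[";".join(reversed(stack))] += 1
        time.sleep(interval)
    return counts
```

Lengthening the interval lowers overhead at the cost of resolution, which is the tunability the paragraph above describes.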
Installation
pip install pyspark-flame
Usage
from pyspark_flame import FlameProfiler
from pyspark import SparkConf, SparkContext
conf = SparkConf().set("spark.python.profile", "true")
conf = conf.set("spark.python.profile.dump", ".")  # Optional - if not set, dumps to stdout at exit
sc = SparkContext(
    'local', 'test', conf=conf, profiler_cls=FlameProfiler,
    environment={'pyspark_flame.interval': 0.25}  # Optional - default is 0.2 seconds
)
# Do stuff with Spark context...
sc.show_profiles()
# Or maybe
sc.dump_profiles('.')
For convenience, flamegraph.pl is vendored in, so you can produce a flame graph with:
flamegraph.pl rdd-1.flame > rdd-1.svg
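The .flame files that flamegraph.pl consumes use Brendan Gregg's "collapsed stack" format: one line per unique call stack, with semicolon-separated frames (root first) followed by a sample count. A hypothetical file might look like (the function names here are illustrative, not output of any real job):

```
process;read_input;parse_line 42
process;transform;map_fn 183
process;transform;map_fn;regex_match 96
```

flamegraph.pl turns each line into a stack of boxes whose widths are proportional to the sample counts, so the widest towers in the resulting SVG are the hottest code paths.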