# pycompile
""" _ _
_ __ _ _ ___ ___ _ __ ___ _ __ (_) | ___
| '_ \| | | |/ __/ _ \| '_ ` _ \| '_ \| | |/ _ \
| |_) | |_| | (_| (_) | | | | | | |_) | | | __/
| .__/ \__, |\___\___/|_| |_| |_| .__/|_|_|\___|
|_| |___/ |_|
"""
A CLI tool for compiling Python source code using Cython or Nuitka.
Latest docs 📝
## Local development

For local development, run:

```shell
make setup-local-dev
```

To list all available make commands:

```shell
make help
```
## Compile

| Syntax | Description |
|---|---|
| `--input-path PATH` | By default, any test and `__init__.py` files are excluded. |
| `--clean-source` | Deletes the source files. |
| `--keep-builds` | Keeps the temp build files. |
| `--clean-executables` | Deletes the shared object (`.so`) files. |
| `--engine` | Can be `cython` or `nuitka`. |
| `--exclude-glob-paths` | Glob file patterns for excluding specific files. |
| `--verbose` | Increases log verbosity. |

```shell
pycompile -i your_python_files --clean-source --engine nuitka
```
Cython is the default compiler. To compile the examples, run:

```shell
pycompile -i input_path --engine cython
```

which by default deletes any temp build files and keeps the source files, or:

```shell
pycompile -i input_path --engine nuitka
```
After compilation, the input directory will have the following structure:

```text
examples
├── fib.py
├── fib.cpython-310-darwin.so
├── test_fib.py
```
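The module being compiled could look like the following (a hypothetical sketch; the actual example files ship with the repository, and the benchmark output below shows they call `fibonacci(30)`):

```python
# fib.py -- hypothetical sketch of an example module to compile
def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number.

    Naive recursion: a classic CPU-bound workload that benefits
    from being compiled to a C extension.
    """
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)


if __name__ == "__main__":
    print(fibonacci(30))
```

Once compiled, the resulting `.so` is imported exactly like the original `fib.py`, so callers such as `test_fib.py` need no changes.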
## Benchmark

| Syntax | Description |
|---|---|
| `--input-path PATH` | By default, any test and `__init__.py` files are excluded. |
| `--engine` | Can be `cython`, `nuitka`, `all`, or `none`. |
| `--type` | Can be `memory`, `cpu`, or `both`. |
| `--verbose` | Increases log verbosity. |
| `--profile_func_pattern TEXT` | Function name pattern for profiling; defaults to `benchmark`. |
To run a benchmark on the examples, use:

```shell
pycompile benchmark -i src/examples -vvv --engine cython
```

which by default starts a memory and a CPU benchmark, first with plain Python and then with Cython and Nuitka. The Python package must contain a `test_module.py`, because both benchmark types are invoked with `pytest` runs.
For memory profiling, the script decorates all the functions in `main.py` with the `profile` decorator from `memory-profiler`. This is not optimal memory profiling, because we don't actually profile the function itself; instead we profile the caller. It is necessary, however, if we also want to profile the compiled code.
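The caller-decoration idea can be sketched with the standard library alone. This is only an illustration of the approach, not pycompile's implementation (which uses `memory-profiler`'s line-by-line `profile` decorator):

```python
import functools
import tracemalloc


def measure_memory(func):
    """Report peak Python memory allocated while *func* runs.

    Decorating the caller, rather than each callee, means the
    measurement still works when the callees have been compiled
    to .so extensions that a line-by-line profiler cannot
    instrument.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        try:
            result = func(*args, **kwargs)
            _, peak = tracemalloc.get_traced_memory()
        finally:
            tracemalloc.stop()
        print(f"{func.__name__}: peak {peak / 1024:.1f} KiB")
        return result
    return wrapper


@measure_memory
def samples_benchmark():
    # stand-in for the calls profiled in the real examples
    return sum(i * i for i in range(100_000))
```

Note that `tracemalloc` only sees allocations made through Python's allocator, which is why the real tool reaches for `memory-profiler` instead.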
Memory benchmark using `3.10.9 (main, Feb 2 2023, 12:59:36) [Clang 14.0.0 (clang-1400.0.29.202)]`:

```text
Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     7     49.4 MiB     49.4 MiB           1   @profile
     8                                         def samples_benchmark():
     9    127.7 MiB     78.4 MiB           1       sum_of_squares()
    10    166.0 MiB     38.3 MiB           1       harmonic_mean()
    11    166.0 MiB      0.0 MiB           1       fibonacci(30)
    12    204.2 MiB     38.2 MiB           1       sum_numbers()
    13     57.7 MiB   -146.5 MiB           1       sum_strings()

46.03s call test_examples.py::test_examples
```
Memory benchmark using `cython`:

```text
Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     7     66.5 MiB     66.5 MiB           1   @profile
     8                                         def samples_benchmark():
     9    103.7 MiB     37.3 MiB           1       sum_of_squares()
    10    102.9 MiB     -0.8 MiB           1       harmonic_mean()
    11    102.9 MiB      0.0 MiB           1       fibonacci(30)
    12    104.9 MiB      2.0 MiB           1       sum_numbers()
    13     65.6 MiB    -39.3 MiB           1       sum_strings()

4.33s call test_examples.py::test_examples
```
Memory benchmark using `nuitka`:

```text
Line #    Mem usage    Increment  Occurrences   Line Contents
=============================================================
     7     71.6 MiB     71.6 MiB           1   @profile
     8                                         def samples_benchmark():
     9    148.8 MiB     77.1 MiB           1       sum_of_squares()
    10    186.5 MiB     37.7 MiB           1       harmonic_mean()
    11    186.5 MiB      0.0 MiB           1       fibonacci(30)
    12    225.1 MiB     38.6 MiB           1       sum_numbers()
    13    225.1 MiB      0.0 MiB           1       sum_strings()

3.45s call test_examples.py::test_examples
```
For CPU profiling the same approach is used, but instead of decorating the calling functions, it decorates the test cases with the `benchmark` fixture from `pytest-benchmark`.
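`pytest-benchmark` calls the function under test repeatedly and reports summary statistics like those in the tables below. A rough standard-library approximation of what those columns mean (an illustration only, not pycompile's or pytest-benchmark's code):

```python
import statistics
import time


def run_benchmark(func, rounds: int = 5) -> dict:
    """Time *func* over several rounds and compute the summary
    statistics that appear in a pytest-benchmark table."""
    timings = []
    for _ in range(rounds):
        start = time.perf_counter()
        func()
        timings.append(time.perf_counter() - start)
    mean = statistics.mean(timings)
    return {
        "min": min(timings),
        "max": max(timings),
        "mean": mean,
        "stddev": statistics.stdev(timings),
        "ops": 1.0 / mean,  # Operations Per Second = 1 / Mean
        "rounds": rounds,
    }


stats = run_benchmark(lambda: sum(i * i for i in range(50_000)))
```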
CPU benchmark using `3.10.9 (main, Feb 2 2023, 12:59:36) [Clang 14.0.0 (clang-1400.0.29.202)]`:

```text
------------------------------------------- benchmark: 1 tests ------------------------------------------
Name (time in s)        Min       Max      Mean    StdDev    Median       IQR  Outliers     OPS  Rounds  Iterations
---------------------------------------------------------------------------------------------------------
test_examples        3.9257    4.0640    3.9731    0.0605    3.9387    0.0917       1;0  0.2517       5           1
---------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
=================================================================================================================

29.40s call test_examples.py::test_examples
```
CPU benchmark using `cython`:

```text
------------------------------------------- benchmark: 1 tests ------------------------------------------
Name (time in s)        Min       Max      Mean    StdDev    Median       IQR  Outliers     OPS  Rounds  Iterations
---------------------------------------------------------------------------------------------------------
test_examples        4.4198    4.6645    4.4945    0.1048    4.4340    0.1376       1;0  0.2225       5           1
---------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
===================================================================================================================

31.80s call test_examples.py::test_examples
```
CPU benchmark using `nuitka`:

```text
------------------------------------------- benchmark: 1 tests ------------------------------------------
Name (time in s)        Min       Max      Mean    StdDev    Median       IQR  Outliers     OPS  Rounds  Iterations
---------------------------------------------------------------------------------------------------------
test_examples        3.2931    3.5091    3.4278    0.0972    3.4875    0.1571       1;0  0.2917       5           1
---------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
===================================================================================================================

24.02s call test_examples.py::test_examples
```
Hence, the following structure is required for the `benchmark` subcommand:

```text
module
├── sample_funcs.py       # implementation
├── main.py               # entrypoint
├── test_sample_funcs.py  # test cases
```
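Minimal contents for such a layout could look like the following sketch (hypothetical code; the function names are borrowed from the benchmark output above, the real examples live in the repository):

```python
# sample_funcs.py -- implementation
def sum_of_squares(n: int = 1_000) -> int:
    return sum(i * i for i in range(n))


def harmonic_mean(n: int = 1_000) -> float:
    return n / sum(1 / i for i in range(1, n + 1))


# main.py -- entrypoint; this caller is what pycompile decorates
# for the memory benchmark
def samples_benchmark():
    sum_of_squares()
    harmonic_mean()


# test_sample_funcs.py -- test cases picked up by the pytest runs
def test_sum_of_squares():
    assert sum_of_squares(3) == 0 + 1 + 4
```

The three parts are shown in one block for brevity; in a real package each comment marks a separate file.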
## Dry run

| Syntax | Description |
|---|---|
| `--input-path PATH` | By default, any test and `__init__.py` files are excluded. |
| `--exclude-glob-paths` | Glob file patterns for excluding specific files. |
| `--verbose` | Increases log verbosity. |

```shell
pycompile dry_run -i ./src
```