An easy way to run OpenCL kernel files

Project description

OpenCL Kernel Python Wrapper

Install

Requirements

  • OpenCL GPU hardware
  • numpy
  • cmake (if compiling from source)

Install from wheel

pip install pyoclk

or download a wheel from the releases page and install it manually

Compile from source

Clone this repo

Clone via HTTPS:

git clone --recursive https://github.com/jinmingyi1998/opencl_kernels.git

or via SSH:

git clone --recursive git@github.com:jinmingyi1998/opencl_kernels.git

Install

cd opencl_kernels
python setup.py install

DO NOT move this directory after installation

Usage

Kernel File

A file named add.cl:

kernel void add(global float *a, global float *out, int int_arg, float float_arg) {
    int x = get_global_id(0);
    if (x == 0) {
        printf(" accept int arg: %d, accept float arg: %f\n", int_arg, float_arg);
    }
    out[x] = a[x] * float_arg + int_arg;
}

Python Code

OOP Style

import numpy as np
import oclk

a = np.random.rand(100, 100).reshape([10, -1])
a = np.float32(a)
out = np.zeros(a.shape)
out = np.float32(out)

runner = oclk.Runner()
runner.load_kernel("add.cl", "add", "")

timer = oclk.TimerArgs(
    enable=True,
    warmup=10,
    repeat=50,
    name='add_kernel'
)
runner.run(
    kernel_name="add",
    input=[
        {"name": "a", "value": a, },
        {"name": "out", "value": out, },
        {"name": "int_arg", "value": 1, "type": "int"},
        {"name": "float_arg", "value": 12.34}
    ],
    output=['out'],
    local_work_size=[1, 1],
    global_work_size=a.shape,
    timer=timer
)
# check result
a = a.reshape([-1])
out = out.reshape([-1])
print(a[:8])
print(out[:8])

Function Style

import numpy as np
import oclk

a = np.random.rand(100, 100).reshape([10, -1])
a = np.float32(a)

out = np.zeros(a.shape)
out = np.float32(out)
oclk.init()
oclk.load_kernel("add.cl", "add", "")
r = oclk.run(
    kernel_name="add",
    input=[
        {"name": "a", "value": a, },
        {"name": "out", "value": out, },
        {"name": "int_arg", "value": 1, },
        {"name": "float_arg", "value": 12.34}
    ],
    output=['out'],
    local_work_size=[1, 1],
    global_work_size=a.shape
)
# check result
a = a.reshape([-1])
out = out.reshape([-1])
print(a[:8])
print(out[:8])
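
Per the run signature documented below, r is a List[np.ndarray] holding the copied-back outputs, presumably one array per name in output. A minimal sketch of reading it:

# r should contain one array per entry in output=['out']
out_from_gpu = r[0]
print(out_from_gpu.reshape([-1])[:8])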

Python API

load_kernel
def load_kernel(
    cl_file: str, kernel_name: str, compile_option: Union[str, List[str]]
) -> int: ...
  • cl_file can be an absolute or a relative path
  • kernel_name is the name of the kernel function in the .cl file
  • compile_option accepts strings such as -DMY_DEF=1; the -D prefix is required
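
For example, a kernel guarded by a preprocessor macro could be loaded like this (a minimal sketch; the scale.cl file and the SCALE and USE_HALF macros are hypothetical):

import oclk

oclk.init()
# compile_option takes a single option string ...
oclk.load_kernel("scale.cl", "scale", "-DSCALE=2")
# ... or, per the signature above, a list of option strings:
# oclk.load_kernel("scale.cl", "scale", ["-DSCALE=2", "-DUSE_HALF=0"])
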
release_kernel
def release_kernel(kernel_name: str) -> int: ...

Unloads a kernel from the context. Kernel names cannot be duplicated, so if you want to reload a kernel, you have to release it first.
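
A minimal release-and-reload sketch, reusing the add.cl kernel from the examples above:

import oclk

oclk.init()
oclk.load_kernel("add.cl", "add", "")
# ... run the kernel ...
oclk.release_kernel("add")  # frees the name "add" for reuse
oclk.load_kernel("add.cl", "add", "-DMY_DEF=1")  # reload, e.g. with different compile options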

run
def run(*, kernel_name: str,
        input: List[Dict[str, Union[int, float, np.ndarray]]],
        output: List[str],
        local_work_size: List[int],
        global_work_size: List[int],
        wait: bool = True,
        timer: Union[Dict, TimerArgs] = TimerArgsDisabled) -> List[np.ndarray]: ...
  • input: list of dicts describing the kernel arguments, in the same order as the kernel function's parameters
    • np.ndarray arguments must be contiguous arrays
    • constant (scalar) args:
      • Python float maps to C float
      • Python int maps to C long
      • or specify the C type explicitly with the "type" field; supported types:
        • [unsigned] int
        • [unsigned] long
        • float
        • double
  • output: list of argument names specifying which arrays are copied back from GPU buffers
  • local_work_size/global_work_size: lists of integers specifying the work sizes; local_work_size can be set to [-1], in which case nullptr is passed to clEnqueueNDRangeKernel (see the sketch after the example below)
  • wait: optional, default True; wait for the GPU to finish
  • timer: optional arguments to set up a timer for benchmarking kernels; a plain dict with the same fields also works, as sketched below
    • warmup: number of warmup runs before timing starts
    • repeat: number of timed runs; the reported result is the average, i.e. elapsed time / repeat
    • name: name of a global timer
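
A minimal sketch of the dict form of timer, assuming its keys mirror the TimerArgs fields:

import numpy as np
import oclk

a = np.float32(np.random.rand(10, 1000))
out = np.zeros(a.shape, dtype=np.float32)

oclk.init()
oclk.load_kernel("add.cl", "add", "")

# assumed equivalent to TimerArgs(enable=True, warmup=10, repeat=50, name="add_kernel")
timer = {"enable": True, "warmup": 10, "repeat": 50, "name": "add_kernel"}
oclk.run(
    kernel_name="add",
    input=[
        {"name": "a", "value": a},
        {"name": "out", "value": out},
        {"name": "int_arg", "value": 1, "type": "int"},
        {"name": "float_arg", "value": 12.34},
    ],
    output=["out"],
    local_work_size=[1, 1],
    global_work_size=a.shape,
    timer=timer,
)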

Example

import numpy as np
import oclk

# assumes a kernel add(a, b, int_arg, float_arg, c) has already been loaded
a = np.zeros([16, 16, 16], dtype=np.float32)
b = np.zeros([16, 16, 16], dtype=np.float32)
c = np.zeros([16, 16, 16], dtype=np.float32)
timer = oclk.TimerArgs(enable=True,
                       warmup=10,
                       repeat=100,
                       name='timer_name')
oclk.run(kernel_name='add',
         input=[
             {"name": "a", "value": a},
             {"name": "b", "value": b},
             {"name": "int_arg", "value": 1, "type": "int"},
             {"name": "float_arg", "value": 12.34},
             {"name": "c", "value": c}
         ],
         output=['c'],
         local_work_size=[1, 1, 1],
         global_work_size=a.shape,
         timer=timer)
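
As noted above, setting local_work_size=[-1] passes nullptr to clEnqueueNDRangeKernel so the driver picks the work-group size. A minimal sketch reusing the arrays from the example above:

# same call as above, but the driver chooses the local work size
oclk.run(kernel_name='add',
         input=[
             {"name": "a", "value": a},
             {"name": "b", "value": b},
             {"name": "int_arg", "value": 1, "type": "int"},
             {"name": "float_arg", "value": 12.34},
             {"name": "c", "value": c}
         ],
         output=['c'],
         local_work_size=[-1],
         global_work_size=a.shape)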

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

  • pyoclk-1.1.1-cp312-cp312-manylinux_2_28_x86_64.whl (731.8 kB): CPython 3.12, manylinux glibc 2.28+, x86-64
  • pyoclk-1.1.1-cp311-cp311-manylinux_2_28_x86_64.whl (732.0 kB): CPython 3.11, manylinux glibc 2.28+, x86-64
  • pyoclk-1.1.1-cp310-cp310-manylinux_2_28_x86_64.whl (729.8 kB): CPython 3.10, manylinux glibc 2.28+, x86-64
  • pyoclk-1.1.1-cp39-cp39-manylinux_2_28_x86_64.whl (730.8 kB): CPython 3.9, manylinux glibc 2.28+, x86-64
  • pyoclk-1.1.1-cp38-cp38-manylinux_2_28_x86_64.whl (730.5 kB): CPython 3.8, manylinux glibc 2.28+, x86-64
  • pyoclk-1.1.1-cp37-cp37m-manylinux_2_28_x86_64.whl (732.7 kB): CPython 3.7m, manylinux glibc 2.28+, x86-64
  • pyoclk-1.1.1-cp36-cp36m-manylinux_2_28_x86_64.whl (732.6 kB): CPython 3.6m, manylinux glibc 2.28+, x86-64
