VEDA
VEDA (VE Driver API) and VERA (VE Runtime API)
VEDA and VERA are CUDA Driver API- and CUDA Runtime API-like APIs for programming the NEC SX-Aurora. Both are based on AVEO. Most of the functionality is identical to the CUDA Driver API and CUDA Runtime API.
Sitemap:
- Release Notes
- Differences between VEDA and CUDA Driver API
- Differences between VERA and CUDA Runtime API
- VEDA/VERA Unique Features
- SX-Aurora VE3
- Limitations/Known Problems
- How to build
- How to use
Release Notes
Version | Comment |
---|---|
v2.2 | Added an experimental feature to free a `VEDAdeviceptr` from within a kernel. This mechanism has some limitations: the `VEDAdeviceptr` needs to be allocated using delayed malloc. It doesn't work with non-delayed mallocs, as those might be registered to NEC MPI, which cannot be deregistered from within the device. |
v2.1.1 | |
v2.1.0 | |
v2.0.2 | |
v2.0.1 | |
v2.0.0 | |
v1.4.0 | |
v1.3.5 | |
v1.3.4 | |
v1.3.3 | |
v1.3.2 | |
v1.3.1 | |
v1.3.0 | |
v1.2.0 | |
v1.1.2 | |
v1.1.1 | |
v1.1.0 | |
v1.0.0 | First stable release. |
v0.10.6 | Maintenance release that fixes SegFaults when the context has been destroyed before freeing memory. `vedaMemFree` ignores calls if the context for the particular pointer has already been freed. BugFix for `VEDA_CONTEXT_MODE_SCALAR` if `VE_OMP_NUM_THREADS` is not set. |
v0.10.5 | Added `veda_omp_simd_reduce`. MemTrace only gets printed when the env var `VEDA_MEM_TRACE=1` is set. VEDA no longer overrides `VEORUN_BIN` if it has already been set by the user. Added LICENSE to installation target. |
v0.10.4 | Fixed identification of VE model. |
v0.10.3 | Filtering negative values from `VEDA_VISIBLE_DEVICES`. |
v0.10.2 | Corrected veda-smi RPATH to work without setting `LD_LIBRARY_PATH`. |
v0.10.1 | Added aveorun-ftrace. Can be activated using the `VEDA_FTRACE=1` env var. Renamed RPM packages to only include the major version in the package name, i.e. veda-0.10. |
v0.10.0 | Renamed and improved `VEDAmpiptr` to `VEDAptr`. Removed `VEDAdeviceptr->X` functions, as they are now part of `VEDAptr`. Added the veda-smi executable. |
v0.10.0rc5 | Added boundary checks for Memcopy and MemSet. Added `vedaArgsSetHMEM`. Added `veda_device_omp.h` parallelization primitives for C++. Added experimental `VEDAmpiptr` for easier usage with VE-MPI. Added/corrected some of the sensor readings, i.e. LLC cache, total device memory, ... |
v0.10.0rc4 | Increased VEDA offset limit to 128GB. Added `VEDAdeviceptr->X` functions in C++. Renamed `vedaArgsSetPtr` to `vedaArgsSetVPtr`. Added `vedaArgsSetPtr` to automatically translate `VEDAdeviceptr` to `void*`. Fixed `VEDA_VISIBLE_DEVICES` to obey NUMA mode. |
v0.10.0rc3 | Added AVEO symlinks. Fixed wrong include. |
v0.10.0rc2 | Fixed problem in `veda_types.h` when compiling with C. Linking against shared AVEO instead of static. |
v0.10.0rc1 | Fixed 0°C core temperatures. Added NUMA support: each NUMA node becomes a separate VEDAdevice. Added `vedaDeviceDistance(float**, VEDAdevice, VEDAdevice)` to determine the relationship between two VEDAdevices (0.0 == same device, 0.5 == same physical device but different NUMA node, 1.0 == different physical device). Added `vedaMemGetHMEMPointer(void**, VEDAdeviceptr)` to translate a VEDA pointer to an HMEM pointer. |
v0.9.5.2 | Bugfixes. |
v0.9.5.1 | Bugfixes. |
v0.9.5 | Bugfixes. |
v0.9.4 | Bugfixes. |
v0.9.3 | Bugfixes. |
v0.9.2 | Added FindMPI. Set all CMake vars as advanced. |
v0.9.1 | Added FindBLAS, FindLAPACK, FindASL and FindNCL to CMake. |
v0.9 | Enhanced VEDA CMake scripts to also support native NCC compilation. |
v0.8.1 | Updated AVEO. Using `VE_NODE_NUMBER` as fallback if `VEDA_VISIBLE_DEVICES` is not set. |
v0.8 | Implemented multi-stream support (experimental). Automatic setting of required env vars. |
v0.7.1 | Bugfix release. |
v0.7 | Initial VERA release. |
v0.6 | Initial VEDA release. |
Differences between VEDA and CUDA Driver API:
- [VEDA] In addition to `vedaInit(0)` at the beginning, `vedaExit()` needs to be called at the end of the application, to ensure that no dead device processes stay alive (see the sketch after this list).
- All function calls start with: [VEDA] `veda*` instead of `cu*` and [VERA] `vera*` instead of `cuda*`.
- Objects start with: [VEDA] `VEDA*` instead of `CU*` and [VERA] `vera*` instead of `cuda*`.
- VEDA supports asynchronous malloc and free: `vedaMemAllocAsync` and `vedaMemFreeAsync` can be used like the synchronous calls, but don't need to synchronize the execution between device and host.
- `vedaDeviceGetPower(float* power, VEDAdevice dev)` and `vedaDeviceGetTemp(float* tempC, const int coreIdx, VEDAdevice dev)` allow fetching the power consumption (in W) and the temperature (in °C).
- As the programming model of the SX-Aurora differs from NVIDIA GPUs, launching kernels looks different:

// Device Code -------------------------------------------------------------
extern "C" void my_function(float myFloat, uint8_t myUnsignedChar, float* array) {
	...
}

// C -----------------------------------------------------------------------
float myFloat;
uint8_t myUnsignedChar;

VEDAargs args;
vedaArgsCreate(&args);

// Scheme: vedaArgsSet[TYPE](args, [PARAM_INDEX], [VARIABLE]);
vedaArgsSetF32(args, 0, myFloat);
vedaArgsSetU8(args, 1, myUnsignedChar);

// Copy entire arrays as function parameter
float array[32];
vedaArgsSetStack(args, 2, array, VEDA_ARGS_INTENT_INOUT, sizeof(array));

VEDAmodule mod;
VEDAfunction func;
vedaModuleLoad(&mod, "mylib.vso");
vedaModuleGetFunction(&func, mod, "my_function");

// Kernel Call Version 1: allows to reuse the VEDAargs object
VEDAstream stream = 0;
vedaLaunchKernel(func, stream, args);
// args are not allowed to be destroyed before synchronizing!
vedaStreamSynchronize(stream);
vedaArgsDestroy(&args);

// Kernel Call Version 2: automatically destroys the VEDAargs object after execution (can't be reused for other calls!)
vedaLaunchKernelEx(func, stream, args, 1, 0);

// CPP ---------------------------------------------------------------------
vedaLaunchKernel(func, stream, myFloat, myUnsignedChar, VEDAstack(array, VEDA_ARGS_INTENT_INOUT, sizeof(array)));

- A `VEDAdeviceptr` needs to be dereferenced on the device side first:

// Host Code ---------------------------------------------------------------
VEDAdeviceptr ptr;
vedaMemAllocAsync(&ptr, sizeof(float) * cnt, 0);
vedaLaunchKernel(func, 0, ptr, cnt);
vedaMemFreeAsync(ptr, 0);

// Device Code -------------------------------------------------------------
void mykernel(VEDAdeviceptr vptr, size_t cnt) {
	float* ptr;
	vedaMemPtr(&ptr, vptr);
	for(size_t i = 0; i < cnt; i++)
		ptr[i] = ...;
}

- VEDA streams differ from CUDA streams. See the chapter "OMP Threads vs Streams" for more details.
- VEDA uses the env var `VEDA_VISIBLE_DEVICES` in contrast to `CUDA_VISIBLE_DEVICES`. The behavior of `VEDA_VISIBLE_DEVICES` is slightly different:
  - `VEDA_VISIBLE_DEVICES=` enables all devices, whereas `CUDA_VISIBLE_DEVICES=` disables all devices.
  - For enabling VEs in NUMA mode, use `{ID}.0` and `{ID}.1`.
  - `VEDA_VISIBLE_DEVICES` ids correspond to the VE hardware ids, while `CUDA_VISIBLE_DEVICES` corresponds to CUDA-specific ids.
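As a minimal end-to-end sketch of the init/exit lifecycle described above: `vedaInit`, `vedaExit`, `vedaCtxCreate`, `vedaDeviceGetPower` and `vedaDeviceGetTemp` are taken from this page; the `veda.h` header name and `vedaCtxDestroy` are assumptions.

#include <veda.h> // assumed header name

int main() {
	vedaInit(0); // must be the first VEDA call
	VEDAcontext ctx;
	vedaCtxCreate(&ctx, VEDA_CONTEXT_MODE_OMP, 0); // boot up device #0

	float power, tempC;
	vedaDeviceGetPower(&power, 0);   // power consumption in W
	vedaDeviceGetTemp(&tempC, 0, 0); // temperature of core #0 in °C

	// ... load modules, allocate memory, launch kernels ...

	vedaCtxDestroy(ctx); // assumed cleanup call
	vedaExit();          // required so that no dead device processes stay alive
	return 0;
}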
Differences between VERA and CUDA Runtime API:
- All function calls start with `vera*` instead of `cuda*`.
- Objects start with `vera*` instead of `cuda*`.
- VERA supports asynchronous malloc and free, see VEDA: `vedaMemAllocAsync` and `vedaMemFreeAsync` can be used like the synchronous calls, but don't need to synchronize the execution between device and host.
- `vedaDeviceGetPower(float* power, VEDAdevice dev)` and `vedaDeviceGetTemp(float* tempC, const int coreIdx, VEDAdevice dev)` allow fetching the power consumption (in W) and the temperature (in °C).
- As the programming model of the SX-Aurora differs from NVIDIA GPUs, launching kernels looks different (see VEDA above).
- Similar to the CUDA Runtime API, calls from VEDA and VERA can be mixed! A short sketch follows below.
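A hedged sketch of such mixing: the concrete `vera*` names and the `vera.h` header are not shown on this page and are assumed to mirror their `cuda*` counterparts per the naming rule above.

#include <vera.h> // assumed header name

float host[1024];
float* dev = nullptr;
veraMalloc((void**)&dev, sizeof(host));                      // assumed mirror of cudaMalloc
veraMemcpy(dev, host, sizeof(host), veraMemcpyHostToDevice); // assumed mirror of cudaMemcpy

// VEDA calls can be used within the same application:
float power;
vedaDeviceGetPower(&power, 0);

veraFree(dev); // assumed mirror of cudaFree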
VEDA/VERA Unique Features:
Delayed Memory Allocation
VEDA does not need to allocate memory from the host, but can do that directly from the device. For this, the host only needs to create an empty VEDAdeviceptr.
// Host Code ---------------------------------------------------------------
VEDAdeviceptr vptr;
vedaMemAllocAsync(&vptr, 0, 0);
vedaLaunchKernel(func, 0, vptr, cnt);
vedaMemcpyDtoHAsync(host, vptr, sizeof(float) * cnt, 0);
vedaMemFreeAsync(vptr, 0);
// Device Code -------------------------------------------------------------
void mykernel(VEDAdeviceptr vptr, size_t cnt) {
float* ptr;
vedaMemAllocPtr((void**)&ptr, vptr, cnt * sizeof(float));
for(size_t i = 0; i < cnt; i++)
ptr[i] = ...;
}
OMP Threads vs Streams (experimental):
In CUDA, streams can be used to create different execution queues, e.g. to overlap compute with memcopy. VEDA supports two stream modes, which differ from the CUDA behavior. The mode is defined by `vedaCtxCreate(&ctx, MODE, device)`.
- `VEDA_CONTEXT_MODE_OMP` (default): all cores get assigned to the default stream (= 0). This mode only supports a single stream.
- `VEDA_CONTEXT_MODE_SCALAR`: every core gets assigned to a different stream. This mode allows using each core independently with different streams. Use the function `vedaCtxStreamCnt(&streamCnt)` to determine how many streams are available (see the sketch below).
Both modes use the env var `VE_OMP_NUM_THREADS` to determine the maximal number of cores that get used. If the env var is not set, VEDA uses all available cores of the hardware.
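A minimal sketch of the scalar mode, using only calls from this page; `func` and the per-stream `args` objects are assumed to have been set up as shown in the kernel launch example:

VEDAcontext ctx;
vedaCtxCreate(&ctx, VEDA_CONTEXT_MODE_SCALAR, 0);

int streamCnt;
vedaCtxStreamCnt(&streamCnt); // one stream per core in scalar mode

// issue one kernel per stream/core
for(int stream = 0; stream < streamCnt; stream++)
	vedaLaunchKernelEx(func, stream, args[stream], 1, 0); // destroys args after execution

// wait for all streams to finish
for(int stream = 0; stream < streamCnt; stream++)
	vedaStreamSynchronize(stream);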
Advanced VEDA C++ Ptr
When you use C++, you can use the `VEDAptr<typename>`, which gives you more direct control over the `VEDAdeviceptr`, i.e. you can use `vptr.size()`, `vptr.device()`, ... . The `typename` is used to automatically determine the correct offsets when executing `vptr += offset;`.
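A small sketch: only `size()`, `device()` and the typed `+=` are documented above; constructing a `VEDAptr` from an existing `VEDAdeviceptr` is an assumption.

VEDAdeviceptr raw;
vedaMemAllocAsync(&raw, 128 * sizeof(float), 0);

VEDAptr<float> vptr(raw);         // assumed constructor
size_t     bytes = vptr.size();   // size of the allocation in bytes
VEDAdevice dev   = vptr.device(); // device that owns the allocation
vptr += 32;                       // advances by 32 * sizeof(float) bytes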
VEDA-NEC MPI integration
The VEO-aware NEC MPI (https://www.hpc.nec/forums/topic?id=pgmcA8) makes it much easier to implement hybrid VE applications. For this, so-called HMEM pointers have been introduced in VEO. Starting with v1.4.0, VEDA provides a new HMEM API: `vedaHMEM*`. See the following example:
VEDAhmemptr hmem;
vedaHMemAlloc(&hmem, size);
vedaHMemcpy(hmem, host_ptr, size);
mpi_send(hmem, ...);
NUMA Support
VEDA supports VE NUMA nodes since v0.10. To enable NUMA on your system, you need to execute the following (replace the `?` in `-N ?` with the specific device index):
VCMD="sudo /opt/nec/ve/bin/vecmd -N ?"
$VCMD vconfig set partitioning_mode on
$VCMD state set off
$VCMD state set mnt
$VCMD reset card
VEDA then recognizes each NUMA node as a separate device, i.e. with 2 physical devices in NUMA mode, VEDA would show 4 devices. You can use `VEDAresult vedaDeviceDistance(float* distance, VEDAdevice devA, VEDAdevice devB)` to determine the relationship of two VEDAdevices:
distance == 0.0; // same device
distance == 0.5; // same physical device, different NUMA node
distance == 1.0; // different physical device
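For example (the signature is taken from above; the device ids are illustrative):

float distance;
vedaDeviceDistance(&distance, 0, 1);
if(distance == 0.5f) {
	// devices 0 and 1 are NUMA nodes of the same physical VE
}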
VEDA-smi
The executable `veda-smi` displays the available VEDA devices in your system. It respects the `VEDA_VISIBLE_DEVICES` env var and therefore only shows the devices that your VEDA application would be able to use. Use `VEDA_VISIBLE_DEVICES= veda-smi` to ensure that you see all installed devices.
╔ veda-smi ═════════════════════════════════════════════════════════════════════╗
║ VEDA Version: 0.10.0 AVEO Version: 0.9.15 ║
╚═══════════════════════════════════════════════════════════════════════════════╝
┌── #0 NEC SX-Aurora Tsubasa VE10B ────────────────────────────────────────────┐
┌ Physical: 1.0
├ AVEO: 0.0
├ Clock: current: 1400 MHz, base: 800 MHz, memory: 1600 MHz
├ Firmware: 5399
├ Memory: 49152 MiB
├ Cache: LLC: 8192kB, L2: 256kB, L1d: 32kB, L1i: 32kB
├ Temp: 56.4°C 56.4°C 57.0°C 56.1°C
└ Power: 18.0W (11.9V, 1.5A)
└───────────────────────────────────────────────────────────────────────────────┘
┌── #1 NEC SX-Aurora Tsubasa VE10B ────────────────────────────────────────────┐
┌ Physical: 1.1
├ AVEO: 0.1
├ Clock: current: 1400 MHz, base: 800 MHz, memory: 1600 MHz
├ Firmware: 5399
├ Memory: 49152 MiB
├ Cache: LLC: 8192kB, L2: 256kB, L1d: 32kB, L1i: 32kB
├ Temp: 56.1°C 56.4°C 55.9°C 56.0°C
└ Power: 18.0W (11.9V, 1.5A)
└───────────────────────────────────────────────────────────────────────────────┘
┌── #2 NEC SX-Aurora Tsubasa VE10B ────────────────────────────────────────────┐
┌ Physical: 0.0
├ AVEO: 1.0
├ Clock: current: 1400 MHz, base: 800 MHz, memory: 1600 MHz
├ Firmware: 5399
├ Memory: 49152 MiB
├ Cache: LLC: 16384kB, L2: 256kB, L1d: 32kB, L1i: 32kB
├ Temp: 53.8°C 53.5°C 54.1°C 53.8°C 53.8°C 54.1°C 53.2°C 53.5°C
└ Power: 36.3W (11.9V, 3.1A)
└───────────────────────────────────────────────────────────────────────────────┘
Profiling API
Since v1.5.0, VEDA supports adding a profiling callback using `vedaProfilerSetCallback(...)`. The callback needs to have the signature `void (*)(VEDAprofiler_data* data, int enter)`. If `enter` is non-zero, the callback got called right before issuing the command; if it is zero, the command just ended.
The data provides the following fields:
- `type`: an enum that identifies which kind of function got called (kernel, memcpy, ...)
- `device_id`: VEDA device id
- `stream_id`: VEDA stream id
- `req_id`: id of the request
- `user_data`: a `void*` that allows storing data between `enter` and `exit` of the event. This should be deleted by the user when `enter == 0` to prevent memory leaks.
Depending on the `type`, you can cast `data` to one of the following data types to get access to further information:
- `type in [VEDA_PROFILER_MEM_ALLOC, VEDA_PROFILER_HMEM_ALLOC]`: `VEDAprofiler_vedaMemAlloc`
  - `bytes`: number of bytes to be allocated
- `type in [VEDA_PROFILER_MEM_FREE, VEDA_PROFILER_HMEM_FREE]`: `VEDAprofiler_vedaMemFree`
  - `ptr`: pointer to be freed
- `type in [VEDA_PROFILER_MEM_CPY_HTOD, VEDA_PROFILER_MEM_CPY_DTOH, VEDA_PROFILER_HMEM_CPY]`: `VEDAprofiler_vedaMemcpy`
  - `dst`: destination pointer
  - `src`: source pointer
  - `bytes`: number of bytes transferred
- `type == VEDA_PROFILER_LAUNCH_KERNEL`: `VEDAprofiler_vedaLaunchKernel`
  - `func`: function pointer that gets called
  - `kernel`: name of the kernel that gets called
- `type == VEDA_PROFILER_LAUNCH_HOST`: `VEDAprofiler_vedaLaunchHostFunc`
  - `func`: function pointer that gets called
- `type == VEDA_PROFILER_SYNC`: `VEDAprofiler_data`
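A minimal callback sketch assembled from the fields and types listed above (it assumes these declarations come with the VEDA headers):

#include <stdio.h>

void myProfiler(VEDAprofiler_data* data, const int enter) {
	if(data->type == VEDA_PROFILER_LAUNCH_KERNEL) {
		VEDAprofiler_vedaLaunchKernel* k = (VEDAprofiler_vedaLaunchKernel*)data;
		printf("[%s] kernel %s (device %i, stream %i, req %i)\n",
			enter ? "enter" : "exit", k->kernel,
			(int)data->device_id, (int)data->stream_id, (int)data->req_id);
	}
	// data->user_data could carry state (e.g. a timestamp) from enter to exit;
	// it must be cleaned up by the user when enter == 0.
}

// registration:
// vedaProfilerSetCallback(myProfiler);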
C++ API (Experimental!)
Starting with v1.5.0, we introduce a new experimental and lightweight C++ API. This API aims for easier usage of VEDA, with much more comfort in C++ applications. To include the new API, just use `#include <veda/cpp/api.h>`.
Error Handling
In contrast to the C API, the C++ API uses exceptions, which can be used like this:
try {
...
} catch(const veda::Exception& e) {
std::cerr << e.what() << " @ " << e.file() << " (" << e.line() << ")";
}
Fetching a Device Handle
To get a handle to a device, just create an instance using:
veda::Device device(0);
In contrast to the C API, `veda::Device` incorporates the `VEDAdevice` and the `VEDAcontext` in a single object. We use a lazy scheme, which will not boot up the device context until you allocate memory, load a module, or similar.
The device provides the following attributes and metrics: `isActive`, `current`, `currentEdge`, `distance`, `power`, `temp`, `voltage`, `voltageEdge`, `abi`, `aveoId`, `cacheL1d`, `cacheL1i`, `cacheL2`, `cacheLLC`, `clockBase`, `clockMemory`, `clockRate`, `cores`, `firmware`, `model`, `numaId`, `physicalId`, `singleToDoublePerfRatio`, `streamCnt`, `vedaId`, `totalMem`, `usedMem`.
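A hypothetical sketch, assuming the attributes listed above are exposed as parameterless methods of `veda::Device`:

#include <iostream>
#include <veda/cpp/api.h>

veda::Device dev(0);
std::cout << "VE #"     << dev.vedaId()  // hypothetical accessor form
          << ", cores: " << dev.cores()
          << ", memory: " << (dev.totalMem() / (1024 * 1024)) << " MiB"
          << std::endl;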
If your application requires CUDA-style programming, where you bind the device to a specific thread, you can use `device.pushCurrent()`, `device.setCurrent()`, and `auto device = Device::getCurrent()` or `auto device = Device::popCurrent()`. To synchronize the execution, use `device.sync()` or `device.sync(stream)`. A short sketch follows below.
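Sketched using only the calls named above:

veda::Device device(0);
device.pushCurrent();                      // bind the device to this thread
auto current = veda::Device::getCurrent(); // fetch the currently bound device
current.sync();                            // synchronize all streams of the device
auto popped = veda::Device::popCurrent();  // unbind again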
Loading Modules
Just do:
auto mod = dev.load("libmymodule.vso");
Memory Buffer Objects
The new C++ API uses buffer objects instead of raw pointers. These can be allocated using `dev.alloc<float>(cnt)`, which will allocate `sizeof(T) * cnt` bytes of memory. If you want to use a different stream, just use `dev.alloc<float>(cnt, stream)`. To allocate HMEM memory, use `dev.alloc<float, veda::HMEM>(size)`.
To copy data between different buffers, or between the host and the VE, just use:
auto VE = dev.alloc<float>(cnt);
auto VH = (float*)malloc(sizeof(float) * cnt);
VE.to(VH); // copies all items from VE to VH
VE.to(VH, 1); // copies the first item from VE to VH
VE[4].to(VH + 4, 1); // copies the 5th item from VE to VH
VE.from(VH); // copies all items from VH to VE
auto V2 = dev.alloc<float>(cnt);
V2.to(VE); // copies all items from V2 to VE
VE.from(V2); // copies all items from V2 to VE
To memset data use:
VE.memset(3.1415); // set all items
VE[5].memset(3.1415); // set all items starting the 6th
VE[5].memset(3.1415, 1);// set only the 6th item
To cast a buffer object to another type:
auto Float = dev.alloc<float>(cnt);
auto Int32 = Float.cast<int32_t>(); // Float.cnt() == Int32.cnt()
auto Int16 = Float.cast<int16_t>(); // Float.cnt() == Int16.cnt()*2
All buffer objects use shared-pointer semantics: when all objects using the same source pointer have been destroyed, the memory is freed automatically.
To pass pointers between methods, just pass on the buffer object:
veda::Ptr<VEDA, float> func(...) {
...
auto ptr = dev.alloc<float>(cnt);
...
return ptr;
}
Fetching Functions
For fetching functions, we provide three helpers.
- C-style or `extern "C"` functions:

// VE
extern "C" int name(int, float, VEDAdeviceptr);

// VH
using namespace veda;
auto func   = CFunction::Return<int>(mod, "name");
auto result = func(0, 3.14f, ptr);
printf("%i\n", int(result));

`CFunction::Return<int>` returns an executable object for a C function on the VE. Whenever you call `func(...)`, it issues a kernel call. By default we use stream #0, but you can use `func[stream](...)` to select the stream yourself. `result` is a future object. When you call `result.wait()` or fetch the result using `(TYPE)result` or `result.get()`, it will synchronize the execution and provide the return value. The `::Return<...>` can be omitted when no return value is expected.

- C++-style functions:

// VE
int name(int, float, VEDAdeviceptr);

// VH
using namespace veda;
auto func   = Function::Return<int>::Args<int, float, VEDAdeviceptr>(mod, "name");
auto result = func(0, 3.14f, ptr);
printf("%i\n", int(result));

For C++-style functions use `Function` instead of `CFunction`. In this case you also need to provide the types of all arguments using `Args<...>`. Again, `::Return<...>` can be omitted when no return value is expected.
Struct types can also be used as arguments:

// VE + VH
namespace whatever {
	template<typename T>
	struct complex {
		T x, y;
	};
}

// VE
void name(VEDAdeviceptr, whatever::complex<float>);

// VH
auto func = Function::Args<VEDAdeviceptr, whatever::complex<float>>(mod, "name");
whatever::complex<float> x = {3.0f, 4.0f};
func(ptr, x);

- Template functions:

// VE
template<typename T, typename D>
T name(T, float, D);
template int name<int, VEDAdeviceptr>(int, float, VEDAdeviceptr);

// VH
using namespace veda;
auto func = Template<int, VEDAdeviceptr>::Return<_0>::Args<_0, float, _1>(mod, "name");

Last, we also support fetching templated functions. Here it is important that the template gets explicitly instantiated in the VE code using the `template ... name<...>(...);` syntax. Otherwise the compiler will not generate this specific templated function.
On the VH, we first define the template parameters using `Template<...>`; next, as before, the return type (if it is `::Return<void>`, it can be omitted); and last the arguments, similar to `Function`.
In the code above you see `veda::_0` and `veda::_1`. These correspond to the template parameters: `_0` is the 0th, `_1` the 1st, and so on. It is necessary to use these template placeholders within `Return<...>` and `Args<...>` at the same locations as within the C++ code.
If your template uses literals, such as:

template<int i, typename T>
T name(T a) {
	return a + i;
}
template float name<0>(float);
template float name<5>(float);
template int   name<5>(int);

you can use the following code on the VH:

auto name_f0 = Template<Literal<0>, float>::Return<_1>::Args<_1>(...);
auto name_f5 = Template<Literal<5>, float>::Return<_1>::Args<_1>(...);
auto name_i5 = Template<Literal<5>, int>  ::Return<_1>::Args<_1>(...);

It's important that the data type you pass to `Literal<...>` matches the data type you use in your `template<...>`, i.e. if you use `template<char...>`, then you need to use `Literal('x')` or `Literal(char(15))`. Only integer-like types (char, short, ...) can be used as template literals.
For all function fetching methods it's important that the function arguments exactly match the ones you use in your VE C++ code. Otherwise fetching the function will fail at runtime!
SX-Aurora VE3 support
Since v2.1.0, VEDA supports the SX-Aurora VE3. It's important that your libraries are compatible with the architecture in use. Use these compile and linking flags:
Architecture | Flags | File Extension |
---|---|---|
VE1+2 | `-march=ve1 -stdlib=libc++` | `*.vso` |
VE3 | `-march=ve3 -stdlib=libc++` | `*.vso3` |
To load the library you can just use `vedaModuleLoad(&mod, "libsomething.vso")` and VEDA will automatically load `libsomething.vso` for VE1+2 or `libsomething.vso3` for VE3.
By default, VEDA automatically determines which architecture to use. You can override this behavior by setting the env var `VEDA_ARCH=1` or `VEDA_ARCH=3`. Be warned: you cannot run `VEDA_ARCH=3` on a VE1, but you can use `VEDA_ARCH=1` on a VE3!
If you are unsure which architecture your library is built for, you can use `nreadelf -h libsomething.vso | grep 'Flags'`. Flags ending with 0 are for VE1, flags ending with 1 are for VE3.
Limitations/Known Problems:
- VEDA only supports one `VEDAcontext` per device.
- No unified memory space (yet).
- VEDA by default uses the current working directory for loading modules. This behavior can be changed by using the env var `VE_LD_LIBRARY_PATH`.
- Due to compiler incompatibilities, it can be necessary to adjust the CMake variable `${AVEO_NFORT}` to another compiler.
- The C++ API can only return fundamental (void, int, short, ...) values.
- The C++ API cannot compile `...::Args<void>`. Use `...::Args<>` instead.
How to build:
git clone https://github.com/SX-Aurora/veda/
mkdir veda/build
cd veda/build
# Build Option 1: Local installation (default: /usr/local/ve (use -DCMAKE_INSTALL_PREFIX=... for other path))
cmake3 -DVEDA_DIST_TYPE=LOCAL ..
cmake3 --build . --target install
# Build Option 2: VEOS installation
cmake3 -DVEDA_DIST_TYPE=VEOS ..
cmake3 --build . --target install
# Build Option 3: Python package
pip3 install illyrian tungl
illyrian cmake3 -DVEDA_DIST_TYPE=PYTHON ..
cmake3 --build . --target dist
How to use:
VEDA ships with its own CMake find script, which supports 3 modes. The script uses the compilers installed in `/opt/nec/ve/bin`. You can modify the `CMAKE_[LANG]_COMPILER` flags to change that behavior. See the Hello World examples in the Examples folder.
1. VEDA Hybrid Offloading:
This mode is necessary for VEDA offloading applications. It enables compiling host and device code within the same CMake project. For this, it is necessary to use different file extensions for the VE code: all `*.vc` files get compiled using NCC, `*.vcpp` using NC++ and `*.vf` with NFORT.
SET(CMAKE_MODULE_PATH /usr/local/ve/veda/cmake /opt/nec/ve/share/veda/cmake)
FIND_PACKAGE(VEDA)
ENABLE_LANGUAGE(VEDA_C VEDA_CXX)
INCLUDE_DIRECTORIES(${VEDA_INCLUDE_DIRS})
ADD_EXECUTABLE(myApp mycode.vc mycode.vcpp)
TARGET_LINK_LIBRARIES(myApp ${VEDA_LIBRARY})
2. VE Native applications:
This mode enables compiling VE native applications.
SET(CMAKE_MODULE_PATH /usr/local/ve/veda/cmake /opt/nec/ve/share/veda/cmake)
FIND_PACKAGE(VEDA)
ENABLE_LANGUAGE(VEDA_C VEDA_CXX)
ADD_EXECUTABLE(myApp mycode.c mycode.cpp)
3. VE Native Injection:
If you have a CPU application and you don't want to modify the CMake script, you can build your project using:
cmake -C /usr/local/ve/veda/cmake/InjectVE.cmake /path/to/your/source
It will replace the CPU `C`, `CXX` and `Fortran` compilers with NCC.