6. Kernel (functions)

6.1 Source Code

6.1.1. Include

#include <iostream>: C++ input/output streams. [3]

#include <cuda.h>: defines the public host functions and types for the CUDA driver API.

#include <cuda_runtime.h>: defines everything cuda_runtime_api.h does, as well as built-in type definitions and function overlays for the CUDA language extensions and device intrinsic functions.

#include <device_launch_parameters.h>: declares the built-in variables used when launching kernels on the GPU (threadIdx, blockIdx, blockDim, gridDim).

#include <cmath>: declares a set of functions to compute common mathematical operations and transformations.

#include <engine.h>: declares the MATLAB Engine API (engOpen, engGetVariable, engClose, ...) and is necessary to make a connection to MATLAB.

#include <windows.h>: a Windows-specific header file for the C/C++ programming language. [6]

#include <fstream>: provides the input/output stream classes necessary to operate on files.

#include <string>: introduces string types, character traits and a set of converting functions.

#include "myPerms.h": see 6.1.6.


6.1.2. Connection to Matlab-Engine

The command Engine *m_pEngine = engOpen(NULL); starts a MATLAB engine session. It returns a pointer to an engine handle, or NULL if the open fails. If it does not return NULL, the command std::cout << "Success. Matlab is there." << std::endl; is executed, and the message "Success. Matlab is there." is printed.
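The connection step described above can be sketched as follows. This is a minimal sketch: it requires the MATLAB Engine header and libraries, so it will not build without a MATLAB installation.

```cpp
#include <iostream>
#include "engine.h"  // MATLAB Engine API: engOpen, engClose, ...

int main() {
    // Start a MATLAB engine session; engOpen(NULL) launches MATLAB locally.
    Engine *m_pEngine = engOpen(NULL);
    if (m_pEngine == NULL) {
        std::cerr << "Can't start MATLAB engine." << std::endl;
        return 1;
    }
    std::cout << "Success. Matlab is there." << std::endl;

    // ... use the engine ...

    engClose(m_pEngine);  // quit the MATLAB engine session
    return 0;
}
```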


6.1.3. Passing the constants from MATLAB to C++

In the #pragma region CopyConstantDataToHost the constants are read in from MATLAB.

Important functions are mxArray, engGetVariable, mxGetScalar, mxGetField and mxGetData.
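Assuming the constants live in a MATLAB struct in the workspace (the variable and field names below are illustrative, not taken from the actual code), the read-in using these functions typically looks like this:

```cpp
// Fetch a struct variable from the MATLAB workspace.
mxArray *pSim = engGetVariable(m_pEngine, "SIM");
if (pSim != NULL) {
    // Read a scalar field of the struct ...
    mxArray *pField = mxGetField(pSim, 0, "NumBlocks");
    double numBlocks = mxGetScalar(pField);

    // ... or obtain a raw pointer to an array field's data.
    double *data = static_cast<double*>(mxGetData(mxGetField(pSim, 0, "Ik")));

    mxDestroyArray(pSim);  // release the local copy
}
```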


6.1.4. Allocate GPU buffers

cudaMalloc (devP, size) allocates size bytes of linear memory on the device and returns in *devP a pointer to the allocated memory. The allocated memory is suitably aligned for any kind of variable. The memory is not cleared. cudaMalloc() returns cudaErrorMemoryAllocation in case of failure. [5]


devP - Pointer to allocated device memory

size - Requested allocation size in bytes

Important: Please pay attention to the difference between single and double precision!


6.1.5. Copy data to GPU

cudaMemcpy() copies count bytes from the memory area pointed to by the source address to the memory area pointed to by the destination, with the kind of transfer given as the last argument (here: cudaMemcpyHostToDevice). [4]

Important: Please pay attention to the difference between single and double precision!
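Putting 6.1.4 and 6.1.5 together, the allocate-and-copy pattern can be sketched as below. The names are illustrative; this requires the CUDA toolkit and a device, so it is not runnable stand-alone.

```cpp
#include <cuda_runtime.h>

// Hypothetical host buffer of N doubles, already filled with data.
const size_t N = 1024;
double h_x[N];

// Allocate N doubles of linear device memory (6.1.4).
double *d_x = NULL;
cudaError_t err = cudaMalloc((void**)&d_x, sizeof(double) * N);
if (err != cudaSuccess) { /* handle cudaErrorMemoryAllocation */ }

// Copy host -> device (6.1.5); the last argument selects the transfer kind.
cudaMemcpy(d_x, h_x, sizeof(double) * N, cudaMemcpyHostToDevice);
```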


6.1.6. Launching the kernel on GPU

The number of iteration steps is calculated from NumLogs and NumStepsPerLog. These parameters are set in SimConfig_dp_dist.m.

NumLogs $ \cdot $ NumStepsPerLog $ \cdot $ 7 = number of iteration steps


The main kernels are executed inside a triply nested for-loop.

To understand these kernels, we have to take a look at the parameters in the angle brackets. The first number represents the number of parallel blocks in which we would like the device to execute our kernel. In this case, we're passing the value N for this parameter. For example, if we launch with kernel<<<N,1>>>(), you can think of the runtime creating N copies of the kernel and running them in parallel; we call each of these parallel invocations a block. The CUDA runtime allows these blocks to be split into threads. The second parameter inside the angle brackets represents the number of threads per block we want the CUDA runtime to create on our behalf. [7]
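As a hedged illustration of the <<<blocks, threads>>> syntax, the following sketch uses a placeholder kernel, not one of the simulation kernels listed below (it requires the CUDA toolkit to compile):

```cpp
#include <cuda_runtime.h>

// Placeholder kernel: each block writes its own block index.
__global__ void kernel(int *out) {
    out[blockIdx.x] = blockIdx.x;
}

// Launch N blocks of 1 thread each, as in kernel<<<N,1>>>():
//     kernel<<<N, 1>>>(d_out);
// Launch SIM.NumBlocks blocks of SIM.SizeLocal threads each,
// as done for the kernels below:
//     kernel<<<SIM.NumBlocks, SIM.SizeLocal>>>(d_out);
```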

Current<<< SIM.NumBlocks , SIM.SizeLocal >>> (d_x[perm_curr[5]], d_y[perm_curr[5]], d_Ik );

CurrentTotal<<< 1 , 1 >>> (d_Ik, d_I, d_dI);


RunIONBash6<<< SIM.NumBlocks, SIM.SizeLocal, SIM.SizeLocal_bytes >>> (d_xP, d_yP, d_z[perm_curr[6]], d_x[perm_curr[5]], d_y[perm_curr[5]], d_z[perm_curr[5]], (...), d_y[perm_curr[1]], d_z[perm_curr[1]], d_y[perm_curr[0]], d_U[perm_curr[5]], SIM.e_2);

RunRLCBash6<<< 1 , 1 >>> (d_UP, d_VP, d_g[perm_curr[6]], d_U[perm_curr[5]], d_V[perm_curr[5]], d_g[perm_curr[5]], (...), d_V[perm_curr[1]], d_g[perm_curr[1]], d_V[perm_curr[0]], d_I, d_dI);

Current<<< SIM.NumBlocks , SIM.SizeLocal >>> (d_xP, d_yP, d_Ik); //implicit step

CurrentTotal<<< 1 , 1 >>>( d_Ik, d_IP, d_dIP );

RunIONBash6_P<<< SIM.NumBlocks , SIM.SizeLocal, SIM.SizeLocal_bytes >>> (d_x[perm_curr[6]], d_y[perm_curr[6]], d_z[perm_curr[6]], d_x[perm_curr[5]], d_y[perm_curr[5]], d_z[perm_curr[5]], d_y[perm_curr[4]], d_z[perm_curr[4]], d_y[perm_curr[3]], d_z[perm_curr[3]], d_y[perm_curr[2]], d_z[perm_curr[2]], d_y[perm_curr[1]], d_xP, d_yP, d_UP, SIM.e_2);

RunRLCBash6_P<<< 1 , 1 >>> (d_U[perm_curr[6]], d_V[perm_curr[6]], d_g[perm_curr[6]], d_U[perm_curr[5]], d_V[perm_curr[5]], d_g[perm_curr[5]], d_V[perm_curr[4]], d_g[perm_curr[4]], d_V[perm_curr[3]], d_g[perm_curr[3]], d_V[perm_curr[2]], d_g[perm_curr[2]], d_V[perm_curr[1]], d_UP, d_VP, d_IP, d_dIP);

The function cudaDeviceSynchronize(); is called between each of these kernels. It blocks until the device has completed all preceding requested tasks, and returns an error if one of the preceding tasks has failed. If the cudaDeviceScheduleBlockingSync flag was set for this device, the host thread will block until the device has finished its work.
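The gpuErrchk wrapper used in 6.1.8 is not defined on this page. A common definition of this pattern (a sketch of the usual idiom, not necessarily the exact one used in the code) is:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Abort with file/line information if a CUDA API call fails.
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line) {
    if (code != cudaSuccess) {
        fprintf(stderr, "GPUassert: %s %s %d\n",
                cudaGetErrorString(code), file, line);
        exit(code);
    }
}

// Usage: gpuErrchk(cudaDeviceSynchronize());
```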

A detailed explanation of the kernels can be found in the next section.

Important: Please pay attention to the difference between single and double precision!


6.1.7. Copy data to CPU

(see 6.1.5) Type of transfer (here: cudaMemcpyDeviceToHost)


6.1.8. Save the Data

The function myFile_Ik.write(reinterpret_cast<const char*>(SIM.Ik), sizeof(double)*SIM.SizeGlobal) attempts to write sizeof(double)*SIM.SizeGlobal bytes from the buffer to the file Ik (e.g. C:\Users\DarkNemo\Documents\MATLAB\CUDA\Cloud8\Output1). The same is done for x, y, I, U and V. Then all files are closed.


ofstream myFile_Pos5((PATH.NewInputPath + "\\Pos5.bin").c_str(), ios::out | ios::binary);

gpuErrchk(cudaMemcpy(SIM.x5, d_x[perm_curr[6]], sizeof(double)*SIM.SizeGlobal*3, cudaMemcpyDeviceToHost));

myFile_Pos5.write(reinterpret_cast<const char*>(SIM.x5), sizeof(double)*SIM.SizeGlobal*3);


Analogously: Velo0 to Velo5, I0, U3, V0 to V5 (e.g. C:\Users\DarkNemo\Documents\MATLAB\CUDA\Cloud8\Input2).

Important: Please pay attention to the difference between single and double precision!


6.1.9. Delete data and free memory

delete SIM.Ik;

engClose(m_pEngine); //Quit MATLAB engine session
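Device buffers allocated with cudaMalloc in 6.1.4 also need to be released. A hedged sketch of the full cleanup (d_Ik stands for any of the device pointers; it is assumed to have been allocated with cudaMalloc):

```cpp
cudaFree(d_Ik);        // free device memory allocated with cudaMalloc

delete SIM.Ik;         // release the host buffer
engClose(m_pEngine);   // quit the MATLAB engine session
```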



-- EvaMartenstein - 20 Apr 2015
Topic revision: r16 - 2015-05-05, EvaMartenstein