This is a High-Performance Computing question:
Assume you have the code of a naïve parallel version of matrix-matrix multiplication using CUDA and C++ (note that the kernel should do the multiplication).
Use square matrices.
Use 1D execution configurations so that a thread handles a whole row. The code uses 4 different matrix sizes over 1000 and 2 different block sizes.
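A sketch of what such a naïve kernel might look like, assuming "a whole row" per thread, row-major storage, and double precision (the kernel name is illustrative):

```cuda
// Naive 1D kernel sketch: each thread computes one full row of C = A * B.
// A, B, C are n x n row-major double-precision matrices on the device.
__global__ void matmulRowPerThread(const double* A, const double* B,
                                   double* C, int n) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;  // 1D configuration
    if (row < n) {
        for (int col = 0; col < n; ++col) {
            double sum = 0.0;
            for (int k = 0; k < n; ++k)
                sum += A[row * n + k] * B[k * n + col];
            C[row * n + col] = sum;
        }
    }
}
```

A launch for this mapping would look like `matmulRowPerThread<<<(n + block - 1) / block, block>>>(dA, dB, dC, n);`, so one thread exists per row of the result.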
Provide the code, using CUDA and C++, for an OPTIMIZED parallel version of matrix-matrix multiplication by "varying the size of your computational grid: change the number of CUDA threads and blocks".
Requirements:
Verify that the results are correct by comparing them against cuBLAS's.
Use double precision throughout the program.
Use square matrices.
Calculate the time that it took for the kernel to do the multiplication.
Calculate the total time from transferring the matrices from host to device up to retrieving the results from the device back to the host.
Calculate the execution rate (FLOPS) of the kernel.
For each configuration (matrix size, block size, and number of threads used), explain how the program should behave while executing on a GPU.
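Under the assumptions above (row-major double matrices, a kernel named `matmulRowPerThread` defined elsewhere), the timing, FLOPS, and cuBLAS-verification requirements could be wired together roughly as follows; all names and the exact structure are illustrative, not a definitive implementation:

```cuda
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>
#include <cublas_v2.h>

// Assumed defined elsewhere: the kernel under test (naive or optimized).
__global__ void matmulRowPerThread(const double* A, const double* B,
                                   double* C, int n);

// Sketch: times the kernel alone and the full H2D -> kernel -> D2H path
// with CUDA events, derives the execution rate, and checks against cuBLAS.
void runAndMeasure(const double* hA, const double* hB, double* hC,
                   int n, int blockSize) {
    size_t bytes = (size_t)n * n * sizeof(double);
    double *dA, *dB, *dC, *dRef;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes); cudaMalloc(&dRef, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);                                // transfers begin
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    int grid = (n + blockSize - 1) / blockSize;         // one thread per row
    cudaEventRecord(t1);                                // kernel begins
    matmulRowPerThread<<<grid, blockSize>>>(dA, dB, dC, n);
    cudaEventRecord(t2);                                // kernel ends

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    cudaEventRecord(t3);                                // transfers end
    cudaEventSynchronize(t3);

    float kernelMs, totalMs;
    cudaEventElapsedTime(&kernelMs, t1, t2);
    cudaEventElapsedTime(&totalMs, t0, t3);

    // n^3 multiplications + n^3 additions = 2*n^3 floating-point operations.
    double gflops = 2.0 * n * n * n / (kernelMs * 1e-3) / 1e9;
    printf("N=%d block=%d: kernel %.3f ms, H2D+kernel+D2H %.3f ms, %.2f GFLOPS\n",
           n, blockSize, kernelMs, totalMs, gflops);

    // Verification: cublasDgemm is column-major, so computing B*A on the
    // row-major buffers yields A*B in row-major layout.
    cublasHandle_t handle;
    cublasCreate(&handle);
    const double one = 1.0, zero = 0.0;
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &one, dB, n, dA, n, &zero, dRef, n);
    double* hRef = new double[(size_t)n * n];
    cudaMemcpy(hRef, dRef, bytes, cudaMemcpyDeviceToHost);

    double maxDiff = 0.0;
    for (size_t i = 0; i < (size_t)n * n; ++i)
        maxDiff = fmax(maxDiff, fabs(hC[i] - hRef[i]));
    printf("max |C - C_cublas| = %g\n", maxDiff);

    delete[] hRef;
    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC); cudaFree(dRef);
}
```

As a rough expectation per configuration: larger matrices give the GPU more parallel rows to hide memory latency, so the measured GFLOPS should rise with N, while the block size mainly affects occupancy; the one-row-per-thread mapping leaves many threads idle for small N and is memory-bound throughout, which is what the optimized grid-varying version is meant to improve.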