🎉CUDA notes / hand-written CUDA kernels for large language models / C++ notes, updated sporadically: flash_attn, sgemm, sgemv, warp reduce, block reduce, dot product, elementwise, softmax, layernorm, rmsnorm, hist, etc.
Updated Jun 23, 2024 · CUDA
This is a series of GPU optimization topics covering how to optimize CUDA kernels in detail. It introduces several basic kernel optimizations, including elementwise, reduce, sgemv, and sgemm; the performance of these kernels is at or near the theoretical limit.
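As a minimal sketch of the elementwise pattern mentioned above (the kernel name and launch configuration below are illustrative, not taken from any of the listed repositories):

```cuda
// Minimal elementwise add kernel: c[i] = a[i] + b[i].
// A grid-stride loop lets any grid size cover arrays of length n.
__global__ void elementwise_add(const float* a, const float* b, float* c, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x) {
        c[i] = a[i] + b[i];
    }
}

// Example launch with 256 threads per block (d_a, d_b, d_c are device pointers):
//   int block = 256;
//   int grid  = (n + block - 1) / block;
//   elementwise_add<<<grid, block>>>(d_a, d_b, d_c, n);
```

Because each output element depends only on the inputs at the same index, elementwise kernels are memory-bound; the optimization goal is saturating memory bandwidth (e.g. via vectorized `float4` loads) rather than reducing arithmetic.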
- Strided array math operations.
- Standard library strided math functions.
- Base strided.
- Compute the absolute value.
- Standard library strided array special math functions.
- Standard library special math functions.
- Apply a function to each element in an array and assign the result to an element in an output array, iterating from right to left.
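The strided descriptions above share one pattern: elements are read and written at fixed index strides rather than contiguously. A hedged CUDA sketch of a strided absolute-value kernel (the name and signature are assumptions for illustration, not an API from any listed library):

```cuda
#include <math.h>

// Strided elementwise |x|: y[i * strideY] = fabsf(x[i * strideX]) for i in [0, n).
// Strides are in elements, not bytes; non-unit strides generally reduce
// memory coalescing, which is why contiguous layouts are preferred when possible.
__global__ void strided_abs(const float* x, int strideX,
                            float* y, int strideY, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i * strideY] = fabsf(x[i * strideX]);
    }
}
```

With `strideX = strideY = 1` this degenerates to the ordinary contiguous elementwise case.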