llama.cpp/ggml
Gaurav Garg c262beddf2
CUDA: Prefer vector flash decoding kernel for Gemma models (#12738)
* Prefer vector flash decoding kernel for Gemma models

The vector flash decoding kernel was not being selected for models with a head dimension of 256, a category that includes the Gemma models. Removing this limit improves end-to-end performance by up to 12% in gen-phase throughput for Gemma models.
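A minimal sketch of the selection heuristic this change touches. The real dispatch lives in ggml_cuda_flash_attn_ext() in ggml/src/ggml-cuda/fattn.cu and considers more factors (GPU architecture, precision, masks); the function and parameter names below are illustrative assumptions, not the upstream API.

```cpp
#include <cstdint>

// Hypothetical sketch of the kernel-selection change described above.
// Names are illustrative; the actual logic is in ggml-cuda/fattn.cu.
static bool prefer_vec_flash_decoding(int64_t head_dim, int64_t n_q_tokens) {
    // Decode ("gen" phase) batches process very few query tokens at a time,
    // which is where the vector kernel's lower per-launch overhead wins.
    const bool is_decode_batch = n_q_tokens <= 8;

    // Before this change, an additional head-dimension guard excluded models
    // with head_dim == 256 (e.g. Gemma) from the vector kernel:
    //     return is_decode_batch && head_dim <= 128;
    // The commit drops that guard, leaving only the batch-size heuristic.
    (void) head_dim;
    return is_decode_batch;
}
```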

* Update ggml/src/ggml-cuda/fattn.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-04-03 18:20:29 +02:00
cmake           scripts : update sync + fix cmake merge                               2025-03-27 10:09:29 +02:00
include         metal : improve FA + improve MoE (#12612)                             2025-03-28 20:21:59 +02:00
src             CUDA: Prefer vector flash decoding kernel for Gemma models (#12738)   2025-04-03 18:20:29 +02:00
.gitignore      vulkan : cmake integration (#8119)                                    2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add logging for native build options/vars (whisper/2935)       2025-03-30 08:33:31 +03:00