llama.cpp/ggml
Jeff Bolz f01bd02376
vulkan: Implement split_k for coopmat2 flash attention. (#12627)
When using group query attention, there is one workgroup per KV batch, which can
mean very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
2025-04-02 14:25:08 -05:00
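The split_k decomposition described in the commit above can be illustrated outside the shader: each split computes attention over one chunk of the KV cache, recording its local softmax max and sum, and the partial outputs are then rescaled and combined. This is a minimal NumPy sketch of that combine step, not the coopmat2 shader code; the function names, shapes, and split count are illustrative.

```python
import numpy as np

def attention(q, K, V):
    # reference single-pass attention for one query vector
    s = q @ K.T / np.sqrt(q.shape[-1])
    w = np.exp(s - s.max())
    return (w / w.sum()) @ V

def attention_split_k(q, K, V, splits):
    # each split handles a contiguous chunk of the KV cache,
    # keeping its local max m and exp-sum l for the later combine
    partial = []
    for Kc, Vc in zip(np.array_split(K, splits), np.array_split(V, splits)):
        s = q @ Kc.T / np.sqrt(q.shape[-1])
        m = s.max()
        w = np.exp(s - m)
        partial.append((m, w.sum(), w @ Vc))
    # combine pass: rescale every partial result to a common max
    g = max(m for m, _, _ in partial)
    num = sum(np.exp(m - g) * o for m, l, o in partial)
    den = sum(np.exp(m - g) * l for m, l, o in partial)
    return num / den

rng = np.random.default_rng(0)
q = rng.normal(size=8)
K = rng.normal(size=(32, 8))
V = rng.normal(size=(32, 4))
assert np.allclose(attention(q, K, V), attention_split_k(q, K, V, 4))
```

The rescaling by `exp(m - g)` is what makes the per-split softmaxes compose exactly, so the split count can be chosen purely for occupancy (spreading work across SMs) without changing the result.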
cmake           scripts : update sync + fix cmake merge                           2025-03-27 10:09:29 +02:00
include         metal : improve FA + improve MoE (#12612)                         2025-03-28 20:21:59 +02:00
src             vulkan: Implement split_k for coopmat2 flash attention. (#12627)  2025-04-02 14:25:08 -05:00
.gitignore      vulkan : cmake integration (#8119)                                2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add logging for native build options/vars (whisper/2935)   2025-03-30 08:33:31 +03:00