llama.cpp/ggml
Jeff Bolz dc1d2adfc0
vulkan: scalar flash attention implementation (#13324)
* vulkan: scalar flash attention implementation

* vulkan: always use fp32 for scalar flash attention

* vulkan: use vector loads in scalar flash attention shader

* vulkan: remove PV matrix, helps with register usage (see the accumulation sketch below)

* vulkan: reduce register usage in scalar FA, but perf may be slightly worse

* vulkan: load each Q value once. optimize O reduction. more tuning

* vulkan: support q4_0/q8_0 KV in scalar FA (block layouts sketched below)

* CI: increase timeout to accommodate newly-supported tests

* vulkan: for scalar FA, select between 1 and 8 rows

* vulkan: avoid using Float16 capability in scalar FA
2025-05-10 08:07:07 +02:00
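The squashed commits above follow the standard online-softmax formulation of flash attention: the score matrix is never materialized, and the output accumulator O is rescaled in place as each new K/V row arrives, which is why the separate PV intermediate can be dropped. Below is a minimal scalar sketch in plain C++ for one query row; it is illustrative only, not the Vulkan GLSL shader itself, and all names (flash_attn_row, q, k, v, scale, head_dim) are made up for this sketch.

```cpp
// Illustrative scalar flash attention for one query row, fp32 throughout.
// Not the actual GLSL shader; all names here are invented for the sketch.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<float> flash_attn_row(const std::vector<float> &q,
                                  const std::vector<std::vector<float>> &k,
                                  const std::vector<std::vector<float>> &v,
                                  float scale) {
    const size_t head_dim = q.size();
    float m = -INFINITY;                   // running max of the scaled scores
    float l = 0.0f;                        // running softmax denominator
    std::vector<float> o(head_dim, 0.0f);  // output accumulator, rescaled in place

    for (size_t j = 0; j < k.size(); ++j) {
        // score for this K row; each Q value is read once per row
        float s = 0.0f;
        for (size_t d = 0; d < head_dim; ++d) {
            s += q[d] * k[j][d];
        }
        s *= scale;

        const float m_new = std::max(m, s);
        const float corr  = std::exp(m - m_new);  // rescale factor for old state
        const float p     = std::exp(s - m_new);  // weight of the new V row

        // Accumulate p * V directly into O instead of keeping a PV intermediate:
        // one fewer live matrix, which is where the register savings come from.
        for (size_t d = 0; d < head_dim; ++d) {
            o[d] = o[d] * corr + p * v[j][d];
        }
        l = l * corr + p;
        m = m_new;
    }

    // final O reduction: a single normalization at the end
    for (size_t d = 0; d < head_dim; ++d) {
        o[d] /= l;
    }
    return o;
}
```

Keeping m, l, and o in fp32 keeps the exponentials and the running rescale numerically stable, which is presumably the motivation for the "always use fp32 for scalar flash attention" commit.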
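For the q4_0/q8_0 KV support, the shader has to dequantize ggml's block formats on load. The block layouts below match ggml's definitions (32 values per block, fp16 scale), but the code is a standalone illustration rather than the shader's actual load path, and fp16_to_fp32 is a simplified decoder written for this sketch.

```cpp
// Illustrative dequantization of ggml's q4_0 / q8_0 blocks (32 values each).
// Struct layouts mirror ggml's; the fp16 decode helper is a stand-in.
#include <cstdint>
#include <cstring>

struct block_q4_0 {
    uint16_t d;       // fp16 scale, stored as raw bits
    uint8_t  qs[16];  // 32 x 4-bit quants, two per byte
};

struct block_q8_0 {
    uint16_t d;       // fp16 scale, stored as raw bits
    int8_t   qs[32];  // 32 x 8-bit quants
};

// Minimal fp16 -> fp32 decode; subnormals flushed to zero for brevity.
static float fp16_to_fp32(uint16_t h) {
    const uint32_t sign = (uint32_t)(h >> 15) << 31;
    const uint32_t exp  = (h >> 10) & 0x1F;
    const uint32_t mant = h & 0x3FF;
    uint32_t bits;
    if (exp == 0) {
        bits = sign;                               // zero / subnormal -> 0
    } else if (exp == 31) {
        bits = sign | 0x7F800000u | (mant << 13);  // inf / NaN
    } else {
        bits = sign | ((exp + 112) << 23) | (mant << 13);
    }
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

// q4_0: value = (nibble - 8) * d; low nibbles fill out[0..15], high out[16..31]
static void dequant_q4_0(const block_q4_0 &b, float out[32]) {
    const float d = fp16_to_fp32(b.d);
    for (int i = 0; i < 16; ++i) {
        out[i]      = (float)((b.qs[i] & 0x0F) - 8) * d;
        out[i + 16] = (float)((b.qs[i] >>   4) - 8) * d;
    }
}

// q8_0: value = q * d
static void dequant_q8_0(const block_q8_0 &b, float out[32]) {
    const float d = fp16_to_fp32(b.d);
    for (int i = 0; i < 32; ++i) {
        out[i] = (float)b.qs[i] * d;
    }
}
```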
cmake            scripts : update sync + fix cmake merge                  2025-03-27 10:09:29 +02:00
include          CUDA: fix bad asserts for partial offload (#13337)       2025-05-06 13:58:51 +02:00
src              vulkan: scalar flash attention implementation (#13324)   2025-05-10 08:07:07 +02:00
.gitignore       vulkan : cmake integration (#8119)                       2024-07-13 18:12:39 +02:00
CMakeLists.txt   whisper: remove MSVC warnings pragmas (whisper/3090)     2025-05-07 17:28:36 +03:00