llama.cpp/ggml
Georgi Gerganov f0995d28ce
metal : use FA-vec kernel up to batch size 20 (#13496)
* batched-bench : fix pp batch contents

* metal : optimize multi-sequence FA vec kernel

ggml-ci

* metal : use FA-vec kernel up to batch size 20

ggml-ci
2025-05-13 18:04:39 +03:00
cmake           scripts : update sync + fix cmake merge                 2025-03-27 10:09:29 +02:00
include         llama/ggml: add LLM training support (#10544)           2025-05-12 14:44:49 +02:00
src             metal : use FA-vec kernel up to batch size 20 (#13496)  2025-05-13 18:04:39 +03:00
.gitignore      vulkan : cmake integration (#8119)                      2024-07-13 18:12:39 +02:00
CMakeLists.txt  whisper: remove MSVC warnings pragmas (whisper/3090)    2025-05-07 17:28:36 +03:00