llama.cpp/ggml (last commit: 2025-05-30 01:28:54 +02:00)
| Name           | Last commit message                                        | Last commit date           |
|----------------|------------------------------------------------------------|----------------------------|
| cmake          | cmake: Factor out CPU architecture detection (#13883)      | 2025-05-29 12:50:25 +02:00 |
| include        | ggml : add ggml_repeat_4d (#13824)                         | 2025-05-27 15:53:55 +02:00 |
| src            | cmake: Guard GGML_CPU_ALL_VARIANTS by architecture (#13890) | 2025-05-30 01:28:54 +02:00 |
| .gitignore     | vulkan : cmake integration (#8119)                         | 2024-07-13 18:12:39 +02:00 |
| CMakeLists.txt | vulkan: use timestamp queries for GGML_VULKAN_PERF (#13817) | 2025-05-27 18:39:07 +02:00 |