llama.cpp/ggml (last commit: 2025-02-22 09:43:24 +01:00)

Name             Last commit message                                                        Last commit date
cmake            cmake: add ggml find package (#11369)                                      2025-01-26 12:07:48 -04:00
include          ggml-cpu: Add CPU backend support for KleidiAI library (#11390)            2025-02-20 15:06:51 +02:00
src              cuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (#12000)    2025-02-22 09:43:24 +01:00
.gitignore       vulkan : cmake integration (#8119)                                         2024-07-13 18:12:39 +02:00
CMakeLists.txt   ggml-cpu: Add CPU backend support for KleidiAI library (#11390)            2025-02-20 15:06:51 +02:00
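
The cmake entry corresponds to the find-package support added in #11369, which lets an external CMake project locate an installed ggml. Below is a minimal consumer sketch; the exported target name ggml::ggml is an assumption based on common CMake packaging conventions, so check the installed ggml-config.cmake if it differs.

    # Hypothetical downstream CMakeLists.txt; assumes ggml was built and installed
    # with the find-package support from #11369.
    cmake_minimum_required(VERSION 3.14)
    project(ggml_consumer C CXX)

    # Locate the installed ggml package (ggml-config.cmake).
    find_package(ggml REQUIRED)

    add_executable(ggml_consumer main.c)

    # ggml::ggml is an assumed target name; inspect the installed config if linking fails.
    target_link_libraries(ggml_consumer PRIVATE ggml::ggml)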