llama.cpp/ggml
Prashant Vithule 05e6f5aad0
ggml: aarch64: implement SVE kernels for q2_k_q8_k vector dot (#12064)
* Added SVE Support for Q2_K Quantized Models

* Use 4-space indentation in the switch cases

* Removed comment lines

* Remove the loop; retain the curly braces for better understanding of the code

* Remove the comment line added for the q3_k_q8_k kernel

---------

Co-authored-by: vithulep <p.m.vithule1517@gmail.com>
2025-02-28 09:36:12 +02:00
| Name | Last commit | Date |
|------|-------------|------|
| cmake | cmake: Fix ggml backend dependencies and installation (#11818) | 2025-02-27 09:42:48 +02:00 |
| include | ggml-cpu: Support s390x SIMD Instruction Set (#12019) | 2025-02-22 21:39:24 +00:00 |
| src | ggml: aarch64: implement SVE kernels for q2_k_q8_k vector dot (#12064) | 2025-02-28 09:36:12 +02:00 |
| .gitignore | vulkan : cmake integration (#8119) | 2024-07-13 18:12:39 +02:00 |
| CMakeLists.txt | cmake: Fix ggml backend dependencies and installation (#11818) | 2025-02-27 09:42:48 +02:00 |
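For context, the commit above adds ARM SVE (Scalable Vector Extension) code paths for the q2_K × q8_K dot product in ggml's CPU backend. The sketch below is not the ggml kernel; it is a minimal, hypothetical illustration of the SVE pattern such kernels build on: predicated loads combined with the SDOT instruction (`svdot_s32`), which accumulates groups of four int8 products into int32 lanes. The function name `sve_dot_i8` and the plain int8 inputs are assumptions for illustration only; the real q2_k_q8_k kernel additionally unpacks 2-bit quants and applies per-block scales and minimums.

```c
// Minimal sketch, NOT the ggml q2_k_q8_k kernel: an int8 dot product using
// ARM SVE intrinsics. Requires a CPU with SVE and <arm_sve.h>; build with
// e.g. -march=armv8.2-a+sve.
#include <arm_sve.h>
#include <stdint.h>

int32_t sve_dot_i8(const int8_t *a, const int8_t *b, int n) {
    svint32_t acc = svdup_n_s32(0);              // per-lane int32 accumulators
    int i = 0;
    svbool_t pg = svwhilelt_b8_s32(i, n);        // predicate: lanes where i < n
    while (svptest_any(svptrue_b8(), pg)) {
        svint8_t va = svld1_s8(pg, a + i);       // inactive lanes load as zero
        svint8_t vb = svld1_s8(pg, b + i);
        acc = svdot_s32(acc, va, vb);            // SDOT: 4x int8 products -> int32
        i  += (int) svcntb();                    // advance by vector length in bytes
        pg  = svwhilelt_b8_s32(i, n);
    }
    // svaddv widens the reduction to 64 bits; narrowed here for illustration.
    return (int32_t) svaddv_s32(svptrue_b32(), acc);
}
```

Because the predicate handles the tail, the loop needs no scalar cleanup and adapts to any hardware vector length, which is the main appeal of SVE over fixed-width NEON for these kernels.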