llama.cpp/ggml
Latest commit: ece9745bb8 by Akarshan Biswas, 2025-03-03 11:07:22 +01:00
SYCL: Move CPY kernels to a separate file and add a few missing kernels (#12133)
* SYCL: refactor and move cpy kernels to a separate file
* Add a few missing cpy kernels
* Refactor and add debug logs
Name            Last commit                                                                         Date
cmake/          cmake: Fix ggml backend dependencies and installation (#11818)                      2025-02-27 09:42:48 +02:00
include/        ggml : upgrade init_tensor API to return a ggml_status (#11854)                     2025-02-28 14:41:47 +01:00
src/            SYCL: Move CPY kernels to a separate file and add a few missing kernels (#12133)    2025-03-03 11:07:22 +01:00
.gitignore      vulkan : cmake integration (#8119)                                                   2024-07-13 18:12:39 +02:00
CMakeLists.txt  CUDA: compress mode option and default to size (#12029)                             2025-03-01 12:57:22 +01:00