llama.cpp/ggml
Junil Kim f423981ac8
opencl : fix memory allocation size (#12649)
issue:
https://github.com/CodeLinaro/llama.cpp/pull/17#issuecomment-2760611283

This patch ensures the memory allocation size
does not exceed the maximum allocation size of the OpenCL device.
2025-04-01 09:54:34 -07:00
cmake           scripts : update sync + fix cmake merge                          2025-03-27 10:09:29 +02:00
include         metal : improve FA + improve MoE (#12612)                        2025-03-28 20:21:59 +02:00
src             opencl : fix memory allocation size (#12649)                     2025-04-01 09:54:34 -07:00
.gitignore      vulkan : cmake integration (#8119)                               2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add logging for native build options/vars (whisper/2935)  2025-03-30 08:33:31 +03:00