llama.cpp/ggml
lhez 71e74a3ac9
opencl: add backend_synchronize (#13939)
* This is not needed for normal use, where the result is read
  using `tensor_get`, but it allows the perf mode of `test-backend-ops`
  to measure performance properly.
2025-06-02 16:54:58 -07:00
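
The synchronize hook matters for timing because backends such as OpenCL may execute graphs asynchronously: without an explicit wait, a benchmark loop would only measure how long it takes to enqueue the work. Below is a minimal sketch of such a measurement loop, not the actual `test-backend-ops` code, assuming the public ggml backend API (`ggml_backend_graph_compute`, `ggml_backend_synchronize`, `ggml_time_us`); the function name `bench_graph` and the parameters `gf` and `n_runs` are illustrative.

```c
#include "ggml.h"
#include "ggml-backend.h"

// Returns the average wall-clock time (in seconds) per graph evaluation.
// The explicit ggml_backend_synchronize() after the loop ensures the timing
// covers the actual execution on the device, not just command submission.
static double bench_graph(ggml_backend_t backend, struct ggml_cgraph * gf, int n_runs) {
    // warm-up run, then drain any pending work before starting the clock
    ggml_backend_graph_compute(backend, gf);
    ggml_backend_synchronize(backend);

    const int64_t t0 = ggml_time_us();
    for (int i = 0; i < n_runs; ++i) {
        ggml_backend_graph_compute(backend, gf);
    }
    // wait for all queued work to finish before reading the clock
    ggml_backend_synchronize(backend);
    const int64_t t1 = ggml_time_us();

    return (t1 - t0) / 1e6 / n_runs;
}
```

In normal inference this synchronization is implicit: reading a result with `ggml_backend_tensor_get` already waits for the backend to finish, which is why the commit notes that the hook is only needed for the perf mode.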
| Name | Last commit message | Last commit date |
|---|---|---|
| cmake | cmake: Factor out CPU architecture detection (#13883) | 2025-05-29 12:50:25 +02:00 |
| include | ggml : remove ggml_graph_import and ggml_graph_export declarations (ggml/1247) | 2025-06-01 13:43:57 +03:00 |
| src | opencl: add backend_synchronize (#13939) | 2025-06-02 16:54:58 -07:00 |
| .gitignore | vulkan : cmake integration (#8119) | 2024-07-13 18:12:39 +02:00 |
| CMakeLists.txt | vulkan: use timestamp queries for GGML_VULKAN_PERF (#13817) | 2025-05-27 18:39:07 +02:00 |