llama.cpp/ggml
Latest commit: 6eba72b71c by Radoslav Gerganov (2025-06-01 13:43:57 +03:00)

    ggml : install dynamic backends (ggml/1240)

    Make sure dynamic backends are installed in $CMAKE_INSTALL_BINDIR.
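The commit body above is a one-line note about the CMake install step. Below is a minimal sketch of what such an install rule can look like, assuming the dynamic backends are built as separate shared-library targets; the target names in the loop (ggml-cpu, ggml-cuda, ggml-vulkan) are illustrative assumptions, not a statement of the actual ggml build scripts. CMAKE_INSTALL_BINDIR is provided by the standard GNUInstallDirs module.

    # Sketch only: install dynamically loaded backend modules next to the
    # executables so they can be discovered at runtime. The target names in
    # the ITEMS list are assumptions for illustration, not the exact ggml targets.
    include(GNUInstallDirs)

    foreach(backend IN ITEMS ggml-cpu ggml-cuda ggml-vulkan)
        if(TARGET ${backend})
            install(TARGETS ${backend}
                    LIBRARY DESTINATION ${CMAKE_INSTALL_BINDIR}   # .so / .dylib modules
                    RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR})  # .dll on Windows
        endif()
    endforeach()

With a rule like this, cmake --install places the backend modules under ${CMAKE_INSTALL_PREFIX}/bin alongside the executables, which is where a loader that scans the binary directory at runtime would find them.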
Name            Last commit date            Last commit message
cmake           2025-05-29 12:50:25 +02:00  cmake: Factor out CPU architecture detection (#13883)
include         2025-05-31 15:39:19 -07:00  threading: support for GGML_SCHED_PRIO_LOW, update thread info on Windows to avoid throttling (#12995)
src             2025-06-01 13:43:57 +03:00  ggml : install dynamic backends (ggml/1240)
.gitignore      2024-07-13 18:12:39 +02:00  vulkan : cmake integration (#8119)
CMakeLists.txt  2025-05-27 18:39:07 +02:00  vulkan: use timestamp queries for GGML_VULKAN_PERF (#13817)