llama.cpp/ggml (last updated 2025-03-02 22:11:00 +01:00)
Name            Last commit message                                                      Last commit date
cmake           cmake: Fix ggml backend dependencies and installation (#11818)           2025-02-27 09:42:48 +02:00
include         ggml : upgrade init_tensor API to return a ggml_status (#11854)          2025-02-28 14:41:47 +01:00
src             ggml-backend : keep paths in native string type when possible (#12144)   2025-03-02 22:11:00 +01:00
.gitignore      vulkan : cmake integration (#8119)                                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  CUDA: compress mode option and default to size (#12029)                  2025-03-01 12:57:22 +01:00