llama.cpp/ggml
cmdr2 87abb7e903 cuda/cpu: Increase support for fp16 unary operations (ggml/1125)
* Support fp16 unary operations in the CUDA backend

* cpu: increase fp16 support for unary operators in the CPU backend

* cuda: increase fp16 support for unary operators in the CUDA backend

* Add test cases for fp16 unary operators

* metal: update supports_op for unary operators that don't support fp16, to prevent test-backend-ops from failing

* metal: address PR comments on unary op support following the fp16 unary tests
2025-03-03 18:18:11 +02:00
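
The bullets above describe widening fp16 coverage for unary operators. As a rough illustration of the common pattern for such kernels (load half, compute in float, store half), here is a minimal standalone CUDA sketch; it is not ggml's actual kernel, and the SiLU op, names, and launch configuration are illustrative assumptions:

    // Illustrative fp16 unary kernel: read __half, do the math in fp32
    // (expf has no half overload), write the result back as __half.
    // NOT ggml's implementation; a minimal sketch of the general pattern.
    #include <cuda_fp16.h>
    #include <cstdio>

    __device__ __forceinline__ float op_silu(float x) {
        return x / (1.0f + expf(-x)); // SiLU as an example unary op
    }

    __global__ void unary_silu_f16(const __half * src, __half * dst, int n) {
        const int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            dst[i] = __float2half(op_silu(__half2float(src[i])));
        }
    }

    int main() {
        const int n = 1024;
        __half *src, *dst;
        cudaMallocManaged(&src, n * sizeof(__half));
        cudaMallocManaged(&dst, n * sizeof(__half));
        for (int i = 0; i < n; ++i) src[i] = __float2half(0.5f);
        unary_silu_f16<<<(n + 255) / 256, 256>>>(src, dst, n);
        cudaDeviceSynchronize();
        printf("silu(0.5) ~= %f\n", __half2float(dst[0])); // expect ~0.3112
        cudaFree(src);
        cudaFree(dst);
        return 0;
    }

Computing in fp32 and converting at the boundaries keeps a single op implementation for both f32 and f16 tensors, at the cost of two conversions per element.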
cmake cmake: Fix ggml backend dependencies and installation (#11818) 2025-02-27 09:42:48 +02:00
include ggml : upgrade init_tensor API to return a ggml_status (#11854) 2025-02-28 14:41:47 +01:00
src cuda/cpu: Increase support for fp16 unary operations (ggml/1125) 2025-03-03 18:18:11 +02:00
.gitignore vulkan : cmake integration (#8119) 2024-07-13 18:12:39 +02:00
CMakeLists.txt Told cmake to install ggml-cpp.h as a public header file. (ggml/1126) 2025-03-03 18:18:11 +02:00
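
One aside on the init_tensor change listed above for include/: the hook now returns an enum ggml_status rather than void, so callers can detect failures instead of assuming success. A minimal, self-contained sketch of the checking pattern follows; the enum values mirror ggml.h, but init_tensor_stub is a hypothetical stand-in, not the real backend hook:

    // Sketch of the "init_tensor returns ggml_status" calling convention.
    // The enum mirrors ggml.h; init_tensor_stub is a hypothetical stand-in.
    #include <cstdio>

    enum ggml_status {
        GGML_STATUS_ALLOC_FAILED = -2,
        GGML_STATUS_FAILED       = -1,
        GGML_STATUS_SUCCESS      =  0,
        GGML_STATUS_ABORTED      =  1,
    };

    static ggml_status init_tensor_stub(bool ok) {
        return ok ? GGML_STATUS_SUCCESS : GGML_STATUS_ALLOC_FAILED;
    }

    int main() {
        const ggml_status st = init_tensor_stub(false);
        if (st != GGML_STATUS_SUCCESS) {
            std::fprintf(stderr, "init_tensor failed, status %d\n", (int) st);
            return 1;
        }
        return 0;
    }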