llama.cpp/ggml/src/ggml-metal
cmdr2 87abb7e903 cuda/cpu: Increase support for fp16 unary operations (ggml/1125)
* Support fp16 unary operations in the CUDA backend

* cpu: increase fp16 support for unary operators in the CPU backend

* cuda: increase fp16 support for unary operators in the CUDA backend

* Add test cases for fp16 unary operators

* metal: update supports_op for unary operators that don't support fp16, to prevent test-backend-ops from failing

* metal: address PR review comments on unary op support after the fp16 unary tests
2025-03-03 18:18:11 +02:00
CMakeLists.txt       ggml : do not install metal source when embed library (ggml/1054)             2025-01-04 16:09:53 +02:00
ggml-metal-impl.h    ggml: add GGML_SET Metal kernel + i32 CPU kernel (ggml/1037)                  2024-12-05 13:27:33 +02:00
ggml-metal.m         cuda/cpu: Increase support for fp16 unary operations (ggml/1125)              2025-03-03 18:18:11 +02:00
ggml-metal.metal     metal : copy kernels for quant to F32/F16 conversions (#12017)                2025-02-25 11:27:58 +02:00