llama.cpp/.github
Latest commit fc1b0d0936 by Rémy O (2025-02-15 09:01:40 +01:00):
vulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)

* vulkan: initial support for IQ1_S and IQ1_M quantizations
* vulkan: define MMV kernels for IQ1 quantizations
* devops: increase timeout of Vulkan tests again
* vulkan: simplify ifdef for init_iq_shmem
Name                      Last commit message                                                   Last commit date
ISSUE_TEMPLATE            github : add cmd line field to bug report (#11090)                    2025-01-06 16:34:49 +01:00
workflows                 vulkan: initial support for IQ1_S and IQ1_M quantizations (#11528)    2025-02-15 09:01:40 +01:00
labeler.yml               ci : add ubuntu cuda build, build with one arch on windows (#10456)   2024-11-26 13:05:07 +01:00
pull_request_template.md  github : minify link [no ci] (revert)                                 2024-12-03 11:21:43 +02:00