llama.cpp/ggml/src
Latest commit 08d5986290 by Neo Zhang Jianyu:
[SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035)
* optimize performance by reordering Q4_0 data for Intel GPUs (see the reorder sketch after this commit block)

* detect the hardware type, save the optimization feature, and print it

* correct naming

* optimize the graph once during graph compute, record the optimization status in tensor->extra (sketched below), and make CI pass

* add the environment variable GGML_SYCL_DISABLE_OPT for debugging (usage sketched below)

* use syclex::architecture instead of the custom hardware define (detection sketched below); update the guide for GGML_SYCL_DISABLE_OPT

* add performance data

* move the getrows functions to separate files

* fix global variables
---------

Co-authored-by: arthw <14088817+arthw@users.noreply.github.com>
2025-02-24 22:33:23 +08:00
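A minimal illustration of the reorder idea from the first bullet. ggml's Q4_0 format interleaves one half-precision scale with 16 bytes of packed 4-bit weights per 32-value block; splitting these into two contiguous regions lets GPU work-items load the quantized values with coalesced accesses. The layout below is an assumption for illustration; the actual ggml-sycl reorder may arrange the regions differently.

    // Sketch only: reorder Q4_0 blocks from array-of-structs into
    // struct-of-arrays (all quantized nibbles first, then all scales).
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    constexpr int QK4_0 = 32;            // values per Q4_0 block (ggml constant)

    struct block_q4_0 {                  // ggml's interleaved layout
        uint16_t d;                      // fp16 scale, kept as raw bits in this sketch
        uint8_t  qs[QK4_0 / 2];          // 32 x 4-bit weights, two per byte
    };

    static void reorder_q4_0(const block_q4_0 * src, std::size_t n,
                             std::vector<uint8_t> & qs, std::vector<uint16_t> & d) {
        qs.resize(n * (QK4_0 / 2));
        d.resize(n);
        for (std::size_t i = 0; i < n; ++i) {
            for (int j = 0; j < QK4_0 / 2; ++j) {
                qs[i * (QK4_0 / 2) + j] = src[i].qs[j];  // contiguous weight region
            }
            d[i] = src[i].d;                             // contiguous scale region
        }
    }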
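The "optimize graph once" bullet records per-tensor state so the reorder is not repeated on later compute calls. A self-contained sketch with stand-in types: the real code hangs backend-specific state off ggml_tensor's void * extra field, but the struct and field names here are illustrative.

    #include <cstdio>

    struct tensor {
        void * extra = nullptr;   // stand-in for ggml_tensor's backend extra pointer
    };

    struct sycl_extra {
        bool optimized = false;   // set once the Q4_0 reorder has been applied
    };

    static void optimize_once(tensor & t) {
        if (t.extra == nullptr) {
            t.extra = new sycl_extra{};
        }
        auto * e = static_cast<sycl_extra *>(t.extra);
        if (!e->optimized) {
            // ... reorder the tensor's Q4_0 data for the Intel GPU here ...
            e->optimized = true;  // remembered across subsequent graph computes
        }
    }

    int main() {
        tensor t;
        optimize_once(t);
        optimize_once(t);  // no-op: the recorded status short-circuits the work
        std::printf("optimized: %d\n",
                    static_cast<sycl_extra *>(t.extra)->optimized ? 1 : 0);
        delete static_cast<sycl_extra *>(t.extra);
    }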
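GGML_SYCL_DISABLE_OPT makes it possible to A/B-test the optimization at run time. A sketch of the usual pattern for such a debug switch; the exact value parsing done by ggml-sycl is an assumption here.

    #include <cstdlib>
    #include <cstring>

    // Treat any value other than "0" (e.g. GGML_SYCL_DISABLE_OPT=1) as "disable".
    static bool sycl_opt_disabled() {
        const char * v = std::getenv("GGML_SYCL_DISABLE_OPT");
        return v != nullptr && std::strcmp(v, "0") != 0;
    }

Running with GGML_SYCL_DISABLE_OPT=1 in the environment would then let you compare the unoptimized path against the reordered one.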
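The syclex::architecture bullet refers to the DPC++ extension sycl_ext_oneapi_device_architecture, which exposes the device architecture as an enum so no custom hardware define is needed. A sketch assuming a recent oneAPI compiler; which architectures actually enable the feature is illustrative, not taken from the commit.

    #include <cstdio>
    #include <sycl/sycl.hpp>

    namespace syclex = sycl::ext::oneapi::experimental;

    // Example check only: the real code decides per architecture whether the
    // Q4_0 reorder pays off; the two values tested here are placeholders.
    static bool is_supported_intel_gpu(const sycl::device & dev) {
        const auto arch = dev.get_info<syclex::info::device::architecture>();
        return arch == syclex::architecture::intel_gpu_pvc ||    // Ponte Vecchio
               arch == syclex::architecture::intel_gpu_acm_g10;  // Arc A-series
    }

    int main() {
        sycl::queue q;  // default-selected device
        std::printf("opt feature enabled: %d\n",
                    is_supported_intel_gpu(q.get_device()) ? 1 : 0);
    }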
ggml-blas ggml : add support for dynamic loading of backends (#10469) 2024-11-25 15:13:39 +01:00
ggml-cann llama : add Qwen2VL support + multimodal RoPE (#10361) 2024-12-14 14:43:46 +02:00
ggml-cpu ggml-cpu: Support s390x SIMD Instruction Set (#12019) 2025-02-22 21:39:24 +00:00
ggml-cuda CUDA: add option to compile without FlashAttention (#12025) 2025-02-22 20:44:34 +01:00
ggml-hip CUDA: add option to compile without FlashAttention (#12025) 2025-02-22 20:44:34 +01:00
ggml-kompute llama : add Qwen2VL support + multimodal RoPE (#10361) 2024-12-14 14:43:46 +02:00
ggml-metal metal : fix the crash caused by the lack of residency set support on Intel Macs. (#11904) 2025-02-16 08:50:26 +02:00
ggml-musa CUDA: add option to compile without FlashAttention (#12025) 2025-02-22 20:44:34 +01:00
ggml-opencl opencl: Fix rope and softmax (#11833) 2025-02-14 12:12:23 -07:00
ggml-rpc rpc: fix known RCE in rpc-server (ggml/1103) 2025-02-06 21:22:54 +02:00
ggml-sycl [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) 2025-02-24 22:33:23 +08:00
ggml-vulkan vulkan: implement several ops relevant for ggml_opt (#11769) 2025-02-17 07:55:57 +01:00
CMakeLists.txt ci: use sccache on windows instead of ccache (#11545) 2025-01-31 17:12:40 +00:00
ggml-alloc.c vulkan: use smaller combined allocations to avoid fragmentation (#11551) 2025-02-06 07:02:18 +01:00
ggml-backend-impl.h rpc : early register backend devices (#11262) 2025-01-17 10:57:09 +02:00
ggml-backend-reg.cpp ggml : allow loading backend with env variable (ggml/1059) 2025-01-08 13:40:18 +02:00
ggml-backend.cpp ggml-backend : only offload from host buffers (fix) (#11124) 2025-01-07 16:11:57 +01:00
ggml-common.h CUDA: use arch list for compatibility check (#11775) 2025-02-11 00:17:22 +01:00
ggml-impl.h MUSA: support ARM64, enable dp4a, etc. (#11843) 2025-02-21 09:46:23 +02:00
ggml-opt.cpp ggml-opt: fix data corruption (ggml/1022) 2024-11-21 09:22:02 +02:00
ggml-quants.c ggml : refactor online repacking (#10446) 2024-12-07 14:37:50 +02:00
ggml-quants.h ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.cpp ggml : build backends as libraries (#10256) 2024-11-14 18:04:35 +01:00
ggml-threading.h remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) 2024-12-12 19:02:49 +01:00
ggml.c ggml-cpu: Support s390x SIMD Instruction Set (#12019) 2025-02-22 21:39:24 +00:00
gguf.cpp cmake : add sanitizer flags for llama.cpp (#11279) 2025-01-18 16:18:15 +02:00