llama.cpp/ggml/src/ggml-sycl (latest commit: 2025-03-25 18:40:18 +08:00)
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| dpct | SYCL: Introducing memory host pool (#11251) | 2025-01-19 21:33:34 +08:00 |
| backend.hpp | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |
| CMakeLists.txt | sycl: cleanup oneDNN related code (#12097) | 2025-03-21 10:15:56 +08:00 |
| common.cpp | [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) | 2025-02-24 22:33:23 +08:00 |
| common.hpp | sycl: cleanup oneDNN related code (#12097) | 2025-03-21 10:15:56 +08:00 |
| concat.cpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| concat.hpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| conv.cpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| conv.hpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| convert.cpp | fixed compilation warnings in ggml-sycl (#12424) | 2025-03-18 08:51:25 +08:00 |
| convert.hpp | [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) | 2025-02-24 22:33:23 +08:00 |
| cpy.cpp | SYCL: Move CPY kernels to a separate file and add few missing kernels (#12133) | 2025-03-03 11:07:22 +01:00 |
| cpy.hpp | SYCL: Move CPY kernels to a separate file and add few missing kernels (#12133) | 2025-03-03 11:07:22 +01:00 |
| dequantize.hpp | [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) | 2025-02-24 22:33:23 +08:00 |
| dmmv.cpp | fixed compilation warnings in ggml-sycl (#12424) | 2025-03-18 08:51:25 +08:00 |
| dmmv.hpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| element_wise.cpp | fixed compilation warnings in ggml-sycl (#12424) | 2025-03-18 08:51:25 +08:00 |
| element_wise.hpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| gemm.hpp | sycl: cleanup oneDNN related code (#12097) | 2025-03-21 10:15:56 +08:00 |
| getrows.cpp | fixed compilation warnings in ggml-sycl (#12424) | 2025-03-18 08:51:25 +08:00 |
| getrows.hpp | [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) | 2025-02-24 22:33:23 +08:00 |
| ggml-sycl.cpp | SYCL: disable Q4_0 reorder optimization (#12560) | 2025-03-25 18:40:18 +08:00 |
| gla.cpp | SYCL: Add gated linear attention kernel (#11175) | 2025-01-15 11:20:17 +08:00 |
| gla.hpp | SYCL: Add gated linear attention kernel (#11175) | 2025-01-15 11:20:17 +08:00 |
| im2col.cpp | SYCL: Reduce most of the compiler warnings (#10748) | 2024-12-13 12:12:15 +05:30 |
| im2col.hpp | [SYCL] Fix SYCL im2col and convert Overflow with Large Dims (#9052) | 2024-08-20 23:06:51 +08:00 |
| mmq.cpp | fixed compilation warnings in ggml-sycl (#12424) | 2025-03-18 08:51:25 +08:00 |
| mmq.hpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| mmvq.cpp | fixed compilation warnings in ggml-sycl (#12424) | 2025-03-18 08:51:25 +08:00 |
| mmvq.hpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00 |
| norm.cpp | fixed compilation warnings in ggml-sycl (#12424) | 2025-03-18 08:51:25 +08:00 |
| norm.hpp | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |
| outprod.cpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| outprod.hpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| presets.hpp | Optimize RWKV6 Operator Naming and Implement Multi-core CPU/ SYCL Acceleration (#10133) | 2024-11-07 15:19:10 +08:00 |
| rope.cpp | SYCL: Reduce most of the compiler warnings (#10748) | 2024-12-13 12:12:15 +05:30 |
| rope.hpp | [SYCL] Update SYCL-Rope op and Refactor (#8157) | 2024-07-01 19:39:06 +08:00 |
| softmax.cpp | fixed compilation warnings in ggml-sycl (#12424) | 2025-03-18 08:51:25 +08:00 |
| softmax.hpp | SYCL : SOFTMAX F16 mask support and other fixes (#11261) | 2025-01-28 09:56:58 +00:00 |
| sycl_hw.cpp | [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) | 2025-02-24 22:33:23 +08:00 |
| sycl_hw.hpp | [SYCL] Optimize mul_mat for Q4_0 on Intel GPU (#12035) | 2025-02-24 22:33:23 +08:00 |
| tsembd.cpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| tsembd.hpp | SYCL: Refactor ggml_sycl_compute_forward (#11121) | 2025-01-10 08:13:03 +08:00 |
| vecdotq.hpp | sycl: Use syclcompat::dp4a (#10267) | 2024-11-15 11:09:12 +08:00 |
| wkv.cpp | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |
| wkv.hpp | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |