llama.cpp/ggml/src
amritahs-ibm 13731766db
llamafile : ppc64le GEMV forwarding for FP32. (#12594)
This patch enables usage of MMA when one of the
dimensions of the matrix (i.e. either M or N) is 1.
This is useful for token generation, where N < 2.

The concept of 'GEMV forwarding' is used: when one
of the matrices has a single row/column, its elements
are broadcast directly instead of being prepacked by
the packing routine.
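
In essence, the dispatch looks like the minimal C sketch below. All
names here (`sgemm_dispatch`, `gemv_f32`, `pack_b_f32`, `gemm_packed_f32`)
are hypothetical stand-ins rather than the actual llamafile/ggml entry
points, matrices are assumed row-major, and the naive loops stand in
for the vectorized POWER10 MMA kernels:

```c
/* Minimal sketch of GEMV forwarding: when C = A*B has M == 1 or N == 1
 * (typical for token generation, where N < 2), the product degenerates
 * to a matrix-vector operation, so we forward to a GEMV kernel and skip
 * the prepacking step that the tiled MMA GEMM path requires. */
#include <stddef.h>
#include <string.h>

/* y = A (M x K, row-major) * x (K) */
static void gemv_f32(size_t M, size_t K, const float *A,
                     const float *x, float *y) {
    for (size_t i = 0; i < M; ++i) {
        float acc = 0.0f;
        for (size_t k = 0; k < K; ++k) acc += A[i*K + k] * x[k];
        y[i] = acc;
    }
}

/* Stand-in for the packing routine: copy B (K x N) into the layout the
 * GEMM kernel expects. A real MMA path tiles/interleaves for the MMA
 * accumulators; a plain copy keeps the sketch simple. */
static void pack_b_f32(size_t K, size_t N, const float *B, float *Bp) {
    memcpy(Bp, B, K * N * sizeof(float));
}

/* Stand-in for the packed GEMM kernel: C (M x N) = A (M x K) * Bp. */
static void gemm_packed_f32(size_t M, size_t N, size_t K, const float *A,
                            const float *Bp, float *C) {
    for (size_t i = 0; i < M; ++i)
        for (size_t j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (size_t k = 0; k < K; ++k) acc += A[i*K + k] * Bp[k*N + j];
            C[i*N + j] = acc;
        }
}

void sgemm_dispatch(size_t M, size_t N, size_t K, const float *A,
                    const float *B, float *C, float *scratch) {
    if (N == 1) {
        /* B is a single column (a plain K-vector): C = A*b is a GEMV.
         * Forward directly; no prepacking of B is needed. */
        gemv_f32(M, K, A, B, C);
        return;
    }
    if (M == 1) {
        /* A is a single row: C[j] = sum_k a[k] * B[k][j]. Broadcast each
         * a[k] across row k of B instead of packing/transposing B. */
        for (size_t j = 0; j < N; ++j) C[j] = 0.0f;
        for (size_t k = 0; k < K; ++k) {
            const float a_k = A[k];
            for (size_t j = 0; j < N; ++j) C[j] += a_k * B[k*N + j];
        }
        return;
    }
    /* General case: prepack B, then run the (MMA) GEMM kernel. */
    pack_b_f32(K, N, B, scratch);
    gemm_packed_f32(M, N, K, A, scratch, C);
}
```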

This change yields a 5%-15% improvement in total
speed (i.e. all tokens / total time) across various
batch sizes, compared with the corresponding dot
product implementation.

The patch was tested with FP32 models of Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf on an IBM POWER10 machine.

Signed-off-by: Amrita H S <amritahs@linux.vnet.ibm.com>
2025-03-28 09:43:22 +02:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ggml-blas | ggml : add support for dynamic loading of backends (#10469) | 2024-11-25 15:13:39 +01:00 |
| ggml-cann | [CANN]MUL_MAT optimization (#12382) | 2025-03-15 09:31:08 +08:00 |
| ggml-cpu | llamafile : ppc64le GEMV forwarding for FP32. (#12594) | 2025-03-28 09:43:22 +02:00 |
| ggml-cuda | HIP: Add support for RDNA4 targets (#12372) | 2025-03-26 23:46:30 +01:00 |
| ggml-hip | HIP: implement FlashAttention via rocWMMA for CDNA and RDNA3+ (#12032) | 2025-03-03 22:10:54 +01:00 |
| ggml-kompute | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00 |
| ggml-metal | metal : refactor mat-vec code (#12569) | 2025-03-26 21:38:38 +02:00 |
| ggml-musa | cuda : enable CUDA Graph on CUDA Toolkit < 12.x (#12394) | 2025-03-17 20:25:13 +02:00 |
| ggml-opencl | opencl: add multi and vision rope, gelu_quick and im2col (#12600) | 2025-03-27 08:08:08 -07:00 |
| ggml-rpc | rpc : send hash when tensor data is above some fixed threshold (#12496) | 2025-03-28 08:18:04 +02:00 |
| ggml-sycl | SYCL: implement memset ggml backend buffer interface (#12580) | 2025-03-27 09:46:00 +08:00 |
| ggml-vulkan | vulkan: fix mul_mat_vec failure in backend tests (#12529) | 2025-03-24 07:56:17 +01:00 |
| CMakeLists.txt | [SYCL] Fix build on Windows when ccache enabled (#9954) (#9976) | 2025-03-21 14:58:47 +08:00 |
| ggml-alloc.c | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00 |
| ggml-backend-impl.h | ggml : upgrade init_tensor API to return a ggml_status (#11854) | 2025-02-28 14:41:47 +01:00 |
| ggml-backend-reg.cpp | ggml-backend : fix backend search path (#12330) | 2025-03-11 14:25:17 +01:00 |
| ggml-backend.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| ggml-common.h | CUDA: use arch list for compatibility check (#11775) | 2025-02-11 00:17:22 +01:00 |
| ggml-impl.h | ggml : riscv: add 128-bit RVV support (#12530) | 2025-03-27 08:38:34 +02:00 |
| ggml-opt.cpp | ggml-opt: fix data corruption (ggml/1022) | 2024-11-21 09:22:02 +02:00 |
| ggml-quants.c | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00 |
| ggml-quants.h | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.cpp | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| ggml-threading.h | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| ggml.c | llama: Add support for RWKV v7 architecture (#12412) | 2025-03-18 07:27:50 +08:00 |
| gguf.cpp | cmake : add sanitizer flags for llama.cpp (#11279) | 2025-01-18 16:18:15 +02:00 |