llama.cpp/ggml/src/ggml-cuda
Daniel Bevenius 06943a69f6
ggml : move rope type enum to ggml.h (#8949)
* ggml : move rope type enum to ggml.h

This commit moves the `llama_rope_type` enum from `llama.h` to
`ggml.h` and changes its name to `ggml_rope_type`.

The motivation for this change is to address the TODO in `llama.h` and
use the enum in ggml.

Note: This commit does not change the `mode` parameter to be of type
`enum ggml_rope_type`. The name `mode` and its usage suggest that it
might be more generic and possibly used as a bit field for multiple
flags. Further investigation/discussion may be needed to determine
if `mode` should be restricted to RoPE types.

* squash! ggml : move rope type enum to ggml.h

This commit removes GGML_ROPE_TYPE_NONE and GGML_ROPE_TYPE_GLM from
ggml.h, and adds them back to the llama_rope_type enum.

I've kept the assert for GGML_ROPE_TYPE_GLM as I'm not sure if it is
safe to remove it yet.

* squash! ggml : move rope type enum to ggml.h

This commit removes the enum ggml_rope_type from ggml.h and replaces it
with a define (GGML_ROPE_TYPE_NEOX). This define is used in the code to
check whether the mode is set to GPT-NeoX. The enum llama_rope_type has
also been updated to reflect this change.

* squash! ggml : move rope type enum to ggml.h

This commit applies a suggestion to allow the GGML_ROPE_TYPE_NEOX
macro/define to be passed to the shader compiler.

* squash! ggml : move rope type enum to ggml.h

This commit fixes the editorconfig-checker warnings.

* squash! ggml : move rope type enum to ggml.h

Update comment for ggml_rope function.

* Revert "squash! ggml : move rope type enum to ggml.h"

This reverts commit 6261222bd0dc0efd51f0fb0435ad3f16a5b52fd6.

* squash! ggml : move rope type enum to ggml.h

Add GGML_ROPE_TYPE_NEOX to rope_common.comp.

* remove extra line

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-08-13 21:13:15 +02:00
template-instances CUDA: MMQ code deduplication + iquant support (#8495) 2024-07-20 22:25:26 +02:00
vendors cuda : organize vendor-specific headers into vendors directory (#8746) 2024-07-29 14:56:12 +02:00
acc.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
acc.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
arange.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
arange.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
argsort.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
argsort.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
binbcast.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
binbcast.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
clamp.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
clamp.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
common.cuh cuda : organize vendor-specific headers into vendors directory (#8746) 2024-07-29 14:56:12 +02:00
concat.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
concat.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
conv-transpose-1d.cu feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) 2024-07-08 12:23:00 +03:00
conv-transpose-1d.cuh feat: cuda implementation for ggml_conv_transpose_1d (ggml/854) 2024-07-08 12:23:00 +03:00
convert.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
convert.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
cpy.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
cpy.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
dequantize.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
diagmask.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
diagmask.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
dmmv.cu cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X (#8800) 2024-08-01 15:26:22 +02:00
dmmv.cuh cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X (#8800) 2024-08-01 15:26:22 +02:00
fattn-common.cuh ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
fattn-tile-f16.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
fattn-tile-f16.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-tile-f32.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
fattn-tile-f32.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-vec-f16.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-vec-f32.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn-wmma-f16.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
fattn.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
fattn.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
getrows.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
getrows.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
im2col.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
im2col.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
mma.cuh CUDA: optimize and refactor MMQ (#8416) 2024-07-11 16:47:47 +02:00
mmq.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
mmq.cuh ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
mmvq.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
mmvq.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
norm.cu ggml : add epsilon as a parameter for group_norm (#8818) 2024-08-06 10:26:46 +03:00
norm.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pad.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pad.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pool2d.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
pool2d.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
quantize.cu ggml : reduce hash table reset cost (#8698) 2024-07-27 04:41:55 +02:00
quantize.cuh CUDA: optimize and refactor MMQ (#8416) 2024-07-11 16:47:47 +02:00
rope.cu ggml : move rope type enum to ggml.h (#8949) 2024-08-13 21:13:15 +02:00
rope.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
scale.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
scale.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
softmax.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
softmax.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
sumrows.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
sumrows.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
tsembd.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
tsembd.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
unary.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
unary.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
upscale.cu llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
upscale.cuh llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
vecdotq.cuh CUDA: MMQ code deduplication + iquant support (#8495) 2024-07-20 22:25:26 +02:00