Georgi Gerganov | f3f65429c4 | 2024-06-26 18:33:02 +03:00

llama : reorganize source code + improve CMake (#8006)

* scripts : update sync [no ci]
* files : relocate [no ci]
* ci : disable kompute build [no ci]
* cmake : fixes [no ci]
* server : fix mingw build
  ggml-ci
* cmake : minor [no ci]
* cmake : link math library [no ci]
* cmake : build normal ggml library (not object library) [no ci]
* cmake : fix kompute build
  ggml-ci
* make,cmake : fix LLAMA_CUDA + replace GGML_CDEF_PRIVATE
  ggml-ci
* move public backend headers to the public include directory (#8122)
* move public backend headers to the public include directory
* nix test
* spm : fix metal header
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* scripts : fix sync paths [no ci]
* scripts : sync ggml-blas.h [no ci]
---------
Co-authored-by: slaren <slarengh@gmail.com>

Johannes Gäßler | c8771ab5f8 | 2024-06-26 08:28:02 +02:00

CUDA: fix misaligned shared memory read (#8123)

Johannes Gäßler | 9a590c8226 | 2024-06-24 12:41:23 +02:00

CUDA: optimize MMQ int8 tensor core performance (#8062)

* CUDA: optimize MMQ int8 tensor core performance
* only a single get_mma_tile_x_k function
* simplify code, make functions constexpr

Johannes Gäßler | bdcb8f4222 | 2024-06-11 08:26:07 +02:00

CUDA: int8 tensor cores for MMQ (q4_K, q5_K, q6_K) (#7860)

Johannes Gäßler | 1f0dabda8d | 2024-06-10 11:45:13 +02:00

CUDA: use tensor cores for MMQ (#7676)

* CUDA: int8 tensor cores for MMQ (legacy quants)
* fix out-of-bounds writes
* __builtin_assume -> GGML_CUDA_ASSUME
* fix writeback returning too early