llama.cpp/ggml/src

Latest commit: afbb4c1322 by matteo, 2024-08-01 23:28:28 +02:00
ggml-cuda: Adding support for unified memory (#8035)

* Adding support for unified memory
* adding again the documentation about unified memory
* refactoring: Moved the unified memory code in the correct location.
* Fixed compilation error when using hipblas
* cleaning up the documentation
* Updating the documentation
* adding one more case where the PR should not be enabled

Co-authored-by: matteo serva <matteo.serva@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Name | Last commit | Date
ggml-cann | cann: support q8_0 for Ascend backend (#8805) | 2024-08-01 10:39:05 +08:00
ggml-cuda | cuda : fix dmmv cols requirement to 2*GGML_CUDA_DMMV_X (#8800) | 2024-08-01 15:26:22 +02:00
ggml-sycl | [SYCL] Add TIMESTEP_EMBEDDING OP (#8707) | 2024-07-30 14:56:51 +08:00
kompute@4565194ed7 | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
kompute-shaders | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
llamafile | ggml : move sgemm sources to llamafile subfolder (#8394) | 2024-07-10 15:23:29 +03:00
vulkan-shaders | chore : Fix vulkan related compiler warnings, add help text, improve CLI options (#8477) | 2024-07-28 09:52:42 +02:00
CMakeLists.txt | cann: update cmake (#8765) | 2024-07-30 12:37:35 +02:00
ggml-aarch64.c | ggml : fix build on Windows with Snapdragon X (#8531) | 2024-07-25 19:01:00 +03:00
ggml-aarch64.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00
ggml-alloc.c | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00
ggml-backend-impl.h | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
ggml-backend.c | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00
ggml-blas.cpp | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00
ggml-cann.cpp | cann: Fix Multi-NPU execution error (#8710) | 2024-07-27 16:36:44 +08:00
ggml-common.h | feat: Support Moore Threads GPU (#8383) | 2024-07-28 01:41:25 +02:00
ggml-cuda.cu | ggml-cuda: Adding support for unified memory (#8035) | 2024-08-01 23:28:28 +02:00
ggml-impl.h | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00
ggml-kompute.cpp | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00
ggml-metal.m | ggml : reduce hash table reset cost (#8698) | 2024-07-27 04:41:55 +02:00
ggml-metal.metal | ggml : fix quant dot product with odd number of blocks (#8549) | 2024-07-19 17:17:27 +02:00
ggml-quants.c | ggml: bugfix: fix the inactive elements is agnostic for risc-v vector (#8748) | 2024-07-29 18:38:34 +02:00
ggml-quants.h | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00
ggml-rpc.cpp | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
ggml-sycl.cpp | [SYCL] Add TIMESTEP_EMBEDDING OP (#8707) | 2024-07-30 14:56:51 +08:00
ggml-vulkan.cpp | vulkan : initialize vk_buffer_struct members to VK_NULL_HANDLE (ggml/893) | 2024-07-27 17:43:44 +03:00
ggml.c | Build: Only include execinfo.h on linux systems that support it (#8783) | 2024-08-01 18:53:46 +02:00