llama.cpp/ggml/src/ggml-rpc

Latest commit ab6ab8f809 by Radoslav Gerganov:
rpc : send hash when tensor data is above some fixed threshold (#12496)
* rpc : send hash when tensor data is above some fixed threshold
  ref #10095
* rpc : put cache under $HOME/.cache/llama.cpp
* try to fix win32 build
* another try to fix win32 build
* remove llama as dependency

Committed 2025-03-28 08:18:04 +02:00
CMakeLists.txt  |  ggml : add support for dynamic loading of backends (#10469)  |  2024-11-25 15:13:39 +01:00
ggml-rpc.cpp    |  rpc : send hash when tensor data is above some fixed threshold (#12496)  |  2025-03-28 08:18:04 +02:00