llama.cpp/tools

Latest commit 27ebfcacba by Diego Devesa (2025-05-09 13:02:07 +02:00):
llama : do not crash if there is no CPU backend (#13395)
* llama : do not crash if there is no CPU backend
* add checks to examples
| Name | Last commit | Date |
|------|-------------|------|
| batched-bench | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| cvector-generator | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| export-lora | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| gguf-split | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| imatrix | imatrix : Add --parse-special for enabling parsing of special tokens in imatrix calculation (#13389) | 2025-05-09 11:53:58 +02:00 |
| llama-bench | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| main | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| mtmd | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| perplexity | context : remove logits_all flag (#13284) | 2025-05-08 14:26:50 +03:00 |
| quantize | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| rpc | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| run | llama-run: add support for downloading models from ModelScope (#13370) | 2025-05-09 10:25:50 +01:00 |
| server | server : (webui) rename has_multimodal --> modalities (#13393) | 2025-05-09 09:06:37 +02:00 |
| tokenize | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| tts | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| CMakeLists.txt | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00 |