llama.cpp/tools
Latest commit: 0ccc121354 — mtmd : fix the calculation of n_tokens for smolvlm (#13381), by welix (co-authored-by: Taichi Nishimura <Taichi.A.Nishimura@sony.com>), 2025-05-08 15:03:53 +02:00
Name              | Last commit message                                            | Date
------------------|----------------------------------------------------------------|------------------------
batched-bench     | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
cvector-generator | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
export-lora       | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
gguf-split        | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
imatrix           | context : remove logits_all flag (#13284)                      | 2025-05-08 14:26:50 +03:00
llama-bench       | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
main              | context : remove logits_all flag (#13284)                      | 2025-05-08 14:26:50 +03:00
mtmd              | mtmd : fix the calculation of n_tokens for smolvlm (#13381)    | 2025-05-08 15:03:53 +02:00
perplexity        | context : remove logits_all flag (#13284)                      | 2025-05-08 14:26:50 +03:00
quantize          | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
rpc               | rpc : use backend registry, support dl backends (#13304)       | 2025-05-04 21:25:43 +02:00
run               | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
server            | context : allow cache-less context for embeddings (#13108)     | 2025-05-08 14:28:33 +03:00
tokenize          | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
tts               | llama : move end-user examples to tools directory (#13249)     | 2025-05-02 20:27:13 +02:00
CMakeLists.txt    | mtmd : rename llava directory to mtmd (#13311)                 | 2025-05-05 16:02:55 +02:00