| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched-bench | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| cvector-generator | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| export-lora | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| gguf-split | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| imatrix | imatrix : Add --parse-special for enabling parsing of special tokens in imatrix calculation (#13389) | 2025-05-09 11:53:58 +02:00 |
| llama-bench | Add --no-op-offload to improve -ot pp perf in MoE models like llama4 400B (#13386) | 2025-05-11 14:18:39 +02:00 |
| main | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| mtmd | mtmd : Use RMS norm for InternVL 3 38B and 78B mmproj (#13459) | 2025-05-12 00:39:06 +02:00 |
| perplexity | context : remove logits_all flag (#13284) | 2025-05-08 14:26:50 +03:00 |
| quantize | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| rpc | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00 |
| run | llama-run: add support for downloading models from ModelScope (#13370) | 2025-05-09 10:25:50 +01:00 |
| server | tools : fix uninitialized llama_batch in server (#13436) | 2025-05-11 17:08:26 +02:00 |
| tokenize | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| tts | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| CMakeLists.txt | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00 |