llama.cpp/tests

Latest commit: 267c1399f1 by Xuan-Son Nguyen (2025-04-01 23:44:05 +02:00)
common : refactor downloading system, handle mmproj with -hf option (#12694)

* (wip) refactor downloading system [no ci]
* fix all examples
* fix mmproj with -hf
* gemma3: update readme
* only handle mmproj in llava example
* fix multi-shard download
* windows: fix problem with std::min and std::max
* fix 2
.gitignore
CMakeLists.txt
get-model.cpp
get-model.h
run-json-schema-to-grammar.mjs
test-arg-parser.cpp - common : refactor downloading system, handle mmproj with -hf option (#12694) (2025-04-01 23:44:05 +02:00)
test-autorelease.cpp
test-backend-ops.cpp - metal : improve FA + improve MoE (#12612) (2025-03-28 20:21:59 +02:00)
test-barrier.cpp
test-c.c
test-chat-template.cpp - llama-chat : Add Yandex instruct model template support (#12621) (2025-03-30 20:12:03 +02:00)
test-chat.cpp - server: extract <think> tags from qwq outputs (#12297) (2025-03-10 10:59:03 +00:00)
test-double-float.cpp
test-gguf.cpp
test-grammar-integration.cpp
test-grammar-llguidance.cpp - upgrade to llguidance 0.7.10 (#12576) (2025-03-26 11:06:09 -07:00)
test-grammar-parser.cpp
test-json-schema-to-grammar.cpp - tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) (2025-03-05 13:05:13 +00:00)
test-llama-grammar.cpp
test-log.cpp
test-lora-conversion-inference.sh
test-model-load-cancel.cpp
test-opt.cpp
test-quantize-fns.cpp - tests : fix test-quantize-fns to init the CPU backend (#12306) (2025-03-10 14:07:15 +02:00)
test-quantize-perf.cpp
test-rope.cpp
test-sampling.cpp - sampling: add Top-nσ sampler (#11223) (2025-02-13 08:45:57 +02:00)
test-tokenizer-0.cpp
test-tokenizer-0.py
test-tokenizer-0.sh
test-tokenizer-1-bpe.cpp
test-tokenizer-1-spm.cpp
test-tokenizer-random.py