apple                          llama : add xcframework build script (#11996)                                                                  2025-03-05 06:30:31 +01:00
build-info.sh                  llama : reorganize source code + improve CMake (#8006)                                                         2024-06-26 18:33:02 +03:00
check-requirements.sh          repo : update links to new url (#11886)                                                                        2025-02-15 16:40:57 +02:00
ci-run.sh                      ci : add model tests + script wrapper (#4586)                                                                  2024-01-26 14:18:00 +02:00
compare-commits.sh             scripts : change build path to "build-bench" for compare-commits.sh (#10836)                                   2024-12-15 18:44:47 +02:00
compare-llama-bench.py         scripts: fix compare-llama-bench commit hash logic (#11891)                                                    2025-02-15 20:23:22 +01:00
debug-test.sh                  scripts : fix spelling typo in messages and comments (#9782)                                                   2024-10-08 09:19:53 +03:00
fetch_server_test_models.py    tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)  2025-03-05 13:05:13 +00:00
gen-authors.sh                 license : update copyright notice + add AUTHORS (#6405)                                                        2024-04-09 09:23:19 +03:00
gen-unicode-data.py            py : type-check all Python scripts with Pyright (#8341)                                                        2024-07-07 15:04:39 -04:00
get-flags.mk                   build : pass all warning flags to nvcc via -Xcompiler (#5570)                                                  2024-02-18 16:21:52 -05:00
get-hellaswag.sh               build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)           2024-06-13 00:41:52 +01:00
get-pg.sh                      scripts : improve get-pg.sh (#4838)                                                                            2024-01-09 19:21:13 +02:00
get-wikitext-2.sh              build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)           2024-06-13 00:41:52 +01:00
get-wikitext-103.sh            build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)           2024-06-13 00:41:52 +01:00
get-winogrande.sh              build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)           2024-06-13 00:41:52 +01:00
get_chat_template.py           scripts: corrected encoding when getting chat template (#11866) (#11907)                                       2025-02-18 10:30:16 +01:00
hf.sh                          scripts : restore hf.sh (#11288)                                                                               2025-01-18 13:18:32 +02:00
install-oneapi.bat             support SYCL backend windows build (#5208)                                                                     2024-01-31 08:08:07 +05:30
qnt-all.sh                     build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)           2024-06-13 00:41:52 +01:00
run-all-perf.sh                scripts : add pipefail                                                                                         2023-08-29 10:50:30 +03:00
run-all-ppl.sh                 build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)           2024-06-13 00:41:52 +01:00
sync-ggml-am.sh                scripts : sync-ggml-am.sh fix                                                                                  2025-03-03 18:18:11 +02:00
sync-ggml.last                 sync : ggml                                                                                                    2025-03-03 18:18:11 +02:00
sync-ggml.sh                   scripts : sync gguf                                                                                            2025-01-14 09:36:58 +02:00
tool_bench.py                  tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)  2025-03-05 13:05:13 +00:00
tool_bench.sh                  tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034)  2025-03-05 13:05:13 +00:00
verify-checksum-models.py      convert.py : add python logging instead of print() (#6511)                                                     2024-05-03 22:36:41 +03:00
xxd.cmake                      build : generate hex dump of server assets during build (#6661)                                                2024-04-21 18:48:53 +01:00