| File | Last commit message | Last commit date |
|------|---------------------|------------------|
| .gitignore | tests : gitignore ggml-common.h | 2024-03-09 14:17:11 +02:00 |
| CMakeLists.txt | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| get-model.cpp | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| get-model.h | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| run-json-schema-to-grammar.mjs | server : revamp chat UI with vuejs and daisyui (#10175) | 2024-11-07 17:31:10 -04:00 |
| test-arg-parser.cpp | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| test-autorelease.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| test-backend-ops.cpp | CUDA: noncont MMVQ + batched bs1 MUL_MAT_ID (#13014) | 2025-04-22 21:27:40 +02:00 |
| test-barrier.cpp | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| test-c.c | Nomic Vulkan backend (#4456) | 2024-01-29 15:50:50 -05:00 |
| test-chat-template.cpp | ci: detach common from the library (#12827) | 2025-04-09 10:11:11 +02:00 |
| test-chat.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-double-float.cpp | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| test-gbnf-validator.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-gguf.cpp | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00 |
| test-grammar-integration.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-grammar-llguidance.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-grammar-parser.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-json-schema-to-grammar.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-llama-grammar.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-log.cpp | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| test-lora-conversion-inference.sh | ci : use -no-cnv in gguf-split tests (#11254) | 2025-01-15 18:28:35 +02:00 |
| test-model-load-cancel.cpp | llama : update llama_model API names (#11063) | 2025-01-06 10:55:18 +02:00 |
| test-opt.cpp | ggml : inttypes.h -> cinttypes (#0) | 2024-11-17 08:30:29 +02:00 |
| test-quantize-fns.cpp | tests : fix test-quantize-fns to init the CPU backend (#12306) | 2025-03-10 14:07:15 +02:00 |
| test-quantize-perf.cpp | ggml : inttypes.h -> cinttypes (#0) | 2024-11-17 08:30:29 +02:00 |
| test-quantize-stats.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-rope.cpp | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00 |
| test-sampling.cpp | sampling: add Top-nσ sampler (#11223) | 2025-02-13 08:45:57 +02:00 |
| test-tokenizer-0.cpp | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| test-tokenizer-0.py | py : logging and flake8 suppression refactoring (#7081) | 2024-05-05 08:07:48 +03:00 |
| test-tokenizer-0.sh | tests : fix test-tokenizer-0.sh | 2024-05-28 15:04:09 +03:00 |
| test-tokenizer-1-bpe.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-tokenizer-1-spm.cpp | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| test-tokenizer-random.py | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |