llama.cpp/examples
Last updated: 2023-12-18 20:17:43 +02:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| baby-llama/ | ggml : remove n_dims from ggml_tensor (#4469) | 2023-12-14 16:52:08 +01:00 |
| batched/ | cuda : add batched cuBLAS GEMM for faster attention (#3749) | 2023-10-24 16:48:37 +03:00 |
| batched-bench/ | ggml : add ggml_soft_max_ext (#4256) | 2023-12-01 10:51:24 +02:00 |
| batched.swift/ | swift : fix prompt tokenization logic (#4321) | 2023-12-04 15:43:45 +02:00 |
| beam-search/ | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| benchmark/ | ggml : add ggml_row_size() (fixes llama out of space) (#4461) | 2023-12-14 14:13:33 +02:00 |
| convert-llama2c-to-ggml/ | ggml : remove n_dims from ggml_tensor (#4469) | 2023-12-14 16:52:08 +01:00 |
| embedding/ | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| export-lora/ | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |
| finetune/ | finetune : keep allocs alive until all allocations are done (#4486) | 2023-12-17 16:05:56 +01:00 |
| gguf/ | ggml : remove n_dims from ggml_tensor (#4469) | 2023-12-14 16:52:08 +01:00 |
| infill/ | main : Add ChatML functionality to main example (#4046) | 2023-11-20 14:56:59 +01:00 |
| jeopardy/ | parallel : add option to load external prompt file (#3416) | 2023-10-06 16:16:38 +03:00 |
| llama-bench/ | llama : per-layer KV cache + quantum K cache (#4309) | 2023-12-07 13:03:17 +02:00 |
| llama.swiftui/ | llama.swiftui : add tinyllama 1.1B F16 | 2023-12-18 20:17:43 +02:00 |
| llava/ | ggml : remove n_dims from ggml_tensor (#4469) | 2023-12-14 16:52:08 +01:00 |
| lookahead/ | english : use typos to fix comments and logs (#4354) | 2023-12-12 11:53:36 +02:00 |
| main/ | sampling : custom samplers order (#4285) | 2023-12-05 12:05:51 +02:00 |
| main-cmake-pkg/ | cmake : add missed dependencies (#3763) | 2023-10-24 20:48:45 +03:00 |
| metal/ | sync : ggml (backend v2) (#3912) | 2023-11-13 14:16:23 +02:00 |
| parallel/ | llama : KV cache view API + better KV cache management (#4170) | 2023-11-23 19:07:56 +02:00 |
| perplexity/ | Respect tokenizer.ggml.add_bos_token value when tokenizing (#4040) | 2023-11-16 19:14:37 -07:00 |
| quantize/ | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| quantize-stats/ | llama : per-layer KV cache + quantum K cache (#4309) | 2023-12-07 13:03:17 +02:00 |
| save-load-state/ | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00 |
| server/ | server : disable llm logs if SERVER_VERBOSE is off (#3792) | 2023-12-17 17:02:16 +02:00 |
| simple/ | simple : update error message for KV cache check (#4324) | 2023-12-04 18:04:21 +02:00 |
| speculative/ | english : use typos to fix comments and logs (#4354) | 2023-12-12 11:53:36 +02:00 |
| tokenize/ | tokenize example: Respect normal add BOS token behavior (#4126) | 2023-11-18 14:48:17 -07:00 |
| train-text-from-scratch/ | train : fix #4227 (double free in examples/train-text-from-scratch/train-text-from-scratch.cpp) (#4351) | 2023-12-07 12:25:22 +02:00 |
| alpaca.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | llama : fix session saving/loading (#3400) | 2023-10-03 21:04:01 +03:00 |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | lookahead : add example for lookahead decoding (#4207) | 2023-11-26 20:33:07 +02:00 |
| gpt4all.sh | | |
| json-schema-to-grammar.py | | |
| llama.vim | | |
| llama2-13b.sh | | |
| llama2.sh | | |
| llm.vim | | |
| make-ggml.py | | |
| Miku.sh | | |
| reason-act.sh | | |
| server-llama2-13B.sh | | |
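For orientation, here is a minimal sketch of building and running the `main` example from the listing above, assuming the Makefile layout llama.cpp used at this point in its history (example binaries built at the repository root). The model path is a placeholder; substitute any GGUF model you have converted or quantized locally.

```sh
# From the llama.cpp repository root: build the `main` example binary
# (assumes GNU make and a working C/C++ toolchain).
make main

# Basic text completion with the built binary:
#   -m  path to a local GGUF model (placeholder below)
#   -p  the prompt
#   -n  number of tokens to generate
./main -m ./models/llama-2-7b.Q4_0.gguf \
       -p "Building a website can be done in 10 simple steps:" \
       -n 128
```

The other directories in the table follow the same pattern: each builds to its own binary (e.g. `quantize`, `server`, `perplexity`), and most of the shell scripts listed at the bottom are thin wrappers that invoke these binaries with preset prompts and flags.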