llama.cpp/examples
Name | Last commit | Date
baby-llama | Threadpool: take 2 (#8672) | 2024-08-30 01:20:53 +02:00
batched | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
batched-bench | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
batched.swift | llama : llama_perf + option to disable timings during decode (#9355) | 2024-09-13 09:53:38 +03:00
convert-llama2c-to-ggml | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
cvector-generator | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
deprecation-warning | examples : remove finetune and train-text-from-scratch (#8669) | 2024-07-25 10:39:04 +02:00
embedding | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
eval-callback | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
export-lora | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
gbnf-validator | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00
gen-docs | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
gguf | |
gguf-hash | |
gguf-split | gguf-split : improve --split and --merge logic (#9619) | 2024-10-02 10:21:57 +03:00
gritlm | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
imatrix | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
infill | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
jeopardy | |
llama-bench | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
llama.android | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
llama.swiftui | llama : default sampling changes + greedy update (#9897) | 2024-10-21 09:46:40 +03:00
llava | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
lookahead | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
lookup | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
main | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
main-cmake-pkg | |
parallel | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
passkey | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
perplexity | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
quantize | quantize : improve type name parsing (#9570) | 2024-09-20 20:55:36 +02:00
quantize-stats | ggml : fix BLAS with unsupported types (#9775) | 2024-10-08 14:21:43 +02:00
retrieval | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
rpc | rpc : add backend registry / device interfaces (#9812) | 2024-10-10 20:14:55 +02:00
save-load-state | llama : default sampling changes + greedy update (#9897) | 2024-10-21 09:46:40 +03:00
server | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
simple | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00
speculative | llama : default sampling changes + greedy update (#9897) | 2024-10-21 09:46:40 +03:00
sycl | [SYCL] set context default value to avoid memory issue, update guide (#9476) | 2024-09-18 08:30:31 +08:00
tokenize | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00
base-translate.sh | |
chat-13B.bat | |
chat-13B.sh | |
chat-persistent.sh | |
chat-vicuna.sh | |
chat.sh | |
CMakeLists.txt | examples : remove benchmark (#9704) | 2024-10-02 10:14:44 +03:00
convert_legacy_llama.py | |
json_schema_pydantic_example.py | |
json_schema_to_grammar.py | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00
llama.vim | llama.vim : bump generation time limit to 3s [no ci] | 2024-10-23 17:16:56 +03:00
llm.vim | |
Miku.sh | |
pydantic_models_to_grammar.py | |
pydantic_models_to_grammar_examples.py | |
reason-act.sh | |
regex_to_grammar.py | |
server-llama2-13B.sh | |
server_embd.py | |
ts-type-to-grammar.sh | |