llama.cpp/examples

Latest commit 6c59567689 by Xuan Son Nguyen (2024-11-28 19:17:49 +01:00):

server : (tests) don't use thread for capturing stdout/stderr, bump openai client library (#10568)

* server : (tests) don't use thread for capturing stdout/stderr
* test: bump openai to 1.55.2
* bump openai to 1.55.3
| Directory | Latest commit | Date |
|---|---|---|
| batched | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| batched-bench | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00 |
| batched.swift | | |
| convert-llama2c-to-ggml | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| cvector-generator | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00 |
| deprecation-warning | | |
| embedding | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| eval-callback | ggml : add support for dynamic loading of backends (#10469) | 2024-11-25 15:13:39 +01:00 |
| export-lora | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| gbnf-validator | | |
| gen-docs | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| gguf | | |
| gguf-hash | llama : disable warnings for 3rd party sha1 dependency (#10527) | 2024-11-26 21:01:47 +01:00 |
| gguf-split | | |
| gritlm | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| imatrix | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00 |
| infill | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| jeopardy | | |
| llama-bench | ggml : add support for dynamic loading of backends (#10469) | 2024-11-25 15:13:39 +01:00 |
| llama.android | llama : remove all_pos_0, all_pos_1, all_seq_id from llama_batch (#9745) | 2024-10-18 23:18:01 +02:00 |
| llama.swiftui | llama : default sampling changes + greedy update (#9897) | 2024-10-21 09:46:40 +03:00 |
| llava | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| lookahead | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| lookup | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| main | ggml : add support for dynamic loading of backends (#10469) | 2024-11-25 15:13:39 +01:00 |
| main-cmake-pkg | | |
| parallel | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| passkey | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| perplexity | llama/ex: remove --logdir argument (#10339) | 2024-11-16 23:00:41 +01:00 |
| quantize | | |
| quantize-stats | ggml : build backends as libraries (#10256) | 2024-11-14 18:04:35 +01:00 |
| retrieval | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| rpc | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| run | Introduce llama-run (#10291) | 2024-11-25 22:56:24 +01:00 |
| save-load-state | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| server | server : (tests) don't use thread for capturing stdout/stderr, bump openai client library (#10568) | 2024-11-28 19:17:49 +01:00 |
| simple | docs: fix outdated usage of llama-simple (#10565) | 2024-11-28 16:03:11 +01:00 |
| simple-chat | ggml : add support for dynamic loading of backends (#10469) | 2024-11-25 15:13:39 +01:00 |
| speculative | llama : accept a list of devices to use to offload a model (#10497) | 2024-11-25 19:30:06 +01:00 |
| speculative-simple | cmake : enable warnings in llama (#10474) | 2024-11-26 14:18:08 +02:00 |
| sycl | | |
| tokenize | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| File | Latest commit | Date |
|---|---|---|
| base-translate.sh | | |
| chat-13B.bat | | |
| chat-13B.sh | | |
| chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00 |
| chat-vicuna.sh | | |
| chat.sh | | |
| CMakeLists.txt | cmake : enable warnings in llama (#10474) | 2024-11-26 14:18:08 +02:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | | |
| json_schema_to_grammar.py | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00 |
| llama.vim | llama.vim : bump generation time limit to 3s [no ci] | 2024-10-23 17:16:56 +03:00 |
| llm.vim | | |
| Miku.sh | | |
| pydantic_models_to_grammar.py | | |
| pydantic_models_to_grammar_examples.py | | |
| reason-act.sh | | |
| regex_to_grammar.py | | |
| server-llama2-13B.sh | | |
| server_embd.py | | |
| ts-type-to-grammar.sh | | |