llama.cpp/tools
Olivier Chafik f5cd27b71d
server: streaming of tool calls and thoughts when --jinja is on (#12379)
* add common_json w/ support for truncated json healing (see the healing sketch after this list)

* add common_chat_msg_diff

* partial common_chat_parse

* refactor parser w/ optionals

* server: wire chat diffs in stream mode (see the diff sketch after this list)

* fix trigger of thinking models (must happen after thoughts are closed; see the trigger sketch after this list)

* fix functionary v3.2 raw python!

* rename: common_chat_syntax (now contains format)

* rm common_regex.at_start

* don't return empty <think></think>

* accommodate yet another deepseek r1 distill fantasy syntax (`<|tool▁calls|>`)

* fix QwQ 32B tool call parsing after thoughts (hermes2)

* better logs for grammar triggers

* consume spaces after parse_json_tool_calls

* fix required tool calls w/ thinking models that have pre-opened thinking tags

* fix thinking model's initial trigger + test qwq's template

* run most test_tool_call tests in stream + non-stream modes

* make functionary v3.2 parsing more strict (differentiate first match from others)

* send final diff from server, to close off raw python arguments

* support partial content streaming in Generic mode

* tool-call: allow content prelude before hermes2 tool calls (for Qwen2.5)

* Update function-calling.md

* Update tool_bench.py

* chat-parser: remove input from exception (llm output may contain PII)
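
Below is a minimal sketch of the truncated-JSON-healing idea, assuming only that a streaming model's tool-call arguments arrive as an incomplete JSON string; the function name and behavior are illustrative, not the actual common_json API:

```cpp
#include <string>
#include <vector>

// Close any open strings, objects and arrays so that a strict JSON parser can
// consume the partial output of a streaming model. Edge cases such as a
// dangling escape ("...\") or a truncated literal ("tru") are not handled here.
static std::string heal_truncated_json(const std::string & partial) {
    std::vector<char> closers; // closing chars still owed, innermost last
    bool in_string = false;
    bool escaped   = false;

    for (char c : partial) {
        if (in_string) {
            if (escaped)        { escaped = false;   }
            else if (c == '\\') { escaped = true;    }
            else if (c == '"')  { in_string = false; }
            continue;
        }
        switch (c) {
            case '"': in_string = true;       break;
            case '{': closers.push_back('}'); break;
            case '[': closers.push_back(']'); break;
            case '}':
            case ']':
                if (!closers.empty()) { closers.pop_back(); }
                break;
            default: break;
        }
    }

    std::string healed = partial;
    if (in_string) {
        healed += '"'; // close the dangling string
    }
    for (auto it = closers.rbegin(); it != closers.rend(); ++it) {
        healed += *it; // close nested containers, innermost first
    }
    return healed;
}

// heal_truncated_json(R"({"name": "get_weather", "arguments": {"city": "Par)")
//   -> {"name": "get_weather", "arguments": {"city": "Par"}}
```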
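
A hypothetical sketch of the chat-diff idea behind stream mode: re-parse the accumulated output on each token and emit only what grew since the previous parse. The struct and field names below are illustrative, not the actual common_chat_msg_diff API:

```cpp
#include <string>

// A partially-parsed assistant message: fields are assumed to only grow as
// more tokens arrive (tool_args holds the raw, unhealed argument text).
struct partial_msg {
    std::string reasoning; // text inside the <think>...</think> block so far
    std::string content;   // regular assistant content so far
    std::string tool_args; // arguments of the current tool call so far
};

// What changed between two consecutive parses, i.e. the streamed delta.
struct msg_delta {
    std::string reasoning_delta;
    std::string content_delta;
    std::string tool_args_delta;
};

static std::string suffix_of(const std::string & prev, const std::string & cur) {
    if (cur.compare(0, prev.size(), prev) != 0) {
        return cur; // new parse does not extend the old one: resend the field
    }
    return cur.substr(prev.size());
}

static msg_delta compute_delta(const partial_msg & prev, const partial_msg & cur) {
    return {
        suffix_of(prev.reasoning, cur.reasoning),
        suffix_of(prev.content,   cur.content),
        suffix_of(prev.tool_args, cur.tool_args),
    };
}
```

On the server side, each such delta would map onto an OpenAI-style streaming chunk (`choices[].delta.content` and `choices[].delta.tool_calls[].function.arguments`).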
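
And a sketch of why the tool-call trigger of thinking models must only fire once the reasoning block is closed, assuming `<think>`/`</think>` and `<tool_call>` markers; the helper below is illustrative, not llama.cpp's actual grammar-trigger logic:

```cpp
#include <optional>
#include <string>

// Return the position where a tool call starts, but only once any reasoning
// block has been closed; a "<tool_call>" mentioned inside <think>...</think>
// must not start a spurious tool call.
static std::optional<size_t> find_tool_call_start(const std::string & output) {
    size_t search_from = 0;
    const size_t think_open = output.find("<think>");
    if (think_open != std::string::npos) {
        const size_t think_close = output.find("</think>", think_open);
        if (think_close == std::string::npos) {
            return std::nullopt; // still thinking: no trigger yet
        }
        search_from = think_close + std::string("</think>").size();
    }
    const size_t pos = output.find("<tool_call>", search_from);
    if (pos == std::string::npos) {
        return std::nullopt;
    }
    return pos;
}
```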

---------

Co-authored-by: ochafik <ochafik@google.com>
Co-authored-by: Olivier Chafik <ochafik@users.noreply.github.com>
2025-05-25 01:48:08 +01:00
batched-bench batched-bench : fix pp batch contents (#13492) 2025-05-13 18:01:53 +03:00
cvector-generator llama : move end-user examples to tools directory (#13249) 2025-05-02 20:27:13 +02:00
export-lora llama : move end-user examples to tools directory (#13249) 2025-05-02 20:27:13 +02:00
gguf-split llama : move end-user examples to tools directory (#13249) 2025-05-02 20:27:13 +02:00
imatrix imatrix : Add --parse-special for enabling parsing of special tokens in imatrix calculation (#13389) 2025-05-09 11:53:58 +02:00
llama-bench kv-cache : add SWA support (#13194) 2025-05-20 08:05:46 +03:00
main llama : do not crash if there is no CPU backend (#13395) 2025-05-09 13:02:07 +02:00
mtmd server : support audio input (#13714) 2025-05-23 11:03:47 +02:00
perplexity context : remove logits_all flag (#13284) 2025-05-08 14:26:50 +03:00
quantize quantize : improve tensor-type pattern matching (#13033) 2025-05-13 19:12:31 +02:00
rpc llama : do not crash if there is no CPU backend (#13395) 2025-05-09 13:02:07 +02:00
run kv-cache : simplify the interface (#13660) 2025-05-21 15:11:13 +03:00
server server: streaming of tool calls and thoughts when --jinja is on (#12379) 2025-05-25 01:48:08 +01:00
tokenize llama : move end-user examples to tools directory (#13249) 2025-05-02 20:27:13 +02:00
tts tts : fix n_ubatch + make WavTokenizer cache-less (#13713) 2025-05-22 22:21:07 +03:00
CMakeLists.txt mtmd : rename llava directory to mtmd (#13311) 2025-05-05 16:02:55 +02:00