# Server tests

Python-based server test scenarios using [pytest](https://docs.pytest.org/).

The tests target GitHub workflow job runners with 4 vCPUs.

Note: if inference on the host is faster than on the GitHub runners, the parallel scenarios may randomly fail. To mitigate this, you can increase the `n_predict` and `kv_size` values.
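
For reference, each test spins up a `llama-server` instance through the helpers in `utils.py` and asserts on the HTTP responses. Below is a minimal sketch modeled on the existing tests under `unit/`; it assumes `utils.py` provides `ServerPreset` presets and that `make_request()` returns an object with `status_code` and `body`, as in the current suite:

```python
import pytest

# helpers shipped with this test suite (ServerPreset, ServerProcess, ...)
from utils import *

# assumption: ServerPreset.tinyllama2() returns a small, pre-configured ServerProcess
server = ServerPreset.tinyllama2()


@pytest.fixture(autouse=True)
def create_server():
    # re-create the server config before every test so tests stay independent
    global server
    server = ServerPreset.tinyllama2()


def test_completion_sketch():
    global server
    server.start()  # launches llama-server and waits until it is ready
    res = server.make_request("POST", "/completion", data={
        "prompt": "I believe the meaning of life is",
        # keep n_predict small for speed; increase it (see the note above)
        # if parallel scenarios get flaky on fast hosts
        "n_predict": 8,
    })
    assert res.status_code == 200
    assert "content" in res.body
```

The files under `unit/` follow this pattern and are the best starting point when adding a new test.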

## Install dependencies

```shell
pip install -r requirements.txt
```

## Run tests

1. Build the server

   ```shell
   cd ../../..
   cmake -B build
   cmake --build build --target llama-server
   ```

2. Start the tests: `./tests.sh`

Some scenario values can be overridden with environment variables:

| variable                | description                                                                                                                                                                      |
|-------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `PORT`                  | `context.server_port`, the listening port of the server during the scenario (default: `8080`)                                                                                    |
| `LLAMA_SERVER_BIN_PATH` | path of the server binary to test (default: `../../../build/bin/llama-server`)                                                                                                   |
| `DEBUG`                 | enable verbose output from the test steps and the server (`--verbose`)                                                                                                           |
| `N_GPU_LAYERS`          | number of model layers to offload to VRAM (`-ngl`, `--n-gpu-layers`)                                                                                                             |
| `LLAMA_CACHE`           | by default the tests re-download models into the `tmp` subfolder; set this to your local cache (e.g. `$HOME/Library/Caches/llama.cpp` on macOS or `$HOME/.cache/llama.cpp` on Linux) to avoid this |

To run slow tests (these download many models; make sure to set `LLAMA_CACHE` if needed):

```shell
SLOW_TESTS=1 ./tests.sh
```

To run with stdout/stderr displayed in real time (verbose output, but useful for debugging):

```shell
DEBUG=1 ./tests.sh -s -v -x
```

To run all the tests in a file:

```shell
./tests.sh unit/test_chat_completion.py -v -x
```

To run a single test:

```shell
./tests.sh unit/test_chat_completion.py::test_invalid_chat_completion_req
```

Hint: you can compile and run the tests in a single command, which is useful for local development:

```shell
cmake --build build -j --target llama-server && ./tools/server/tests/tests.sh
```

To see all available arguments, please refer to the [pytest documentation](https://docs.pytest.org/en/stable/how-to/usage.html).