# Server tests

Python-based server test scenarios using pytest.

Tests target GitHub workflow job runners with 4 vCPUs.

Note: If inference on the host is faster than on the GitHub runners, parallel scenarios may randomly fail. To mitigate this, you can increase the `n_predict` and `kv_size` values in the tests.
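
For orientation, tests drive the server through the helpers in `utils.py`. Below is a minimal sketch of what such a test can look like; the `ServerProcess`, `ServerPreset`, and `make_request` names are assumptions based on those helpers, and the `tinyllama2` preset is illustrative, so details may differ:

```python
# Minimal sketch of a server test, assuming the helpers in utils.py.
import pytest
from utils import *

server: ServerProcess

@pytest.fixture(autouse=True)
def create_server():
    global server
    # Illustrative preset name; utils.py provides presets built around small test models.
    server = ServerPreset.tinyllama2()

def test_completion_smoke():
    global server
    server.start()
    res = server.make_request("POST", "/completion", data={
        "prompt": "I believe the meaning of life is",
        "n_predict": 8,  # increase this (and kv_size) if parallel scenarios flake on fast hosts
    })
    assert res.status_code == 200
```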

## Install dependencies

```shell
pip install -r requirements.txt
```
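
If you want to keep these dependencies isolated, a standard virtual environment works (plain Python tooling, nothing specific to this repo):

```shell
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```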

## Run tests

1. Build the server:

    ```shell
    cd ../../..
    cmake -B build
    cmake --build build --target llama-server
    ```

2. Start the tests: `./tests.sh`

Some scenario step values can be overridden with environment variables:

| variable                | description                                                                                                                                                                                 |
|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `PORT`                  | `context.server_port` to set the listening port of the server during the scenario, default: `8080`                                                                                           |
| `LLAMA_SERVER_BIN_PATH` | to change the server binary path, default: `../../../build/bin/llama-server`                                                                                                                 |
| `DEBUG`                 | to enable steps and server verbose mode `--verbose`                                                                                                                                          |
| `N_GPU_LAYERS`          | number of model layers to offload to VRAM (`-ngl`, `--n-gpu-layers`)                                                                                                                         |
| `LLAMA_CACHE`           | by default server tests re-download models to the `tmp` subfolder. Set this to your cache directory (e.g. `$HOME/Library/Caches/llama.cpp` on macOS or `$HOME/.cache/llama.cpp` on Linux) to avoid this |
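
For example, to run against a custom binary with verbose output while reusing a model cache (the paths shown are illustrative):

```shell
LLAMA_SERVER_BIN_PATH=../../../build/bin/llama-server DEBUG=1 LLAMA_CACHE=$HOME/.cache/llama.cpp ./tests.sh
```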

To run slow tests (these download many models, so make sure to set `LLAMA_CACHE` if needed):

```shell
SLOW_TESTS=1 ./tests.sh
```

To run with stdout/stderr displayed in real time (verbose output, but useful for debugging):

```shell
DEBUG=1 ./tests.sh -s -v -x
```

To run all the tests in a file:

```shell
./tests.sh unit/test_chat_completion.py -v -x
```

To run a single test:

```shell
./tests.sh unit/test_chat_completion.py::test_invalid_chat_completion_req
```
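
Since extra arguments are forwarded to pytest (as the `-v -x` flags above suggest), its usual selection flags should work too; for example, `-k` selects tests whose names match an expression (the pattern here is just an illustration):

```shell
./tests.sh unit/test_chat_completion.py -k "invalid"
```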

Hint: You can compile and run the tests in a single command, which is useful for local development:

```shell
cmake --build build -j --target llama-server && ./tools/server/tests/tests.sh
```

To see all available arguments, please refer to the [pytest documentation](https://docs.pytest.org/).