llama.cpp/.github

Latest commit: 6adc3c3ebc by Diego Devesa
llama : add thread safety test (#14035)

* llama : add thread safety test

* llamafile : remove global state

* llama : better LLAMA_SPLIT_MODE_NONE logic

  when main_gpu < 0, GPU devices are not used

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-06-16 08:11:43 -07:00
Name                       Last commit                                                    Date
actions                    releases : use arm version of curl for arm releases (#13592)  2025-05-16 19:36:51 +02:00
ISSUE_TEMPLATE             repo : update links to new url (#11886)                       2025-02-15 16:40:57 +02:00
workflows                  llama : add thread safety test (#14035)                       2025-06-16 08:11:43 -07:00
labeler.yml                CANN: Enable labeler for Ascend NPU (#13914)                  2025-06-09 11:20:06 +08:00
pull_request_template.md   repo : update links to new url (#11886)                       2025-02-15 16:40:57 +02:00