# llama.cpp/docs

Latest commit: `bc583e3c63` by Xuan-Son Nguyen, 2025-05-27 14:06:10 +02:00
mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784)

* mtmd : allow multiple modalities at the same time
* refactor mtmd tokenizer
* fix compile
* ok, missing SinusoidsPositionEmbedding
* first working version
* fix style
* more strict validate of n_embd
* refactor if..else to switch
* fix regression
* add test for 3B
* update docs
* fix tokenizing with add_special
* add more tests
* fix test case "huge"
* rm redundant code
* set_position_mrope_1d rm n_tokens
| Name | Latest commit | Date |
|------|---------------|------|
| backend | CANN: Add the basic supports of Flash Attention kernel (#13627) | 2025-05-26 10:20:18 +08:00 |
| development | llama : move end-user examples to tools directory (#13249) | 2025-05-02 20:27:13 +02:00 |
| multimodal | mtmd : rename llava directory to mtmd (#13311) | 2025-05-05 16:02:55 +02:00 |
| android.md | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| build.md | CUDA/HIP: Share the same unified memory allocation logic. (#12934) | 2025-04-15 11:20:38 +02:00 |
| docker.md | musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647) | 2025-05-21 09:58:49 +08:00 |
| function-calling.md | docs: remove link for llama-cli function calling (#13810) | 2025-05-27 08:52:40 -03:00 |
| install.md | install : add macports (#12518) | 2025-03-23 10:21:48 +02:00 |
| llguidance.md | llguidance build fixes for Windows (#11664) | 2025-02-14 12:46:08 -08:00 |
| multimodal.md | mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) (#13784) | 2025-05-27 14:06:10 +02:00 |