From 21ca987fba504d273ea28ebc4d3e5b3736a11c8e Mon Sep 17 00:00:00 2001
From: ddpasa <112642920+ddpasa@users.noreply.github.com>
Date: Wed, 14 May 2025 09:59:12 +0200
Subject: [PATCH] docs: Update link to ggml-org in multimodal.md (#13513)

* Update multimodal.md

Minor change to include the huggingface link

* Update docs/multimodal.md

---------

Co-authored-by: Xuan-Son Nguyen
---
 docs/multimodal.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/multimodal.md b/docs/multimodal.md
index 6a5d2b34..80014ba1 100644
--- a/docs/multimodal.md
+++ b/docs/multimodal.md
@@ -31,7 +31,7 @@ llama-server -hf ggml-org/gemma-3-4b-it-GGUF --no-mmproj-offload
 
 ## Pre-quantized models
 
-These are ready-to-use models, most of them come with `Q4_K_M` quantization by default.
+These are ready-to-use models, most of which come with `Q4_K_M` quantization by default. They can be found on the ggml-org Hugging Face page: https://huggingface.co/ggml-org
 
 Replaces the `(tool_name)` with the name of binary you want to use. For example, `llama-mtmd-cli` or `llama-server`
 
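For context, a minimal sketch of the `(tool_name)` substitution described in the patched section is shown below. This is an illustration, not part of the patch, and assumes llama.cpp binaries built with multimodal support are available on your PATH; the model repo and the `--no-mmproj-offload` flag are taken from the hunk context above.

```sh
# Run a pre-quantized multimodal model from the ggml-org Hugging Face page,
# substituting llama-mtmd-cli for (tool_name):
llama-mtmd-cli -hf ggml-org/gemma-3-4b-it-GGUF

# The same model behind an HTTP server; --no-mmproj-offload keeps the
# multimodal projector off the GPU, as in the command shown in the hunk header.
llama-server -hf ggml-org/gemma-3-4b-it-GGUF --no-mmproj-offload
```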