
Multimodal Support in llama.cpp

This directory provides multimodal capabilities for llama.cpp. Initially intended as a showcase for running LLaVA models, it has expanded significantly over time to include various other vision-capable models. As a result, LLaVA is no longer the only multimodal architecture supported.

Important

Multimodal support can be viewed as a sub-project within llama.cpp. It is under very heavy development, and breaking changes are expected.

The naming and structure related to multimodal support have evolved, which might cause some confusion. Here's a brief timeline to clarify:

  • #3436: Initial support for LLaVA 1.5 was added, introducing llava.cpp and clip.cpp. The llava-cli binary was created for model interaction.
  • #4954: Support for MobileVLM was added, becoming the second vision model supported. This built upon the existing llava.cpp, clip.cpp, and llava-cli infrastructure.
  • Expansion & Fragmentation: Many new models were subsequently added (e.g., #7599, #10361, #12344, and others). However, llava-cli lacked support for the increasingly complex chat templates required by these models. This led to the creation of model-specific binaries like qwen2vl-cli, minicpmv-cli, and gemma3-cli. While functional, this proliferation of command-line tools became confusing for users.
  • #12849: libmtmd was introduced as a replacement for llava.cpp. Its goals include providing a single, unified command-line interface, improving the user/developer experience (UX/DX), and supporting both audio and image inputs.
  • #13012: mtmd-cli was added, consolidating the various model-specific CLIs into a single tool powered by libmtmd.

Pre-quantized models

These are ready-to-use models; most of them come with Q4_K_M quantization by default:

# Gemma 3
llama-mtmd-cli -hf ggml-org/gemma-3-4b-it-GGUF
llama-mtmd-cli -hf ggml-org/gemma-3-12b-it-GGUF
llama-mtmd-cli -hf ggml-org/gemma-3-27b-it-GGUF

# SmolVLM
llama-mtmd-cli -hf ggml-org/SmolVLM-Instruct-GGUF
llama-mtmd-cli -hf ggml-org/SmolVLM-256M-Instruct-GGUF
llama-mtmd-cli -hf ggml-org/SmolVLM-500M-Instruct-GGUF
llama-mtmd-cli -hf ggml-org/SmolVLM2-2.2B-Instruct-GGUF
llama-mtmd-cli -hf ggml-org/SmolVLM2-256M-Video-Instruct-GGUF
llama-mtmd-cli -hf ggml-org/SmolVLM2-500M-Video-Instruct-GGUF

# Pixtral 12B
llama-mtmd-cli -hf ggml-org/pixtral-12b-GGUF
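
These repositories may also publish other quantizations. If your llama.cpp build supports the general `<user>/<model>:<quant>` form of the -hf argument (assumed here, and not specific to multimodal support), you can request one explicitly, provided the repository actually contains that variant:

# hypothetical: request the Q8_0 variant instead of the default quantization
llama-mtmd-cli -hf ggml-org/gemma-3-4b-it-GGUF:Q8_0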

How it works and what is mmproj?

Multimodal support in llama.cpp works by encoding images into embeddings using a separate model component, and then feeding these embeddings into the language model.

This approach keeps the multimodal components distinct from the core libllama library. Separating these allows for faster, independent development cycles. While many modern vision models are based on Vision Transformers (ViTs), their specific pre-processing and projection steps can vary significantly. Integrating this diverse complexity directly into libllama is currently challenging.

Consequently, running a multimodal model typically requires two GGUF files:

  1. The standard language model file.
  2. A corresponding multimodal projector (mmproj) file, which handles the image encoding and projection.
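
For locally downloaded files, both parts are passed explicitly. The following is a minimal sketch, assuming the standard llama-mtmd-cli flags (-m, --mmproj, --image, -p) and using placeholder file names:

# both files must come from the same model conversion; names below are placeholders
llama-mtmd-cli -m gemma-3-4b-it-Q4_K_M.gguf \
    --mmproj mmproj-gemma-3-4b-it-f16.gguf \
    --image ./test-1.jpeg \
    -p "Describe this image."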

What is libmtmd?

As outlined in the history, libmtmd is the modern library designed to replace the original llava.cpp implementation for handling multimodal inputs.

Built upon clip.cpp (similar to llava.cpp), libmtmd offers several advantages:

  • Unified Interface: Aims to consolidate interaction for various multimodal models.
  • Improved UX/DX: Features a more intuitive API, inspired by the Processor class in the Hugging Face transformers library.
  • Flexibility: Designed to support multiple input types (text, audio, images) while respecting the wide variety of chat templates used by different models.

How to obtain mmproj

Multimodal projector (mmproj) files are specific to each model architecture. Please refer to the relevant guide for instructions on how to obtain or create them:

For the following models, you can use convert_hf_to_gguf.py with the --mmproj flag to get the mmproj file:
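
For reference, such a conversion command generally follows the shape sketched below. The positional model path, the --outfile option, and treating --mmproj as a simple switch are assumptions about the converter's interface; check python convert_hf_to_gguf.py --help in your checkout for the exact options:

# hypothetical paths; produces the projector GGUF from a local Hugging Face checkpoint
python convert_hf_to_gguf.py /path/to/hf-model --mmproj --outfile mmproj-model-f16.gguf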