llama.cpp/gguf-py/gguf
Latest commit: 92ecdcc06a by Xuan-Son Nguyen, 2025-05-19 13:04:14 +02:00

mtmd : add vision support for llama 4 (#13282)

* wip llama 4 conversion
* rm redundant __init__
* fix conversion
* fix conversion
* test impl
* try this
* reshape patch_embeddings_0
* fix view
* rm ffn_post_norm
* cgraph ok
* f32 for pos embd
* add image marker tokens
* Llama4UnfoldConvolution
* correct pixel shuffle
* fix merge conflicts
* correct
* add debug_graph
* logits matched, but it still perceives the image incorrectly
* fix style
* add image_grid_pinpoints
* handle llama 4 preprocessing
* rm load_image_size
* rm unused line
* fix
* small fix 2
* add test & docs
* fix llava-1.6 test
* test: add notion of huge models
* add comment
* add warn about degraded quality
File               | Last commit message                                                                       | Last commit date
scripts/           | gguf-py : fix disconnect-before-connect in editor-gui (#13569)                            | 2025-05-15 18:47:10 +02:00
__init__.py        | convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499)      | 2024-07-18 20:40:15 +10:00
constants.py       | mtmd : add vision support for llama 4 (#13282)                                            | 2025-05-19 13:04:14 +02:00
gguf.py            | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)                 | 2023-11-11 08:04:50 +03:00
gguf_reader.py     | Refactor gguf scripts to improve metadata handling (#11909)                               | 2025-02-26 08:04:48 -05:00
gguf_writer.py     | convert : converting mmproj for Qwen2/2.5VL from convert_hf_to_gguf (#13209)              | 2025-05-02 17:17:15 +02:00
lazy.py            | gguf-py : support lazy tensor splitting (#12809)                                          | 2025-04-08 09:03:07 +02:00
metadata.py        | convert : fix Norway problem when parsing YAML (#12114)                                   | 2025-02-28 17:44:46 +01:00
py.typed           | convert : various script cleanups/fixes + merges and special token handling (#2842)      | 2023-08-30 11:25:50 +03:00
quants.py          | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)                         | 2024-09-05 21:48:47 -04:00
tensor_mapping.py  | mtmd : add vision support for llama 4 (#13282)                                            | 2025-05-19 13:04:14 +02:00
utility.py         | convert : ability to lazy-load safetensors remotely without downloading to disk (#12820)  | 2025-04-10 17:24:44 +02:00
vocab.py           | convert : Support chat_template.json (#12460)                                             | 2025-03-19 08:58:13 +01:00
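
For orientation, a minimal sketch of how the reader implemented in gguf_reader.py can be used to inspect an existing GGUF file. This is not code from the repository; it assumes only the package's public GGUFReader API, and "model.gguf" is a placeholder path:

```python
# Minimal sketch: inspect an existing GGUF file with gguf-py's GGUFReader.
# "model.gguf" is a hypothetical file name, not something shipped with the repo.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")

# Key/value metadata fields, e.g. general.architecture and tokenizer settings.
for name, field in reader.fields.items():
    print(name, field.types)

# Tensor records: name, shape, and quantization type of every tensor in the file.
for tensor in reader.tensors:
    print(tensor.name, tensor.shape, tensor.tensor_type)
```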