llama.cpp/gguf-py/gguf
Latest commit f125b8dccf by Si1w: llama : add PLM GGUF Conversion & Inference Support (#12457)
* add edgellm model arch [conversation feature doesn't work]

* remove output.weight layer for edgellm arch

* [Model] update the name of the model

* update the name of model arch in convert gguf

* [Model] Refactor the model arch into llama-model

* [Bug] Fix the bug in create attn kv

* [Code] Fix editorconfig errors

* [Code] Remove Trailing whitespace

* [Code] Remove Trailing whitespace

* [Code] Change the order of model arch in list

* [Code] Fix flake8 Lint errors

* Remove trailing white space

* [Code] Remove call in model arch
2025-03-27 12:49:15 +02:00
File              | Latest commit                                                                         | Date
scripts           | Refactor gguf scripts to improve metadata handling (#11909)                           | 2025-02-26 08:04:48 -05:00
__init__.py       | convert-*.py: GGUF Naming Convention Refactor and Metadata Override Refactor (#7499)  | 2024-07-18 20:40:15 +10:00
constants.py      | llama : add PLM GGUF Conversion & Inference Support (#12457)                          | 2025-03-27 12:49:15 +02:00
gguf.py           | gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             | 2023-11-11 08:04:50 +03:00
gguf_reader.py    | Refactor gguf scripts to improve metadata handling (#11909)                           | 2025-02-26 08:04:48 -05:00
gguf_writer.py    | llama: Add support for RWKV v7 architecture (#12412)                                  | 2025-03-18 07:27:50 +08:00
lazy.py           | gguf-py : simplify support for quant types (#8838)                                    | 2024-08-08 13:33:09 -04:00
metadata.py       | convert : fix Norway problem when parsing YAML (#12114)                               | 2025-02-28 17:44:46 +01:00
py.typed          | convert : various script cleanups/fixes + merges and special token handling (#2842)   | 2023-08-30 11:25:50 +03:00
quants.py         | ggml-quants : ternary packing for TriLMs and BitNet b1.58 (#8151)                     | 2024-09-05 21:48:47 -04:00
tensor_mapping.py | llama: Add support for RWKV v7 architecture (#12412)                                  | 2025-03-18 07:27:50 +08:00
utility.py        | repo : update links to new url (#11886)                                               | 2025-02-15 16:40:57 +02:00
vocab.py          | convert : Support chat_template.json (#12460)                                         | 2025-03-19 08:58:13 +01:00