* add edgellm model arch (conversation feature doesn't work)
* remove output.weight layer for edgellm arch
* [Model] update the name of the model
* update the name of the model arch in convert gguf
* [Model] Refactor the model arch into llama-model
* [Bug] Fix the bug in create attn kv
* [Code] Fix editorconfig errors
* [Code] Remove trailing whitespace
* [Code] Remove trailing whitespace
* [Code] Change the order of model arch in list
* [Code] Fix flake8 lint errors
* Remove trailing whitespace
* [Code] Remove call in model arch
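These commits register a new architecture in the GGUF Python tooling. For context only, the sketch below mirrors the registration pattern used in `gguf-py/gguf/constants.py` (an arch enum entry, an arch-name mapping, and a per-arch tensor list); the `EDGELLM` names and the omission of the output tensor are illustrative assumptions based on the commit messages, not the actual diff.

```python
from enum import IntEnum, auto

# Minimal, self-contained mirror of the registration pattern in
# gguf-py/gguf/constants.py. The real file defines many more architectures
# and tensor kinds; the EDGELLM entries here are hypothetical.

class MODEL_ARCH(IntEnum):
    LLAMA   = auto()
    EDGELLM = auto()   # hypothetical new architecture entry

class MODEL_TENSOR(IntEnum):
    TOKEN_EMBD  = auto()
    OUTPUT_NORM = auto()
    OUTPUT      = auto()
    ATTN_NORM   = auto()
    ATTN_Q      = auto()
    ATTN_K      = auto()
    ATTN_V      = auto()
    ATTN_OUT    = auto()
    FFN_NORM    = auto()
    FFN_GATE    = auto()
    FFN_DOWN    = auto()
    FFN_UP      = auto()

# Human-readable arch name written into GGUF metadata and used by the
# convert script to select the architecture.
MODEL_ARCH_NAMES: dict[MODEL_ARCH, str] = {
    MODEL_ARCH.LLAMA:   "llama",
    MODEL_ARCH.EDGELLM: "edgellm",
}

# Per-arch tensor list. MODEL_TENSOR.OUTPUT is deliberately left out for
# EDGELLM to illustrate the "remove output.weight layer" commit
# (i.e. output projection tied to the token embeddings).
MODEL_TENSORS: dict[MODEL_ARCH, list[MODEL_TENSOR]] = {
    MODEL_ARCH.EDGELLM: [
        MODEL_TENSOR.TOKEN_EMBD,
        MODEL_TENSOR.OUTPUT_NORM,
        MODEL_TENSOR.ATTN_NORM,
        MODEL_TENSOR.ATTN_Q,
        MODEL_TENSOR.ATTN_K,
        MODEL_TENSOR.ATTN_V,
        MODEL_TENSOR.ATTN_OUT,
        MODEL_TENSOR.FFN_NORM,
        MODEL_TENSOR.FFN_GATE,
        MODEL_TENSOR.FFN_DOWN,
        MODEL_TENSOR.FFN_UP,
    ],
}

if __name__ == "__main__":
    arch = MODEL_ARCH.EDGELLM
    print(MODEL_ARCH_NAMES[arch], [t.name for t in MODEL_TENSORS[arch]])
```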