CMakeLists.txt
cmake : do not include ./src as public for libllama ( #13062 )
2025-04-24 16:00:10 +03:00
llama-adapter.cpp
llama : make loras compatible with repacking ( #12593 )
2025-03-27 08:24:10 +02:00
llama-adapter.h
llama : refactor llama_context, llama_kv_cache, llm_build_context ( #12181 )
2025-03-13 12:35:44 +02:00
llama-arch.cpp
model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture ( #12466 )
2025-04-28 22:52:15 +03:00
llama-arch.h
model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture ( #12466 )
2025-04-28 22:52:15 +03:00
llama-batch.cpp
kv-cache : separate recurrent vs non-recurrent impl ( #12799 )
2025-05-02 17:48:36 +03:00
llama-batch.h
kv-cache : separate recurrent vs non-recurrent impl ( #12799 )
2025-05-02 17:48:36 +03:00
llama-chat.cpp
llama-chat : reset glmedge chat template ( #13253 )
2025-05-02 11:06:09 +02:00
llama-chat.h
llama-chat : fix typo GML --> GLM ( #13143 )
2025-04-28 10:11:58 +02:00
llama-context.cpp
context : fix reorder logic ( #13267 )
2025-05-02 20:54:13 +03:00
llama-context.h
kv-cache : separate recurrent vs non-recurrent impl ( #12799 )
2025-05-02 17:48:36 +03:00
llama-cparams.cpp
llama : refactor src/llama.cpp ( #10902 )
2025-01-03 10:18:53 +02:00
llama-cparams.h
Load all MoE experts during warmup ( #11571 )
2025-03-14 13:47:05 +01:00
llama-grammar.cpp
tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars ( #12034 )
2025-03-05 13:05:13 +00:00
llama-grammar.h
tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars ( #12034 )
2025-03-05 13:05:13 +00:00
llama-graph.cpp
llama : fix build_ffn without gate ( #13336 )
2025-05-06 14:25:40 +02:00
llama-graph.h
kv-cache : separate recurrent vs non-recurrent impl ( #12799 )
2025-05-02 17:48:36 +03:00
llama-hparams.cpp
hparams : add SWA rope parameters ( #12374 )
2025-03-14 09:03:24 +02:00
llama-hparams.h
model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture ( #12466 )
2025-04-28 22:52:15 +03:00
llama-impl.cpp
GGUF: C++ refactor, backend support, misc fixes ( #11030 )
2025-01-07 18:01:58 +01:00
llama-impl.h
cleanup: fix compile warnings associated with gnu_printf ( #11811 )
2025-02-12 10:06:53 -04:00
llama-io.cpp
llama : refactor llama_context, llama_kv_cache, llm_build_context ( #12181 )
2025-03-13 12:35:44 +02:00
llama-io.h
llama : refactor llama_context, llama_kv_cache, llm_build_context ( #12181 )
2025-03-13 12:35:44 +02:00
llama-kv-cache.cpp
kv-cache : separate recurrent vs non-recurrent impl ( #12799 )
2025-05-02 17:48:36 +03:00
llama-kv-cache.h
kv-cache : separate recurrent vs non-recurrent impl ( #12799 )
2025-05-02 17:48:36 +03:00
llama-memory.cpp
llama : refactor llama_context, llama_kv_cache, llm_build_context ( #12181 )
2025-03-13 12:35:44 +02:00
llama-memory.h
kv-cache : separate recurrent vs non-recurrent impl ( #12799 )
2025-05-02 17:48:36 +03:00
llama-mmap.cpp
mmap : skip resource limit checks on AIX ( #12541 )
2025-03-24 12:17:10 +02:00
llama-mmap.h
llama-mmap: fix missing include ( #11796 )
2025-02-10 20:58:18 +02:00
llama-model-loader.cpp
model : print tensor size during load ( #12711 )
2025-04-02 16:38:54 +03:00
llama-model-loader.h
llama : add option to override model tensor buffers ( #11397 )
2025-04-02 14:52:01 +02:00
llama-model.cpp
llama : Llama-3_1-Nemotron-Ultra-253B-v1 support ( #12843 )
2025-05-03 17:39:51 +02:00
llama-model.h
llama : Llama-3_1-Nemotron-Ultra-253B-v1 support ( #12843 )
2025-05-03 17:39:51 +02:00
llama-quant.cpp
quantize: Handle user-defined quantization levels for additional tensors ( #12511 )
2025-04-13 21:29:28 +03:00
llama-quant.h
llama : refactor src/llama.cpp ( #10902 )
2025-01-03 10:18:53 +02:00
llama-sampling.cpp
sampling : Integrate Top-nσ into main sampling chain (and add it to the server) ( #13264 )
2025-05-05 22:12:19 +02:00
llama-sampling.h
llama : add llama_vocab, functions -> methods, naming ( #11110 )
2025-01-12 11:32:42 +02:00
llama-vocab.cpp
mtmd : Support Pixtral 12B ( #13065 )
2025-04-23 20:21:59 +02:00
llama-vocab.h
llama : remove notion of CLS token ( #11064 )
2025-01-12 12:15:53 +02:00
llama.cpp
llama : add option to override model tensor buffers ( #11397 )
2025-04-02 14:52:01 +02:00
unicode-data.cpp
server : better security control for public deployments ( #9776 )
2024-10-08 13:27:04 +02:00
unicode-data.h
llama : reduce compile time and binary size ( #9712 )
2024-10-02 15:49:55 +02:00
unicode.cpp
repo : update links to new url ( #11886 )
2025-02-15 16:40:57 +02:00
unicode.h
unicode : improve naming style ( #10838 )
2024-12-16 12:31:45 +02:00