File | Last commit message | Last commit date
--- | --- | ---
CMakeLists.txt | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-adapter.cpp | llama : do not crash if there is no CPU backend (#13395) | 2025-05-09 13:02:07 +02:00
llama-adapter.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00
llama-arch.cpp | llama : add support for jina-reranker-v2 (#13900) | 2025-05-29 21:42:31 +02:00
llama-arch.h | llama : add RobertaForSequenceClassification reranker support (#13875) | 2025-05-29 08:15:01 +02:00
llama-batch.cpp | kv-cache : refactor + add llama_memory_state_i (#13746) | 2025-05-31 10:24:04 +03:00
llama-batch.h | kv-cache : refactor + add llama_memory_state_i (#13746) | 2025-05-31 10:24:04 +03:00
llama-chat.cpp | llama : one-off chat template fix for Mistral-Small-2503 (#13398) | 2025-05-09 11:17:51 +02:00
llama-chat.h | llama : one-off chat template fix for Mistral-Small-2503 (#13398) | 2025-05-09 11:17:51 +02:00
llama-context.cpp | llama : deprecate explicit kv_self defrag/update calls (#13921) | 2025-05-31 15:58:33 +03:00
llama-context.h | llama : auto-batch preparation (#13845) | 2025-05-31 12:55:57 +03:00
llama-cparams.cpp | kv-cache : rework kv_cell (#13706) | 2025-05-25 16:34:36 +03:00
llama-cparams.h | kv-cache : rework kv_cell (#13706) | 2025-05-25 16:34:36 +03:00
llama-grammar.cpp | server : streaming of tool calls and thoughts when --jinja is on (#12379) | 2025-05-25 01:48:08 +01:00
llama-grammar.h | tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00
llama-graph.cpp | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-graph.h | kv-cache : refactor + add llama_memory_state_i (#13746) | 2025-05-31 10:24:04 +03:00
llama-hparams.cpp | hparams : initialize arrays (#13728) | 2025-05-23 20:16:13 +03:00
llama-hparams.h | llama : add RobertaForSequenceClassification reranker support (#13875) | 2025-05-29 08:15:01 +02:00
llama-impl.cpp | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00
llama-impl.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00
llama-io.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00
llama-io.h | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00
llama-kv-cache-recurrent.cpp | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-kv-cache-recurrent.h | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-kv-cache-unified-iswa.cpp | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-kv-cache-unified-iswa.h | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-kv-cache-unified.cpp | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-kv-cache-unified.h | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-kv-cache.cpp | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-kv-cache.h | kv-cache : split implementation in separate sources (#13920) | 2025-06-01 11:39:27 +03:00
llama-kv-cells.h | kv-cache : refactor + add llama_memory_state_i (#13746) | 2025-05-31 10:24:04 +03:00
llama-memory.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00
llama-memory.h | kv-cache : refactor + add llama_memory_state_i (#13746) | 2025-05-31 10:24:04 +03:00
llama-mmap.cpp | mmap : skip resource limit checks on AIX (#12541) | 2025-03-24 12:17:10 +02:00
llama-mmap.h | llama-mmap: fix missing include (#11796) | 2025-02-10 20:58:18 +02:00
llama-model-loader.cpp | gguf : use ggml log system (#13571) | 2025-05-15 19:13:11 +02:00
llama-model-loader.h | llama : add option to override model tensor buffers (#11397) | 2025-04-02 14:52:01 +02:00
llama-model-saver.cpp | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
llama-model-saver.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
llama-model.cpp | gemma : more consistent attention scaling for v2 and v3 (#13951) | 2025-06-02 20:54:26 +03:00
llama-model.h | kv-cache : add SWA support (#13194) | 2025-05-20 08:05:46 +03:00
llama-quant.cpp | quantize : improve tensor-type pattern matching (#13033) | 2025-05-13 19:12:31 +02:00
llama-quant.h | llama : refactor src/llama.cpp (#10902) | 2025-01-03 10:18:53 +02:00
llama-sampling.cpp | sampling : make sure samplers return at least 1 token (#13822) | 2025-05-27 12:07:52 +03:00
llama-sampling.h | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00
llama-vocab.cpp | convert : fix nomic-bert-moe mask token (#13757) | 2025-06-01 18:07:21 +02:00
llama-vocab.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
llama.cpp | llama : print hint when loading a model when no backends are loaded (#13589) | 2025-05-16 16:38:07 +02:00
unicode-data.cpp | server : better security control for public deployments (#9776) | 2024-10-08 13:27:04 +02:00
unicode-data.h | llama : reduce compile time and binary size (#9712) | 2024-10-02 15:49:55 +02:00
unicode.cpp | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00
unicode.h | unicode : improve naming style (#10838) | 2024-12-16 12:31:45 +02:00