Name | Last commit | Date
cmake/ | llama : reorganize source code + improve CMake (#8006) | 2024-06-26 18:33:02 +03:00
minja/ | sync: minja (#12739) | 2025-04-04 21:16:39 +01:00
arg.cpp | Add --no-op-offload to improve -ot pp perf in MoE models like llama4 400B (#13386) | 2025-05-11 14:18:39 +02:00
arg.h | common : add common_remote_get_content (#13123) | 2025-04-26 22:58:12 +02:00
base64.hpp | llava : expose as a shared library for downstream projects (#3613) | 2023-11-07 00:36:23 +03:00
build-info.cpp.in | build : link against build info instead of compiling against it (#3879) | 2023-11-02 08:50:16 +02:00
chat.cpp | server : (webui) revamp the input area, plus many small UI improvements (#13365) | 2025-05-08 15:37:29 +02:00
chat.h | server : extract <think> tags from qwq outputs (#12297) | 2025-03-10 10:59:03 +00:00
CMakeLists.txt | chore(llguidance): use tagged version that does not break the build (#13413) | 2025-05-09 23:15:39 +03:00
common.cpp | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
common.h | llama/ggml: add LLM training support (#10544) | 2025-05-12 14:44:49 +02:00
console.cpp | console : utf-8 fix for windows stdin (#9690) | 2024-09-30 11:23:42 +03:00
console.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00
json-schema-to-grammar.cpp | grammar : handle maxItems == 0 in JSON schema (#13117) | 2025-04-26 10:10:20 +02:00
json-schema-to-grammar.h | tool-call : fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) | 2025-03-05 13:05:13 +00:00
json.hpp | json-schema-to-grammar improvements (+ added to server) (#5978) | 2024-03-21 11:50:43 +00:00
llguidance.cpp | llguidance : set tokenizer slices to default (#13424) | 2025-05-10 17:19:52 +02:00
log.cpp | Fix: Compile failure due to Microsoft STL breaking change (#11836) | 2025-02-12 21:36:11 +01:00
log.h | cleanup: fix compile warnings associated with gnu_printf (#11811) | 2025-02-12 10:06:53 -04:00
ngram-cache.cpp | ggml : portability fixes for VS 2017 (#12150) | 2025-03-04 18:53:26 +02:00
ngram-cache.h | llama : use LLAMA_TOKEN_NULL (#11062) | 2025-01-06 10:52:15 +02:00
sampling.cpp | common : Add a warning when we can't match samplers from a string or char. (#13330) | 2025-05-07 11:23:28 +03:00
sampling.h | sampling : support for llguidance grammars (#10224) | 2025-02-02 09:55:32 +02:00
speculative.cpp | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00
speculative.h | speculative : update default params (#11954) | 2025-02-19 13:29:42 +02:00
stb_image.h | common : Update stb_image.h to latest version (#9161) | 2024-08-27 08:58:50 +03:00