| Name | Last commit message | Last commit date |
| --- | --- | --- |
| batched | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| batched-bench | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| batched.swift | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| convert-llama2c-to-ggml | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| cvector-generator | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| deprecation-warning | Update deprecation-warning.cpp (#10619) | 2024-12-04 23:19:20 +01:00 |
| embedding | embeddings : fix batch sizes (#13076) | 2025-04-24 22:29:22 +03:00 |
| eval-callback | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| export-lora | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| gen-docs | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| gguf | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-hash | GGUF: C++ refactor, backend support, misc fixes (#11030) | 2025-01-07 18:01:58 +01:00 |
| gguf-split | gguf-split : --merge now respects --dry-run option (#12681) | 2025-04-04 16:09:12 +02:00 |
| gritlm | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| imatrix | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| infill | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| jeopardy | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| llama-bench | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llama.android | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| llama.swiftui | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| llava | clip : Add Qwen2.5VL support (#12402) | 2025-04-27 10:10:34 +02:00 |
| lookahead | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| lookup | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| main | main : Fix Ctrl+D/newline handling (#12951) | 2025-04-18 22:02:55 +02:00 |
| parallel | llama : refactor kv cache guard (#12695) | 2025-04-02 14:32:59 +03:00 |
| passkey | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| perplexity | hellaswag: display estimated score confidence interval (#12797) | 2025-04-07 18:47:08 +03:00 |
| quantize | quantize: Handle user-defined quantization levels for additional tensors (#12511) | 2025-04-13 21:29:28 +03:00 |
| retrieval | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| rpc | rpc : add command line option for number of threads for the CPU backend (#13060) | 2025-04-23 10:32:49 +03:00 |
| run | contrib: support modelscope community (#12664) | 2025-04-11 14:01:56 +02:00 |
| save-load-state | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| server | grammar : handle maxItems == 0 in JSON schema (#13117) | 2025-04-26 10:10:20 +02:00 |
| simple | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| simple-chat | llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) | 2025-03-13 12:35:44 +02:00 |
| simple-cmake-pkg | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| speculative | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| speculative-simple | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| sycl | disable curl lib check, this action was missed by commit bd3f59f812 (#12761) (#12937) | 2025-04-14 18:19:07 +08:00 |
| tokenize | llama : add llama_vocab, functions -> methods, naming (#11110) | 2025-01-12 11:32:42 +02:00 |
| tts | common : refactor downloading system, handle mmproj with -hf option (#12694) | 2025-04-01 23:44:05 +02:00 |
| chat-13B.bat | Create chat-13B.bat (#592) | 2023-03-29 20:21:09 +03:00 |
| chat-13B.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat-persistent.sh | scripts : fix pattern and get n_tokens in one go (#10221) | 2024-11-09 09:06:54 +02:00 |
| chat-vicuna.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| chat.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| CMakeLists.txt | cmake : do not include ./src as public for libllama (#13062) | 2025-04-24 16:00:10 +03:00 |
| convert_legacy_llama.py | metadata: Detailed Dataset Authorship Metadata (#8875) | 2024-11-13 21:10:38 +11:00 |
| json_schema_pydantic_example.py | py : type-check all Python scripts with Pyright (#8341) | 2024-07-07 15:04:39 -04:00 |
| json_schema_to_grammar.py | grammar : handle maxItems == 0 in JSON schema (#13117) | 2025-04-26 10:10:20 +02:00 |
| llama.vim | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| llm.vim | llm.vim : stop generation at multiple linebreaks, bind to <F2> (#2879) | 2023-08-30 09:50:55 +03:00 |
| Miku.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| pydantic_models_to_grammar.py | pydantic : replace uses of __annotations__ with get_type_hints (#8474) | 2024-07-14 19:51:21 -04:00 |
| pydantic_models_to_grammar_examples.py | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| reason-act.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| regex_to_grammar.py | py : switch to snake_case (#8305) | 2024-07-05 07:53:33 +03:00 |
| server-llama2-13B.sh | build : rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| server_embd.py | llama : fix FA when KV cache is not used (i.e. embeddings) (#12825) | 2025-04-08 19:54:51 +03:00 |
| ts-type-to-grammar.sh | JSON schema conversion: ⚡️ faster repetitions, min/maxLength for strings, cap number length (#6555) | 2024-04-12 19:43:38 +01:00 |