llama.cpp/examples
batched
batched-bench
batched.swift
convert-llama2c-to-ggml
cvector-generator
deprecation-warning
embedding
eval-callback
export-lora
gbnf-validator
gen-docs
gguf
gguf-hash
gguf-split
gritlm
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main
parallel
passkey
perplexity
quantize
quantize-stats
retrieval
rpc
run
save-load-state
server
simple
simple-chat
simple-cmake-pkg
speculative
speculative-simple
sycl
tokenize
tts
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert_legacy_llama.py
json_schema_pydantic_example.py
json_schema_to_grammar.py
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic_models_to_grammar_examples.py
reason-act.sh
regex_to_grammar.py
server-llama2-13B.sh
server_embd.py
ts-type-to-grammar.sh