llama.cpp / docs

Latest commit 72b090da2c by bandoti: docs: remove link for llama-cli function calling (#13810), 2025-05-27 08:52:40 -03:00
backend/             CANN: Add the basic supports of Flash Attention kernel (#13627), 2025-05-26 10:20:18 +08:00
development/         llama : move end-user examples to tools directory (#13249), 2025-05-02 20:27:13 +02:00
multimodal/          mtmd : rename llava directory to mtmd (#13311), 2025-05-05 16:02:55 +02:00
android.md           repo : update links to new url (#11886), 2025-02-15 16:40:57 +02:00
build.md             CUDA/HIP: Share the same unified memory allocation logic. (#12934), 2025-04-15 11:20:38 +02:00
docker.md            musa: Upgrade MUSA SDK version to rc4.0.1 and use mudnn::Unary::IDENTITY op to accelerate D2D memory copy (#13647), 2025-05-21 09:58:49 +08:00
function-calling.md  docs: remove link for llama-cli function calling (#13810), 2025-05-27 08:52:40 -03:00
install.md           install : add macports (#12518), 2025-03-23 10:21:48 +02:00
llguidance.md        llguidance build fixes for Windows (#11664), 2025-02-14 12:46:08 -08:00
multimodal.md        mtmd : add support for Qwen2-Audio and SeaLLM-Audio (#13760), 2025-05-25 14:06:32 +02:00