llama.cpp / ggml
Latest commit: cb79c2e7fa by cmdr2  (2025-04-11 00:17:47 +03:00)
ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
fix #1186
..
cmake             scripts : update sync + fix cmake merge  (2025-03-27 10:09:29 +02:00)
include           ggml : add bilinear upscale support (ggml/1185)  (2025-04-11 00:17:47 +03:00)
src               ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)  (2025-04-11 00:17:47 +03:00)
.gitignore        vulkan : cmake integration (#8119)  (2024-07-13 18:12:39 +02:00)
CMakeLists.txt    ggml : add logging for native build options/vars (whisper/2935)  (2025-03-30 08:33:31 +03:00)
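
The summary of the latest commit (ggml/1187) implies a preprocessor guard around the NEON header for CUDA 12 builds on ARM. A minimal sketch of that kind of guard, assumed for illustration rather than taken from the actual patch, might look like:

    /* Hypothetical sketch, not the actual ggml/1187 change: skip arm_neon.h
     * when nvcc from CUDA 12 is compiling for an ARM target, where including
     * the header is assumed to break the build. */
    #if defined(__ARM_NEON) && !(defined(__CUDACC__) && __CUDACC_VER_MAJOR__ >= 12)
    #include <arm_neon.h>
    #endif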