llama.cpp/ggml at cf756d6e0a
Latest commit: d70908421f by Gian-Carlo Pascutto, 2025-02-22 09:43:24 +01:00
cuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (#12000)
Name            Last commit message                                                       Last commit date
cmake           cmake: add ggml find package (#11369)                                     2025-01-26 12:07:48 -04:00
include         ggml-cpu: Add CPU backend support for KleidiAI library (#11390)           2025-02-20 15:06:51 +02:00
src             cuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (#12000)   2025-02-22 09:43:24 +01:00
.gitignore      vulkan : cmake integration (#8119)                                        2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml-cpu: Add CPU backend support for KleidiAI library (#11390)           2025-02-20 15:06:51 +02:00