ver4a/llama.cpp
ggml/src/ggml-vulkan

Latest commit: 6f3bd38640 by bandoti
cmake: remove caching from vulkan coopmat checks (#12719)
2025-04-02 14:56:26 -03:00
Name             Last commit                                                                    Date
..
cmake            fix: ggml: fix vulkan-shaders-gen build (#10448)                               2025-01-15 14:17:42 +01:00
vulkan-shaders   vulkan: Implement grouped query attention in the coopmat2 FA shader (#12559)   2025-04-02 19:40:32 +02:00
CMakeLists.txt   cmake: remove caching from vulkan coopmat checks (#12719)                      2025-04-02 14:56:26 -03:00
ggml-vulkan.cpp  vulkan: Implement grouped query attention in the coopmat2 FA shader (#12559)   2025-04-02 19:40:32 +02:00