llama.cpp / ggml / src / ggml-vulkan (at commit 35cae5ba05)
Latest commit fd123cfead by 0cc4m: Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434), 2025-03-18 07:21:40 +01:00
cmake            fix: ggml: fix vulkan-shaders-gen build (#10448)                                                     2025-01-15 14:17:42 +01:00
vulkan-shaders   llama: Add support for RWKV v7 architecture (#12412)                                                 2025-03-18 07:27:50 +08:00
CMakeLists.txt   fix: ggml: fix vulkan-shaders-gen build (#10448)                                                     2025-01-15 14:17:42 +01:00
ggml-vulkan.cpp  Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (#12434)  2025-03-18 07:21:40 +01:00
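The latest commit lowers the backend's default maximum allocation size from 4GB to 1GB, which means large tensor buffers have to be split across several smaller device allocations. As a rough illustration of that idea only (this is not the actual ggml-vulkan code; `split_allocation` and its parameters are hypothetical names for this sketch), a requested buffer size can be divided into chunks no larger than the cap:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical helper (not the real ggml-vulkan API): split a requested
// buffer size into suballocation sizes, each at most max_alloc bytes.
// Capping the per-allocation size (e.g. at 1 GiB instead of 4 GiB) is the
// general technique for avoiding fragmentation and driver limits.
static std::vector<uint64_t> split_allocation(uint64_t total, uint64_t max_alloc) {
    std::vector<uint64_t> chunks;
    while (total > 0) {
        const uint64_t chunk = total < max_alloc ? total : max_alloc;
        chunks.push_back(chunk);   // one device allocation of this size
        total -= chunk;
    }
    return chunks;
}
```

For example, a 3.5 GiB request with a 1 GiB cap would yield four allocations: three of 1 GiB and one of 0.5 GiB. The actual backend additionally has to respect per-device limits reported by the driver, which the sketch omits.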