ver4a / llama.cpp
4182 commits · 1 branch · 0 tags · 110 MiB · ab96610b1e
Commit graph (1 commit)

Author            SHA1         Message                                        Date
Johannes Gäßler   8e558309dc   CUDA: MMQ support for iq4_nl, iq4_xs (#8278)   2024-07-05 09:06:31 +02:00