llama.cpp/.devops
Name                       Last commit                 Message
nix                        2025-02-15 16:40:57 +02:00  repo : update links to new url (#11886)
cloud-v-pipeline           2024-06-13 00:41:52 +01:00  build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809)
cpu.Dockerfile             2025-04-07 13:35:19 +02:00  cmake : enable curl by default (#12761)
cuda.Dockerfile            2025-04-10 01:17:12 +02:00  docker : added all CPU to GPU images (#12749)
intel.Dockerfile           2025-04-10 01:17:12 +02:00  docker : added all CPU to GPU images (#12749)
llama-cli-cann.Dockerfile  2025-04-09 14:04:14 +08:00  CANN: Support Opt CONV_TRANSPOSE_1D and ELU (#12786)
llama-cpp-cuda.srpm.spec   2025-02-15 16:40:57 +02:00  repo : update links to new url (#11886)
llama-cpp.srpm.spec        2025-02-15 16:40:57 +02:00  repo : update links to new url (#11886)
musa.Dockerfile            2025-04-10 01:17:12 +02:00  docker : added all CPU to GPU images (#12749)
rocm.Dockerfile            2025-04-10 01:17:12 +02:00  docker : added all CPU to GPU images (#12749)
tools.sh                   2025-01-28 10:42:32 +00:00  docker: add perplexity and bench commands to full image (#11438)
vulkan.Dockerfile          2025-04-10 01:17:12 +02:00  docker : added all CPU to GPU images (#12749)