llama.cpp/.devops
Latest commit: 68ff663a04 by Georgi Gerganov, 2025-02-15 16:40:57 +02:00
repo : update links to new url (#11886); cont : more urls (ggml-ci)
File | Last commit | Date
nix | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00
cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00
cpu.Dockerfile | docker : add GGML_CPU_ARM_ARCH arg to select ARM architecture to build for (#11419) | 2025-01-25 17:22:41 +01:00
cuda.Dockerfile | docker : drop to CUDA 12.4 (#11869) | 2025-02-14 14:48:40 +02:00
intel.Dockerfile | devops : add docker-multi-stage builds (#10832) | 2024-12-22 23:22:58 +01:00
llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00
llama-cpp-cuda.srpm.spec | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00
llama-cpp.srpm.spec | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00
musa.Dockerfile | musa: bump MUSA SDK version to rc3.1.1 (#11822) | 2025-02-13 13:28:18 +01:00
rocm.Dockerfile | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00
tools.sh | docker: add perplexity and bench commands to full image (#11438) | 2025-01-28 10:42:32 +00:00
vulkan.Dockerfile | ci : fix build CPU arm64 (#11472) | 2025-01-29 00:02:56 +01:00
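The Dockerfiles in this directory are consumed with plain docker build from the repository root. A minimal sketch follows, assuming the multi-stage images expose a "full" target (per the multi-stage builds in #10832) and that cpu.Dockerfile accepts the GGML_CPU_ARM_ARCH build arg named in #11419; the target name and the example architecture value armv8.2-a are assumptions to be checked against the files themselves.

    # Build the full CPU image from the repository root (target name "full" is assumed).
    docker build -t local/llama.cpp:full-cpu --target full -f .devops/cpu.Dockerfile .

    # Select a specific ARM architecture via the GGML_CPU_ARM_ARCH build arg (#11419);
    # the value armv8.2-a is only an illustrative assumption.
    docker build -t local/llama.cpp:full-arm \
      --build-arg GGML_CPU_ARM_ARCH=armv8.2-a \
      -f .devops/cpu.Dockerfile .

    # Build the CUDA variant (CUDA 12.4 base per #11869).
    docker build -t local/llama.cpp:full-cuda --target full -f .devops/cuda.Dockerfile .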