llama.cpp/.devops
Latest commit: bd3f59f812 by Xuan-Son Nguyen, 2025-04-07 13:35:19 +02:00

cmake : enable curl by default (#12761)

* no curl if no examples
* fix build
* fix build-linux-cross
* add windows-setup-curl
* fix
* shell
* fix path
* fix windows-latest-cmake*
* run: include_directories
* LLAMA_RUN_EXTRA_LIBS
* sycl: no llama_curl
* no test-arg-parser on windows
* clarification
* try riscv64 / arm64
* windows: include libcurl inside release binary
* add msg
* fix mac / ios / android build
* will this fix xcode?
* try clearing the cache
* add bunch of licenses
* revert clear cache
* fix xcode
* fix xcode (2)
* fix typo
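The squashed commits above belong to PR #12761, which flips llama.cpp's curl support on by default at configure time. A minimal sketch of what that means for someone building the project, assuming the cache option is named LLAMA_CURL (verify against the repository's top-level CMakeLists.txt):

```shell
# Sketch only: the LLAMA_CURL option name is an assumption; check
# llama.cpp's CMakeLists.txt for the authoritative spelling.

# After PR #12761, curl support defaults to ON, so a plain configure
# expects libcurl development headers to be present on the system:
cmake -B build
cmake --build build --config Release

# Systems without libcurl can opt out explicitly:
cmake -B build -DLLAMA_CURL=OFF
```

In practice this only changes the default: builds that previously passed an explicit flag to enable curl behave the same, while bare configures now pick it up automatically.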
| Name | Last commit message | Last commit date |
|---|---|---|
| nix | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| cloud-v-pipeline | build: rename main → llama-cli, server → llama-server, llava-cli → llama-llava-cli, etc... (#7809) | 2024-06-13 00:41:52 +01:00 |
| cpu.Dockerfile | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| cuda.Dockerfile | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| intel.Dockerfile | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| llama-cli-cann.Dockerfile | docker: use GGML_NATIVE=OFF (#10368) | 2024-11-18 00:21:53 +01:00 |
| llama-cpp-cuda.srpm.spec | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| llama-cpp.srpm.spec | repo : update links to new url (#11886) | 2025-02-15 16:40:57 +02:00 |
| musa.Dockerfile | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| rocm.Dockerfile | cmake : enable curl by default (#12761) | 2025-04-07 13:35:19 +02:00 |
| tools.sh | docker: add perplexity and bench commands to full image (#11438) | 2025-01-28 10:42:32 +00:00 |
| vulkan.Dockerfile | ci : fix build CPU arm64 (#11472) | 2025-01-29 00:02:56 +01:00 |