llama.cpp/common
エシュナヴァリシア c6ff5d2a8d
common: custom hf endpoint support (#12769)
* common: custom hf endpoint support

Add support for custom Hugging Face endpoints via the HF_ENDPOINT environment variable.

You can now specify a custom Hugging Face endpoint by setting the HF_ENDPOINT environment variable when using the --hf-repo flag; this works similarly to huggingface-cli's endpoint configuration.

Example usage:
HF_ENDPOINT=https://hf-mirror.com/ ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"

The trailing slash in the URL is optional:
HF_ENDPOINT=https://hf-mirror.com ./bin/llama-cli --hf-repo Qwen/Qwen1.5-0.5B-Chat-GGUF --hf-file qwen1_5-0_5b-chat-q2_k.gguf -p "The meaning to life and the universe is"
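The endpoint resolution described above can be sketched as follows. This is a minimal illustration, not necessarily the exact code in common/arg.cpp: the helper name is hypothetical, but the behavior matches the commit message (default to huggingface.co, honor HF_ENDPOINT, and append the trailing slash if it is missing).

```cpp
#include <cstdlib>
#include <string>

// Hypothetical helper illustrating the HF_ENDPOINT handling described above.
// Falls back to the official Hugging Face endpoint when the variable is unset,
// and normalizes the value so a trailing slash is optional.
static std::string get_hf_endpoint() {
    std::string endpoint = "https://huggingface.co/"; // default endpoint
    const char * env = std::getenv("HF_ENDPOINT");
    if (env != nullptr && *env != '\0') {
        endpoint = env;
        // the trailing slash in the URL is optional
        if (endpoint.back() != '/') {
            endpoint += '/';
        }
    }
    return endpoint;
}
```

With this normalization, `HF_ENDPOINT=https://hf-mirror.com` and `HF_ENDPOINT=https://hf-mirror.com/` both resolve to the same base URL, so download paths can be built by simple concatenation with the repo and file names.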

* Update common/arg.cpp

Readability improvement

Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>

* Apply suggestions from code review

---------

Co-authored-by: ベアトリーチェ <148695646+MakiSonomura@users.noreply.github.com>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
2025-04-05 15:31:42 +02:00
cmake llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
minja sync: minja (#12739) 2025-04-04 21:16:39 +01:00
arg.cpp common: custom hf endpoint support (#12769) 2025-04-05 15:31:42 +02:00
arg.h arg : option to exclude arguments from specific examples (#11136) 2025-01-08 12:55:36 +02:00
base64.hpp llava : expose as a shared library for downstream projects (#3613) 2023-11-07 00:36:23 +03:00
build-info.cpp.in build : link against build info instead of compiling against it (#3879) 2023-11-02 08:50:16 +02:00
chat.cpp server: extract <think> tags from qwq outputs (#12297) 2025-03-10 10:59:03 +00:00
chat.h server: extract <think> tags from qwq outputs (#12297) 2025-03-10 10:59:03 +00:00
CMakeLists.txt upgrade to llguidance 0.7.10 (#12576) 2025-03-26 11:06:09 -07:00
common.cpp llama : add option to override model tensor buffers (#11397) 2025-04-02 14:52:01 +02:00
common.h llama : add option to override model tensor buffers (#11397) 2025-04-02 14:52:01 +02:00
console.cpp console : utf-8 fix for windows stdin (#9690) 2024-09-30 11:23:42 +03:00
console.h gguf : new file format with flexible meta data (beta) (#2398) 2023-08-21 23:07:43 +03:00
json-schema-to-grammar.cpp tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) 2025-03-05 13:05:13 +00:00
json-schema-to-grammar.h tool-call: fix Qwen 2.5 Coder support, add micro benchmarks, support trigger patterns for lazy grammars (#12034) 2025-03-05 13:05:13 +00:00
json.hpp json-schema-to-grammar improvements (+ added to server) (#5978) 2024-03-21 11:50:43 +00:00
llguidance.cpp upgrade to llguidance 0.7.10 (#12576) 2025-03-26 11:06:09 -07:00
log.cpp Fix: Compile failure due to Microsoft STL breaking change (#11836) 2025-02-12 21:36:11 +01:00
log.h cleanup: fix compile warnings associated with gnu_printf (#11811) 2025-02-12 10:06:53 -04:00
ngram-cache.cpp ggml : portability fixes for VS 2017 (#12150) 2025-03-04 18:53:26 +02:00
ngram-cache.h llama : use LLAMA_TOKEN_NULL (#11062) 2025-01-06 10:52:15 +02:00
sampling.cpp llama: fix error on bad grammar (#12628) 2025-03-28 18:08:52 +01:00
sampling.h sampling : support for llguidance grammars (#10224) 2025-02-02 09:55:32 +02:00
speculative.cpp llama : refactor llama_context, llama_kv_cache, llm_build_context (#12181) 2025-03-13 12:35:44 +02:00
speculative.h speculative : update default params (#11954) 2025-02-19 13:29:42 +02:00
stb_image.h common : Update stb_image.h to latest version (#9161) 2024-08-27 08:58:50 +03:00