Commit graph

  • 2789baf480
    tests : fix --keep_split -> --keep-split (#7374) Georgi Gerganov 2024-05-20 08:55:09 +03:00
  • 33c8d50acc
    Add provisions for windows support for BF16 code including CMake provision for enabling AVX512_BF16 (#7258) Srihari-mcw 2024-05-19 19:18:39 -07:00
  • d359f30921
    llama : remove MPI backend (#7395) slaren 2024-05-20 01:17:03 +02:00
  • 1ea2a0036e
    quantize : fix --keep-split check (#7374) Fred Douglas 2024-05-19 11:37:04 -05:00
  • f030ec1f7a
    Vulkan Embedding Fix (#7360) 0cc4m 2024-05-19 17:19:53 +02:00
  • e4e6f67be6
    ggml : fix another case of quants nans (#7387) slaren 2024-05-19 17:08:46 +02:00
  • 5ca49cbecd
    ggml: implement quantized KV cache for FA (#7372) Johannes Gäßler 2024-05-19 16:46:13 +02:00
  • 1b01f06db0
    server: add test for token probs (#7347) Johannes Gäßler 2024-05-19 16:26:02 +02:00
  • 41858392e1
    server: fix seed being reported back (#7382) Johannes Gäßler 2024-05-19 16:06:33 +02:00
  • 6aade19ee7
    Add StableLM2 pre-tokenizer (#7349) Anas Ahouzi 2024-05-19 14:46:46 +02:00
  • ab33f7a338
    cuda : clear error after buffer allocation failure (#7376) slaren 2024-05-19 14:19:37 +02:00
  • e23b974f4c
    labeler.yml: Use settings from ggerganov/llama.cpp [no ci] (#7363) Brian 2024-05-19 20:51:03 +10:00
  • 854d365aba
    cmake : update android comments (#7341) Georgi Gerganov 2024-05-19 11:01:01 +03:00
  • f5bf761747
    Capture CUDA logging output (#7298) fraxy-v 2024-05-19 01:44:42 +03:00
  • 059031b8c4
    ci : re-enable sanitizer runs (#7358) Georgi Gerganov 2024-05-18 18:55:54 +03:00
  • 511182eabb
    android : use "ci-android" branch for CI (#7341) Georgi Gerganov 2024-05-18 13:40:39 +03:00
  • 133d99c599
    CUDA: deduplicate FlashAttention code (#7352) Johannes Gäßler 2024-05-18 12:36:25 +02:00
  • cb42c29427
    server: correct --threads documentation [no ci] (#7362) Johannes Gäßler 2024-05-18 11:10:47 +02:00
  • d233b507cd
    cuda : add half2 __shfl_xor() for ROCm 5.5 (#7263) Engininja2 2024-05-18 02:05:17 -06:00
  • 0f98acfac6
    llama : add support for larger Granite Code Models (20B, 34B) (#7324) Steffen Röcker 2024-05-18 10:04:55 +02:00
  • ca57e0f35e
    perplexity : ndot progress and show stats with < 100 tasks (#7348) strawberrymelonpanda 2024-05-18 00:57:08 -07:00
  • c1b295eea5
    Update and fix Vulkan soft_max and argsort implementations (#7237) 0cc4m 2024-05-18 08:10:58 +02:00
  • de73196344
    github-actions-labeler: initial commit (#7330) Brian 2024-05-18 16:04:23 +10:00
  • b49a13dd2f
    convert : fix set_vocab_sentencepiece (#6866) Georgi Gerganov 2024-05-18 08:46:20 +03:00
  • 05834841dc
    ggml : fix quants nans when all the group weights are very close to zero (#7313) slaren 2024-05-18 02:39:54 +02:00
  • ef277de2ad
    cmake : fix typo in AMDGPU_TARGETS (#7356) Engininja2 2024-05-17 18:39:25 -06:00
  • b43272afa2
    Unicode codepoint flags for custom regexs (#7245) jaime-m-p 2024-05-18 01:09:13 +02:00
  • 0fc1e820a9
    CUDA: faster large batch FA without tensor cores (#7314) Johannes Gäßler 2024-05-17 18:54:52 +02:00
  • 82ca83db3c
    ROCm: use native CMake HIP support (#5966) Gavin Zhao 2024-05-17 11:03:03 -04:00
  • f4bd8b3d26
    rpc : set SO_REUSEADDR for the server socket (#7320) Radoslav Gerganov 2024-05-17 17:25:44 +03:00
  • 51e9d02599
    Added a single test function script and fix debug-test.sh to be more robust (#7279) Brian 2024-05-17 22:40:14 +10:00
  • d273c1402b
    py : convert-hf-to-gguf-update improvements (#7340) Aarni Koskela 2024-05-17 15:11:45 +03:00
  • 27b040691c
    llama : use n_embd_head_v when reshaping kqv (#7327) fairydreaming 2024-05-17 13:24:38 +02:00
  • 29c60d8cdd
    tokenization: add warning for double BOS (#7332) Johannes Gäßler 2024-05-17 09:59:57 +02:00
  • 359cbe3f46
    ggml-quants, llama : removed excess checks (#7274) Herman Semenov 2024-05-17 07:08:49 +00:00
  • e18bc6aaf3
    convert : fix Qwen/Qwen-7b conversion (#7308) amd-lalithnc 2024-05-17 12:31:58 +05:30
  • ee94172d33
    server : add support for the RPC backend (#7305) Radoslav Gerganov 2024-05-17 10:00:17 +03:00
  • 934266c0e0
    ggml : rewrite silu and softmax for cpu (#7154) Justine Tunney 2024-05-17 02:58:52 -04:00
  • 9c4fdcbec8
    [Server] Added --verbose option to README [no ci] (#7335) Leon Knauer 2024-05-17 02:11:03 +02:00
  • 24ecb58168
    Revert "server bench: fix bench not waiting for model load (#7284)" (#7334) Pierrick Hymbert 2024-05-16 20:43:45 +02:00
  • 9afdffe70e
    rpc : get available mem for the CPU backend Radoslav Gerganov 2024-05-15 16:04:40 +03:00
  • 3b3963c55c
    rpc : add command line arg for specifying backend memory Radoslav Gerganov 2024-05-15 15:29:07 +03:00
  • dda64fc17c
    convert : get general.name from model dir, not its parent (#5615) Jared Van Bortel 2024-05-16 02:15:23 -04:00
  • 0350f58152
    grammar, json, llama: replace push on emplace if it possible (#7273) Herman Semenov 2024-05-16 06:14:24 +00:00
  • ad52d5c259
    doc: add references to hugging face GGUF-my-repo quantisation web tool. (#7288) Vaibhav Srivastav 2024-05-16 07:38:43 +02:00
  • 172b78210a
    ci: fix bin/Release path for windows-arm64 builds (#7317) Max Krasnyansky 2024-05-15 22:36:43 -07:00
  • 13ad16af12
    Add support for properly optimized Windows ARM64 builds with LLVM and MSVC (#7191) Max Krasnyansky 2024-05-15 19:47:36 -07:00
  • 8f7080bf48
    readme : remove stray double quote (#7310) Daniel Bevenius 2024-05-15 23:41:03 +02:00
  • e1b40ac3b9
    ggml : use dynamic thread scheduling for matrix multiplication (#6915) kunnis 2024-05-15 12:59:12 -05:00
  • dc020985b8
    Avoid unnecessarily disabling CUDA graphs (#7302) agray3 2024-05-15 14:44:49 +01:00
  • 344f9126cc
    ggml : tag ggml_tensor::backend as deprecated (#7290) slaren 2024-05-15 15:08:48 +02:00
  • 9a17ab914b
    Add missing " (#7303) AidanBeltonS 2024-05-15 13:26:30 +01:00
  • ea3b0590ee
    embedding : free the batch after execution (#7297) dm4 2024-05-15 20:01:12 +08:00
  • 29499bb593
    sync : ggml Georgi Gerganov 2024-05-15 13:23:41 +03:00
  • 48aa8fd1f2
    ggml : add ggml_upscale_ext (ggml/814) John Balis 2024-05-15 03:52:33 -05:00
  • 583fd6b000
    server bench: fix bench not waiting for model load (#7284) Johannes Gäßler 2024-05-15 08:44:16 +02:00
  • 9f773486ab
    script : sync ggml-rpc Georgi Gerganov 2024-05-14 19:14:38 +03:00
  • e8a7fd4fb0
    metal : support FA without mask + add asserts (#7278) Georgi Gerganov 2024-05-14 19:09:30 +03:00
  • a5e3fde857
    sync : ggml Georgi Gerganov 2024-05-14 15:33:16 +03:00
  • f308ea7059
    metal : tune soft_max number of threads (whisper/0) Georgi Gerganov 2024-05-13 11:01:07 +03:00
  • c3c88f296a
    ggml : try fix ppc64 (whisper/0) Georgi Gerganov 2024-05-12 20:36:31 +03:00
  • 182adefcf3
    ggml : expose SSE3 and SSSE3 for MSVC when AVX is available (whisper/2128) Przemysław Pawełczyk 2024-05-08 17:33:43 +02:00
  • 0d26d8ccd8
    ggml : optimize for ppc64le using VSX intrinsics (ggml/784) Hong Bo PENG 2024-05-12 17:17:18 +08:00
  • 4f0263633b
    server: free sampling contexts on exit (#7264) Steve Grubb 2024-05-14 10:11:24 -04:00
  • 1265c670fd
    Revert "move ndk code to a new library (#6951)" (#7282) Brian 2024-05-14 23:10:39 +10:00
  • 5e31828d3e
    ggml : add RPC backend (#6829) Radoslav Gerganov 2024-05-14 14:27:19 +03:00
  • 541600201e
    llama : disable pipeline parallelism with nkvo (#7265) slaren 2024-05-14 09:33:42 +02:00
  • efc8f767c8
    move ndk code to a new library (#6951) Elton Kola 2024-05-14 03:30:30 -04:00
  • e0f556186b
    Add left recursion check: quit early instead of going into an infinite loop (#7083) Haggai Nuchi 2024-05-13 22:25:56 -07:00
  • 27f65d6267
    docs: Fix typo and update description for --embeddings flag (#7026) Ryuei 2024-05-14 14:20:47 +09:00
  • ee52225067
    convert-hf : support direct Q8_0 conversion (#7234) compilade 2024-05-13 14:10:51 -04:00
  • 614d3b914e
    llama : less KV padding when FA is off (#7257) Georgi Gerganov 2024-05-13 17:15:15 +03:00
  • 30e70334f7
    llava-cli: fix base64 prompt (#7248) k.h.lai 2024-05-13 22:02:36 +08:00
  • 1c570d8bee
    perplexity: add BF16 vs. FP16 results (#7150) Johannes Gäßler 2024-05-13 13:03:27 +02:00
  • 948f4ec7c5
    [SYCL] rm wait() (#7233) Neo Zhang 2024-05-13 18:11:26 +08:00
  • 9aa672490c
    llama : rename jina tokenizers to v2 (#7249) Joan Fontanals 2024-05-13 10:35:14 +02:00
  • b1f8af1886
    convert.py: Outfile default name change and additional metadata support (#4858) Brian 2024-05-13 12:56:47 +10:00
  • e586ee4259
    change default temperature of OAI compat API from 0 to 1 (#7226) Benjamin Findley 2024-05-12 19:40:08 -07:00
  • cbf75894d2
    [SYCL] Add oneapi runtime dll files to win release package (#7241) Neo Zhang 2024-05-13 08:04:29 +08:00
  • 0d5cef78ae
    [SYCL] update CI with oneapi 2024.1 (#7235) Neo Zhang 2024-05-13 08:02:55 +08:00
  • dc685be466
    CUDA: add FP32 FlashAttention vector kernel (#7188) Johannes Gäßler 2024-05-12 19:40:45 +02:00
  • 6f1b63606f
    cmake : fix version cmp (#7227) Georgi Gerganov 2024-05-12 18:30:23 +03:00
  • b228aba91a
    remove convert-lora-to-ggml.py (#7204) slaren 2024-05-12 02:29:33 +02:00
  • 7bd4ffb780
    metal : fix warnings (skipme) (#0) Georgi Gerganov 2024-05-11 21:36:20 +03:00
  • 1622ac023f
    sync : ggml Georgi Gerganov 2024-05-11 21:35:05 +03:00
  • 6aeff24f8b
    metal : fix indent (ggml/0) Georgi Gerganov 2024-05-11 16:57:53 +03:00
  • 325756d28d
    ggml : resolve merge (ggml/0) Georgi Gerganov 2024-05-11 16:25:50 +03:00
  • fed0108491
    Scripting & documenting debugging one test without anything else in the loop. (#7096) Josh Ramer 2024-05-11 12:26:35 -05:00
  • 72c177c1f6
    fix system prompt handling (#7153) Xuan Son Nguyen 2024-05-11 17:28:10 +02:00
  • 5a419926b0
    convert-hf : support bfloat16 conversion (#7158) compilade 2024-05-11 11:06:26 -04:00
  • fae9d234b6
    sync : ggml Georgi Gerganov 2024-05-11 12:02:39 +03:00
  • f5ef34e428
    feat: implemented sigmoid function (ggml/806) Justina Cho 2024-05-01 14:44:26 -07:00
  • ef0d5e3ec9
    build: fix and ignore msvc warnings (ggml/805) Borislav Stanimirov 2024-04-25 17:24:07 +03:00
  • 3292733f95
    convert : skip unaccessible HF repos (#7210) CrispStrobe 2024-05-11 10:18:35 +02:00
  • 988631335a
    server : free llama_batch on exit (#7212) Steve Grubb 2024-05-11 04:13:02 -04:00
  • f99e1e456e
    llama : lookup word in vocab before doing BPE merges (#7193) Haoxiang Fei 2024-05-11 16:12:06 +08:00
  • 5ae3426b0b
    server: fix reported top tokens for temperature 0 (#7203) Johannes Gäßler 2024-05-11 10:11:28 +02:00
  • b83cc3f5b3
    llama : add Jina Embeddings architecture (#6826) Joan Fontanals 2024-05-11 09:46:09 +02:00
  • 9cb317f77e
    ggml : full ALiBi support (#7192) Georgi Gerganov 2024-05-11 10:32:41 +03:00
  • e849648888
    llama-bench : add pp+tg test type (#7199) slaren 2024-05-10 18:03:54 +02:00