This is a Swift clone of `examples/batched`.
$ make
$ ./llama-batched-swift MODEL_PATH [PROMPT] [PARALLEL]
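For instance, a hypothetical invocation (the model path and prompt below are placeholders, not files shipped with the repo) that generates 4 parallel sequences from a shared prompt:

$ ./llama-batched-swift ./models/model.gguf "Hello my name is" 4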