llama : support input embeddings directly (#1910)
* add interface for float input
* fixed inpL shape and type
* add examples of input floats
* add test example for embd input
* fixed sampling
* add free for context
* fixed end condition for generating
* add examples for llava.py
* add README for llava.py
* add example of PandaGPT
* refactor the interface and fixed the styles
* add cmake build for embd-input
* Add MiniGPT-4 example
* change the order of the args of llama_eval_internal
* fix ci error
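The "interface for float input" lets a caller bypass tokenization and feed per-token embedding vectors (e.g. projected image features from LLaVA, PandaGPT, or MiniGPT-4) straight into the model. The C entry point added by this PR takes a flat, contiguous float buffer of `n_tokens * n_embd` values; a minimal sketch of that buffer layout, with no dependency on llama.cpp itself, is below. The helper name `flatten_embeddings` and the toy sizes are illustrative, not part of the PR's API.

```python
# Sketch of the row-major float buffer that llama.cpp's embedding-input
# interface (added in PR #1910) consumes: n_tokens embedding vectors of
# length n_embd, concatenated into one contiguous array.
# The helper below is illustrative; it is not part of the llama.cpp API.

def flatten_embeddings(embd_rows):
    """Flatten per-token embedding vectors (each of length n_embd) into
    one contiguous row-major float list, the layout the C API expects."""
    n_embd = len(embd_rows[0])
    flat = []
    for row in embd_rows:
        assert len(row) == n_embd, "all token embeddings must share n_embd"
        flat.extend(float(x) for x in row)
    return flat, len(embd_rows), n_embd

# Toy example: 3 "tokens" with embedding size 4 (real models use e.g. 4096).
rows = [[0.0, 0.1, 0.2, 0.3],
        [1.0, 1.1, 1.2, 1.3],
        [2.0, 2.1, 2.2, 2.3]]
flat, n_tokens, n_embd = flatten_embeddings(rows)
# Element (i, j) of the matrix lives at flat[i * n_embd + j].
```

The multimodal examples in the PR (llava.py, panda_gpt.py, minigpt4.py) follow this pattern: a projector maps image features to the model's embedding width, and the flattened buffer is evaluated in place of token ids.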
Parent: 9d23589d63
Commit: cfa0750bc9
16 changed files with 811 additions and 22 deletions
.gitignore (vendored): 3 changes
@@ -1,5 +1,6 @@
 *.o
 *.a
+*.so
 .DS_Store
 .build/
 .cache/
@@ -39,8 +40,8 @@ models/*
 /vdot
 /server
 /Pipfile
+/embd-input-test
-/libllama.so

 build-info.h
 arm_neon.h
 compile_commands.json