Commit history for llama_cpp/llama.py

November 10, 2023
- Potential bugfix for eval (Andrei Betlen)
- Fix: default max_tokens matches openai api (16 for completion, max length for chat completion) (Andrei Betlen)
November 8, 2023
- Add set_seed to Llama class (Andrei Betlen)
- Fix destructor NoneType is not callable error (Andrei Betlen)
- Add JSON mode support. Closes #881 (Andrei Betlen)
- Add seed parameter support for completion and chat_completion requests. Closes #884 (Andrei Betlen)
- Multimodal Support (Llava 1.5) (#821) (Damian Stewart)
November 6, 2023
- Fix type bug (Andrei Betlen)
- Refactor Llama class internals (Andrei Betlen)
November 3, 2023
- Clean up stdout / stderr suppression (Andrei Betlen)
- Rename internal only module utils to _utils (Andrei Betlen)
- Update llama.cpp (Andrei Betlen)
- Add functionary support (#784) (Andrei)
- Migrate inference to llama_batch and llama_decode api (#795) (Andrei)
November 2, 2023
- Update llama.cpp (Andrei Betlen)
- fix: tokenization of special characters (#850) (Antoine Lizee)
November 1, 2023
- llama: fix exception in Llama.__del__ (#846) (cebtenzzre)
October 24, 2023
- Update llama.cpp (Andrei Betlen)
October 19, 2023
- Fix streaming doesn't return finish reason (#798) (gmcgoldr)
- Update llama.cpp (Andrei Betlen)
October 15, 2023
- Make use of suppress_stdout_stderr when freeing model (#803) (Pierre Alexandre SCHEMBRI)
- Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES (#820) (Eric Liu)
September 30, 2023
- Fix logits_all bug (Andrei Betlen)
- Fix bug in embedding (Andrei Betlen)
September 29, 2023
- Configurable Chat Formats (#711) (Andrei)
- Fix rope scaling defaults (#767) (Josh XT)
- Update llama.cpp (Andrei Betlen)
September 18, 2023
- Update llama.cpp (Andrei Betlen)
September 14, 2023
- Reorder init params to match llama.cpp order (Andrei Betlen)
- Explicitly make all init params other than model_path into keyword only params (Andrei Betlen)
- Add kwargs to init to catch extra params (Andrei Betlen)
- remove print (Andrei Betlen)
- Convert missed llama.cpp constants into standard python types (Andrei Betlen)
- Fix tensor_split cli option (Andrei Betlen)
September 12, 2023
- Merge branch 'main' into v0.2-wip (Andrei Betlen)
August 29, 2023
- cjk pr minor cleanup (Andrei Betlen)
- Merge pull request #309 from MeouSker77/fix-CJK (Andrei)
August 27, 2023
- Update llama.cpp (Andrei Betlen)
- Update llama.cpp (Andrei Betlen)
August 25, 2023
- Merge branch 'main' into v0.2-wip (Andrei Betlen)
- Use _with_model variants for tokenization (Andrei Betlen)
- Strip leading space when de-tokenizing. (Andrei Betlen)
August 24, 2023
- Remove deprecated params (Andrei Betlen)
- Merge branch 'main' into v0.2-wip (Andrei Betlen)
- Update llama.cpp (Andrei Betlen)
August 15, 2023
- Merge branch 'main' of github.com:abetlen/llama_cpp_python into main (Andrei Betlen)
- Remove unused import (Andrei Betlen)
August 13, 2023
- make n_gpu_layers=-1 offload all layers (Billy Cao)