COMMITS
July 18, 2025
- chore: Bump version (Andrei Betlen)
July 16, 2025
- feat: Update llama.cpp (Andrei Betlen)
July 15, 2025
- chore: Bump version (Andrei Betlen)
- fix: Better chat format for Qwen2.5-VL (#2040) (Alcoft)
- feat: Update llama.cpp (Andrei Betlen)
July 6, 2025
- fix(ci): Fix macos cpu builds (Andrei Betlen)
- chore: Bump version (Andrei Betlen)
- fix(ci): Temporarily disable windows cuda wheels (Andrei Betlen)
- feat: Update llama.cpp (Andrei Betlen)
- fix(ci): Update docker runner (Andrei Betlen)
July 5, 2025
- fix(ci): update runners for cpu builds (Andrei Betlen)
- chore: Bump version (Andrei Betlen)
- fix(ci): Remove macos-13 builds to fix cross compilation error (Andrei Betlen)
- fix(ci): Add git to package list (Andrei Betlen)
- fix(ci): Update cuda build action to use ubuntu 22.04 (Andrei Betlen)
- fix: Update reference to in Llama.embed. Closes #2037 (Andrei Betlen)
July 3, 2025
- chore: Bump version (Andrei Betlen)
- docs: Add Qwen2.5-VL to README (Andrei Betlen)
- fix: Use num_threads from llama model for mtmd (Andrei Betlen)
- feat: Add support for new mtmd api, add Qwen2.5-VL chat handler (Andrei Betlen)
July 1, 2025
- fix: Fix missing deprecated symbols on windows with missing LLAMA_API prefix in header file (Andrei Betlen)
- fix(minor): Fix type hint for older versions of python (Andrei Betlen)
- misc: Fix support for new parameters, deprecate rpc_servers parameter (Andrei Betlen)
- feat: Update llama.cpp (Andrei Betlen)
May 8, 2025
- hotfix: Disable curl support (Andrei Betlen)
- chore: Bump version (Andrei Betlen)
- feat: Update llama.cpp (Andrei Betlen)
April 11, 2025
- feat: Update llama.cpp (Andrei Betlen)
March 12, 2025
- chore: Bump version (Andrei Betlen)
- feat: Update llama.cpp (Andrei Betlen)