abetlen/llama-cpp-python
mirror of https://github.com/abetlen/llama-cpp-python.git synced 2026-03-26 07:21:25 +00:00
Path: llama-cpp-python/.github/workflows
Tree: 61508c265a4e295d69aaee7533bde5b51278e574
Latest commit: Andrei Betlen, 61508c265a "Add CUDA 12.5 and 12.6 to generated output wheels" (2024-12-08 22:55:03 -05:00)
| File | Last commit | Date |
|------|-------------|------|
| build-and-release.yaml | chore(deps): bump pypa/cibuildwheel from 2.21.1 to 2.22.0 (#1844) | 2024-12-06 04:32:37 -05:00 |
| build-docker.yaml | chore(deps): bump docker/build-push-action from 5 to 6 (#1539) | 2024-06-21 12:10:34 -04:00 |
| build-wheels-cuda.yaml | fix: Re-add suport for CUDA 12.5, add CUDA 12.6 (#1775) | 2024-12-06 05:01:17 -05:00 |
| build-wheels-metal.yaml | chore(deps): bump pypa/cibuildwheel from 2.21.1 to 2.22.0 (#1844) | 2024-12-06 04:32:37 -05:00 |
| generate-index-from-release.yaml | Add CUDA 12.5 and 12.6 to generated output wheels | 2024-12-08 22:55:03 -05:00 |
| publish-to-test.yaml | feat(ci): Speed up CI workflows using uv, add support for CUDA 12.5 wheels | 2024-09-18 19:22:05 -04:00 |
| publish.yaml | fix: install build dependency | 2024-09-25 14:17:04 -04:00 |
| test-pypi.yaml | feat(ci): Speed up CI workflows using uv, add support for CUDA 12.5 wheels | 2024-09-18 19:22:05 -04:00 |
| test.yaml | fix(ci): Use default architecture chosen by action | 2024-12-06 04:06:07 -05:00 |