Add sphinx-airflow-theme to the upgrade-important-versions prek hook so
it automatically fetches the latest version from
https://airflow.apache.org/sphinx-airflow-theme/LATEST_VERSION.txt and
updates the whl URL in devel-common/pyproject.toml.
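The fetch-and-substitute step could be sketched roughly as below. This is an illustration, not the actual hook implementation; the function names and the exact whl-name regex are assumptions (the real whl file name pattern may differ).

```python
import re
import urllib.request
from pathlib import Path

VERSION_URL = "https://airflow.apache.org/sphinx-airflow-theme/LATEST_VERSION.txt"


def fetch_latest_version() -> str:
    # The endpoint serves the bare version string, e.g. "0.0.16".
    with urllib.request.urlopen(VERSION_URL) as response:
        return response.read().decode().strip()


def substitute_theme_version(text: str, latest: str) -> str:
    # Assumed whl naming scheme: sphinx_airflow_theme-<version>-py3-none-any.whl
    return re.sub(
        r"sphinx_airflow_theme-\d+(?:\.\d+)*-",
        f"sphinx_airflow_theme-{latest}-",
        text,
    )


def update_pyproject(pyproject: Path) -> bool:
    """Rewrite the pinned whl URL; return True if the file changed."""
    text = pyproject.read_text()
    new_text = substitute_theme_version(text, fetch_latest_version())
    if new_text != text:
        pyproject.write_text(new_text)
        return True
    return False
```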
Also fix selective checks tests (add update-uv-lock to skip-prek-hooks
expectations) and resolve mypy errors in selective_checks.py.
Previously, a missing or corrupted cache tarball would hard-fail the
entire static checks job even when stash-hit reported true. Now the
extraction is wrapped in error handling: if the tarball is missing or
tar fails, we gracefully fall back to a clean `prek install-hooks`
run instead of aborting.
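In Python terms, the fallback logic amounts to something like the sketch below. The function name and the injectable `fallback` parameter are hypothetical; the real CI step may do this in shell.

```python
import subprocess
import tarfile
from pathlib import Path
from typing import Callable


def _install_hooks() -> None:
    # Clean reinstall when the cache cannot be used.
    subprocess.run(["prek", "install-hooks"], check=True)


def restore_prek_cache(
    tarball: Path,
    target: Path,
    fallback: Callable[[], None] = _install_hooks,
) -> bool:
    """Try to extract the cached prek environments; on a missing or
    corrupted tarball, fall back to a clean install instead of failing
    the whole job. Returns True only if the cache was restored."""
    try:
        with tarfile.open(tarball) as tar:  # raises OSError if missing
            tar.extractall(path=target)     # raises TarError if corrupted
        return True
    except (OSError, tarfile.TarError) as err:
        print(f"Cache restore failed ({err}); running a clean install")
        fallback()
        return False
```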
Prek 0.2.22 added a new feature that pretty much everyone adopts
now after all the shai-hulud attacks: a cooldown period on
upgrades, to give GitHub scanners and "bleeding edge" users a chance
to find out that there are some malicious modifications.
This PR adds the cooldown period to the prek auto-upgrade in our CI
for the `breeze ci upgrade` method.
* allocating the pseudo-terminal inside the Python script that creates
  the images, instead of trying to do it via docker compose run
* better diagnostics in case of error (verbosity handling)
* properly allocating a console by forcing pseudo-terminal creation
  inside the container when the --tty flag is used with breeze shell,
  via `enable-tty.yaml`
* upgrading prek + uv to the latest versions
* a bit of refactoring of how the docker-compose files are referred to
* the Console in the script also uses the pseudo-terminal
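The pty-allocation idea can be sketched with the standard library like this. This is a simplified illustration of the approach, not breeze's actual code; both function names are hypothetical.

```python
import os
import pty
import subprocess
import sys


def run_with_tty(cmd: list[str]) -> int:
    """Run cmd so the child always sees a terminal: pass ours through
    when we have one, otherwise allocate a pseudo-terminal ourselves."""
    if sys.stdin.isatty():
        # Already attached to a terminal: let the child inherit it.
        return subprocess.run(cmd).returncode
    # No controlling terminal (typical in CI): create the pty in-process
    # instead of relying on docker compose to allocate one.
    status = pty.spawn(cmd)
    return os.waitstatus_to_exitcode(status)


def run_compose_with_tty(compose_args: list[str]) -> int:
    # Hypothetical wrapper showing where this would apply.
    return run_with_tty(["docker", "compose", "run", *compose_args])
```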
- Update uv from 0.9.4 to 0.9.5
- Update ruff from 0.14.1 to 0.14.2
- Update mypy to 1.18.2
- Update Python to 3.12.12
- Update various other dependencies
There were a few issues with the ARM workflows:
* not all jobs were run in ARM tests - we do not want to run mysql
of course, but other tests should be fine to run on ARM
* some conditions were not updated (we have to somehow duplicate
the amd and arm job definitions because we ran out of composite
workflows - so sometimes the conditions get out of sync)
* most importantly - we uploaded the prek cache in the build-info job,
but that job only runs on AMD, not on ARM, so the ARM cache was really
an AMD one (and it caused unterminated strings in the doctoc installation)
It's not possible to upload the same artifact twice in the same run,
and since we use prek in several jobs we have to make sure that the
cache is only uploaded once. This was the reason why it was initially
uploaded in the build-info job (with save-cache set to false everywhere else).
With this PR, we have save-cache in 3 places:
* basic checks
* static CI-image bound checks
* in octopin (Python 3.11)
Basic checks and static checks are mutually exclusive (controlled by
the basic-checks-only flag), so we can safely upload the cache in both.
In all other places we only install prek with the cache, but we do not
save the cache as an artifact.
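The upload-once rule above boils down to a small decision function. The job identifiers here are made up for illustration; the real workflows encode this directly in YAML conditions.

```python
def should_save_prek_cache(job: str, basic_checks_only: bool) -> bool:
    """Decide whether a job may save the prek cache artifact, mirroring
    the rule that only one of the mutually exclusive check jobs (plus
    octopin) uploads it, so it is saved at most once per run."""
    if job == "basic-checks":
        return basic_checks_only       # runs instead of the static checks
    if job == "static-ci-image-checks":
        return not basic_checks_only   # mutually exclusive with basic checks
    return job == "octopin"            # the single Python 3.11 octopin job
```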