Dev-iL be8252e628 Add Python 3.14 Support (#63520)
* Update python version exclusion to 3.15

* Add 3.14 metadata version classifiers and related constants

* Regenerate Breeze command help screenshots

* Assorted workarounds to fix breeze image building

- constraints are skipped entirely
- greenlet pin updated

* Exclude cassandra

* Exclude amazon

* Exclude google

* CI: Only add pydantic extra to Airflow 2 migration tests

Before this fix there were two separate issues in the migration-test setup for Python 3.14:

1. The migration workflow always passes --airflow-extras pydantic.
2. For Python 3.14, get_min_airflow_version_for_python.py resolves the minimum Airflow version to 3.2.0, and apache-airflow[pydantic]==3.2.0 is not a valid install specification.

So when constraints installation fails, the fallback path tries to install an invalid spec.

* Disable DB migration tests for python 3.14

* Enforce werkzeug 3.x for python 3.14

* Increase K8s executor test timeout for Python 3.14

Python 3.14 changed the default multiprocessing start method from 'fork' to 'forkserver' on Linux. The forkserver start method is slower because each new process must import modules from scratch rather than copying the parent's address space. This makes `multiprocessing.Manager()` initialization take longer, causing the test to exceed its 10s timeout.
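The start-method change can be observed (and worked around) directly; a minimal sketch, not taken from the Airflow code base:

```python
import multiprocessing as mp

# On Linux, Python 3.14 defaults to "forkserver"; 3.13 and earlier use
# "fork".  Forkserver workers import modules from scratch instead of
# inheriting the parent's address space, so startup is noticeably slower.
method = mp.get_start_method()

# Code that depends on fork semantics can opt back in explicitly via a
# context, where the platform still offers it:
fork_ctx = mp.get_context("fork") if "fork" in mp.get_all_start_methods() else None
```

Requesting an explicit context avoids depending on the interpreter-version default at all.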

* Adapt LocalExecutor tests for Python 3.14 forkserver default

Python 3.14 changed the default multiprocessing start method from
'fork' to 'forkserver' on Linux.  Like 'spawn', 'forkserver' doesn't
share the parent's address space, so mock patches applied in the test
process are invisible to worker subprocesses.

- Skip tests that mock across process boundaries on non-fork methods
- Add test_executor_lazy_worker_spawning to verify that non-fork start
  methods defer worker creation and skip gc.freeze
- Make test_multiple_team_executors_isolation and
  test_global_executor_without_team_name assert the correct worker
  count for each start method instead of assuming pre-spawning
- Remove skip from test_clean_stop_on_signal (works on all methods)
  and increase timeout from 5s to 30s for forkserver overhead
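The skip described above can be expressed as a reusable guard; a sketch with illustrative names, not the actual test code:

```python
import multiprocessing as mp
import unittest

# Under spawn or forkserver each worker re-imports modules, so mock.patch
# applied in the test process never reaches the worker subprocesses.
skip_unless_fork = unittest.skipIf(
    mp.get_start_method() != "fork",
    "mock patches do not cross process boundaries on spawn/forkserver",
)

class ExampleExecutorTest(unittest.TestCase):
    @skip_unless_fork
    def test_patched_internals(self):
        # Would patch executor internals that workers must also observe;
        # only meaningful when workers share the parent's address space.
        pass
```

On a fork platform the test runs; on spawn/forkserver it is reported as skipped rather than failing mysteriously.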

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Bump dependencies to versions supporting 3.14

* Fix PROD image build failing on Python 3.14 due to excluded providers

The PROD image build installed all provider wheels regardless of Python
version compatibility. Providers like google and amazon that exclude
Python 3.14 were still passed to pip, causing resolution failures (e.g.
ray has no cp314 wheel on PyPI).

Two fixes:
- get_distribution_specs.py now reads each wheel's Requires-Python
  metadata and skips incompatible wheels instead of passing them to pip.
- The requires-python specifier generation used !=3.14 which per PEP 440
  only excludes 3.14.0, not 3.14.3. Changed to !=3.14.* wildcard.
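The PEP 440 distinction is easy to verify with the `packaging` library (assuming it is available; these version strings are examples, not the project's actual pins):

```python
from packaging.specifiers import SpecifierSet

# "!=3.14" excludes only the release 3.14, i.e. 3.14.0 after zero-padding:
loose = SpecifierSet("!=3.14")
assert "3.14.3" in loose             # patch releases still slip through

# The wildcard form excludes the entire 3.14.x series:
strict = SpecifierSet("!=3.14.*")
assert "3.14.3" not in strict
assert "3.15.0" in strict            # other series remain allowed
```

This is exactly why the specifier generation had to switch to the wildcard form.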

* Split core test types into 2 matrix groups to avoid OOM on Python 3.14

Non-DB core tests use xdist which runs all test types in a single pytest
process. With 2059 items across 4 workers, memory accumulates until the
OOM killer strikes at ~86% completion (exit code 137).

Split core test types into 2 groups (API/Always/CLI and
Core/Other/Serialization), similar to how provider tests already use
_split_list with NUMBER_OF_LOW_DEP_SLICES. Each group gets ~1000 items,
well under the ~1770 threshold where OOM occurs.

Update selective_checks test expectations to reflect the 2-group split.
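The splitting idea can be sketched as follows (a contiguous near-equal split; illustrative only, not the actual `_split_list` implementation):

```python
def split_list(items, n):
    """Split items into n contiguous, near-equal groups."""
    k, m = divmod(len(items), n)
    return [items[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(n)]

core_types = ["API", "Always", "CLI", "Core", "Other", "Serialization"]
groups = split_list(core_types, 2)
# -> [["API", "Always", "CLI"], ["Core", "Other", "Serialization"]]
```

Each group then runs in its own matrix entry with its own pytest process, so memory never accumulates past the observed OOM threshold.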

* Gracefully handle an already removed password file in fixture

The old code had a check-then-act race (if `os.path.exists` → `os.remove`), which fails when the file disappears between the check and the removal. `contextlib.suppress(FileNotFoundError)` replaces the pair with a single remove attempt: if the file is missing (never created in this xdist worker, or already deleted by another one), the error is silently ignored.

* Fix OOM and flaky tests in test_process_utils

Replace multiprocessing.Process with subprocess.Popen running minimal
inline scripts. multiprocessing.Process uses fork(), which duplicates
the entire xdist worker memory. At 95% test completion the worker has
accumulated hundreds of MBs; forking it triggers the OOM killer
(exit code 137) on Python 3.14.

subprocess.Popen starts a fresh lightweight process (~10MB) without
copying the parent's memory, avoiding the OOM entirely.

Also replace the racy ps -ax process counting in
TestKillChildProcessesByPids with psutil.pid_exists() checks on the
specific PID — the old approach was non-deterministic because unrelated
processes could start/stop between measurements.
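The replacement pattern looks roughly like this (a sketch; the commit's tests use psutil.pid_exists for the liveness check, while this stdlib-only version relies on Popen's own bookkeeping):

```python
import subprocess
import sys

# A fresh interpreter (~10 MB RSS) instead of fork()ing the memory-heavy
# xdist worker; the inline script just sleeps until it is terminated.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

assert child.poll() is None      # deterministic check on this specific PID
child.terminate()
child.wait(timeout=10)
assert child.poll() is not None  # child has exited and been reaped
```

Because the check targets one known PID, unrelated processes starting or stopping on the machine cannot skew the result the way `ps -ax` counting did.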

* Add prek hook to validate python_version markers for excluded providers

When a provider declares excluded-python-versions in provider.yaml,
every dependency string referencing that provider in pyproject.toml
must carry a matching python_version marker. Missing markers cause
excluded providers to be silently installed as transitive dependencies
(e.g. aiobotocore pulling in amazon on Python 3.14).

The new check-excluded-provider-markers hook reads exclusions from
provider.yaml and validates all dependency strings in pyproject.toml
at commit time, preventing regressions like the one fixed in the
previous commit.
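The shape of the check can be sketched with the `packaging` library (the dependency string below is a hypothetical example of the form the hook validates, not a line quoted from pyproject.toml):

```python
from packaging.requirements import Requirement

# A provider that excludes Python 3.14 must carry a matching marker:
dep = 'apache-airflow-providers-amazon>=9.0.0; python_version != "3.14"'
req = Requirement(dep)

# The marker keeps the provider out of a 3.14 resolution...
assert req.marker is not None
assert req.marker.evaluate({"python_version": "3.14"}) is False
# ...while leaving other interpreters unaffected.
assert req.marker.evaluate({"python_version": "3.13"}) is True
```

A dependency string without such a marker is exactly what lets an excluded provider ride in as a transitive dependency.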

* Update `uv.lock`

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19 23:03:46 +02:00


# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
---
name: Unit tests
on:  # yamllint disable-line rule:truthy
  workflow_call:
    inputs:
      runners:
        description: "The array of labels (in json form) determining public AMD runners."
        required: true
        type: string
      platform:
        description: "Platform for the build - 'linux/amd64' or 'linux/arm64'"
        required: true
        type: string
      test-group:
        description: "Test group to run: ('core', 'providers')"
        required: true
        type: string
      test-types-as-strings-in-json:
        description: "The list of list of test types to run (types in item are separated by spaces) as json"
        required: true
        type: string
      backend:
        description: "The backend to run the tests on"
        required: true
        type: string
      test-scope:
        description: "The scope of the test to run: ('DB', 'Non-DB', 'All')"
        required: true
        type: string
      test-name:
        description: "The name of the test to run"
        required: true
        type: string
      test-name-separator:
        description: "The separator to use after the test name"
        required: false
        default: ":"
        type: string
      python-versions:
        description: "The list of python versions (stringified JSON array) to run the tests on."
        required: true
        type: string
      backend-versions:
        description: "The list of backend versions (stringified JSON array) to run the tests on."
        required: true
        type: string
      excluded-providers-as-string:
        description: "Excluded providers (per Python version) as json string"
        required: true
        type: string
      excludes:
        description: "Excluded combos (stringified JSON array of python-version/backend-version dicts)"
        required: true
        type: string
      run-migration-tests:
        description: "Whether to run migration tests or not (true/false)"
        required: false
        default: "false"
        type: string
      run-coverage:
        description: "Whether to run coverage or not (true/false)"
        required: true
        type: string
      debug-resources:
        description: "Whether to debug resources or not (true/false)"
        required: true
        type: string
      include-success-outputs:
        description: "Whether to include success outputs or not (true/false)"
        required: false
        default: "false"
        type: string
      downgrade-sqlalchemy:
        description: "Whether to downgrade SQLAlchemy or not (true/false)"
        required: false
        default: "false"
        type: string
      upgrade-sqlalchemy:
        description: "Whether to upgrade SQLAlchemy or not (true/false)"
        required: false
        default: "false"
        type: string
      upgrade-boto:
        description: "Whether to upgrade boto or not (true/false)"
        required: false
        default: "false"
        type: string
      downgrade-pendulum:
        description: "Whether to downgrade pendulum or not (true/false)"
        required: false
        default: "false"
        type: string
      force-lowest-dependencies:
        description: "Whether to force lowest dependencies for the tests or not (true/false)"
        required: false
        default: "false"
        type: string
      monitor-delay-time-in-seconds:
        description: "How much time to wait between printing parallel monitor summary"
        required: false
        default: 20
        type: number
      skip-providers-tests:
        description: "Whether to skip providers tests or not (true/false)"
        required: true
        type: string
      use-uv:
        description: "Whether to use uv"
        required: true
        type: string
      default-branch:
        description: "The default branch of the repository"
        required: true
        type: string
permissions:
  contents: read
jobs:
  tests:
    timeout-minutes: 65
    # yamllint disable rule:line-length
    name: "\
      ${{ inputs.test-scope == 'All' && '' || inputs.test-scope == 'Quarantined' && 'Qrnt' || inputs.test-scope }}\
      ${{ inputs.test-scope == 'All' && '' || '-' }}\
      ${{ inputs.test-group == 'providers' && 'prov' || inputs.test-group}}:\
      ${{ inputs.test-name }}${{ inputs.test-name-separator }}${{ matrix.backend-version }}:\
      ${{ matrix.python-version}}:${{ matrix.test-types.description }}"
    runs-on: ${{ fromJSON(inputs.runners) }}
    strategy:
      fail-fast: false
      max-parallel: 20
      matrix:
        python-version: "${{fromJSON(inputs.python-versions)}}"
        backend-version: "${{fromJSON(inputs.backend-versions)}}"
        test-types: ${{ fromJSON(inputs.test-types-as-strings-in-json) }}
        exclude: "${{fromJSON(inputs.excludes)}}"
    env:
      BACKEND: "${{ inputs.backend }}"
      BACKEND_VERSION: "${{ matrix.backend-version }}"
      DB_RESET: "true"
      DEBUG_RESOURCES: "${{ inputs.debug-resources }}"
      DOWNGRADE_SQLALCHEMY: "${{ inputs.downgrade-sqlalchemy }}"
      DOWNGRADE_PENDULUM: "${{ inputs.downgrade-pendulum }}"
      ENABLE_COVERAGE: "${{ inputs.run-coverage }}"
      EXCLUDED_PROVIDERS: "${{ inputs.excluded-providers-as-string }}"
      FORCE_LOWEST_DEPENDENCIES: "${{ inputs.force-lowest-dependencies }}"
      GITHUB_REPOSITORY: ${{ github.repository }}
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      GITHUB_USERNAME: ${{ github.actor }}
      INCLUDE_SUCCESS_OUTPUTS: ${{ inputs.include-success-outputs }}
      PLATFORM: "${{ inputs.platform }}"
      # yamllint disable rule:line-length
      JOB_ID: "${{ inputs.test-group }}-${{ matrix.test-types.description }}-${{ inputs.test-scope }}-${{ inputs.test-name }}-${{inputs.backend}}-${{ matrix.backend-version }}-${{ matrix.python-version }}"
      MOUNT_SOURCES: "skip"
      # yamllint disable rule:line-length
      PARALLEL_TEST_TYPES: ${{ matrix.test-types.test_types }}
      PYTHON_MAJOR_MINOR_VERSION: "${{ matrix.python-version }}"
      UPGRADE_BOTO: "${{ inputs.upgrade-boto }}"
      UPGRADE_SQLALCHEMY: "${{ inputs.upgrade-sqlalchemy }}"
      AIRFLOW_MONITOR_DELAY_TIME_IN_SECONDS: "${{inputs.monitor-delay-time-in-seconds}}"
      VERBOSE: "true"
      DEFAULT_BRANCH: "${{ inputs.default-branch }}"
      TOTAL_TEST_TIMEOUT: "3600"  # 60 minutes in seconds
    if: inputs.test-group == 'core' || inputs.skip-providers-tests != 'true'
    steps:
      - name: "Cleanup repo"
        shell: bash
        run: docker run -v "${GITHUB_WORKSPACE}:/workspace" -u 0:0 bash -c "rm -rf /workspace/*"
      - name: "Checkout ${{ github.ref }} ( ${{ github.sha }} )"
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd  # v6.0.2
        with:
          persist-credentials: false
      - name: "Make /mnt writeable"
        run: ./scripts/ci/make_mnt_writeable.sh
      - name: "Move docker to /mnt"
        run: ./scripts/ci/move_docker_to_mnt.sh
      - name: "Prepare breeze & CI image: ${{ matrix.python-version }}"
        uses: ./.github/actions/prepare_breeze_and_image
        with:
          platform: ${{ inputs.platform }}
          python: ${{ matrix.python-version }}
          use-uv: ${{ inputs.use-uv }}
          # We do not want to clean up /mnt here - it's been already done before preparing image
          make-mnt-writeable-and-cleanup: false
      - name: >
          Migration Tests: ${{ matrix.python-version }}:${{ env.PARALLEL_TEST_TYPES }}
        uses: ./.github/actions/migration_tests
        with:
          python-version: ${{ matrix.python-version }}
        # Any new python version should be disabled below via `&& matrix.python-version != '3.xx'` until the first
        # Airflow version that supports it is released - otherwise there's nothing to migrate back to.
        if: inputs.run-migration-tests == 'true' && inputs.test-group == 'core' && matrix.python-version != '3.14'
      - name: >
          ${{ inputs.test-group }}:${{ inputs.test-scope }} Tests ${{ inputs.test-name }} ${{ matrix.backend-version }}
          Py${{ matrix.python-version }}:${{ env.PARALLEL_TEST_TYPES }}
        env:
          TEST_GROUP: "${{ inputs.test-group }}"
          TEST_SCOPE: "${{ inputs.test-scope }}"
        run: ./scripts/ci/testing/run_unit_tests.sh "${TEST_GROUP}" "${TEST_SCOPE}"
      - name: "Post Tests success"
        uses: ./.github/actions/post_tests_success
        with:
          codecov-token: ${{ secrets.CODECOV_TOKEN }}
          python-version: ${{ matrix.python-version }}
        if: success()
      - name: "Post Tests failure"
        uses: ./.github/actions/post_tests_failure
        if: failure() || cancelled()