The recent upgrade to PostgreSQL 18 in CI (commit 27515d4) caused the
Setup Database job to fail because the runner's host pg_dump (v16) refuses
to dump a Postgres 18 server.
Replace tj-actions/pg-dump and tj-actions/pg-restore with docker run
commands using the postgres:18 image, ensuring the client version always
matches the server. This aligns with the fix already applied in the
internal cal repo.
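The replacement step looks roughly like this (host, user, database name, and secret name are illustrative assumptions, not taken from the actual workflow):

```yaml
# Dump with a client that always matches the postgres:18 server
- name: Dump database
  run: |
    docker run --rm --network host \
      -v "$PWD:/backup" \
      -e PGPASSWORD="${{ secrets.DB_PASSWORD }}" \
      postgres:18 \
      pg_dump -h localhost -U postgres -d calendso -Fc -f /backup/db.dump
```

Because the image tag pins the client version, a future server upgrade only requires bumping the tag.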
* fix: reuse existing Devin sessions in stale PR completion workflow
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* refactor: consolidate Devin session logic into reusable action
- Create .github/actions/devin-session composite action for session checking
- Update stale-pr-devin-completion.yml to use the new action
- Update cubic-devin-review.yml to use the new action
- Update devin-conflict-resolver.yml to use the new action
- Rename workflows to 'PR Labeled' with descriptive job names
The new action checks for existing Devin sessions by:
1. Looking for session URLs in PR body (for PRs created by Devin)
2. Searching PR comments for known Devin session patterns
3. Verifying session is active (working, blocked, or resumed status)
This eliminates duplicated session checking logic across all three workflows.
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* revert: restore original workflow names
Reverted workflow names back to descriptive names:
- 'Stale Community PR Devin Completion' (was 'PR Labeled')
- 'Devin PR Conflict Resolver' (was 'PR Labeled')
Descriptive names are the recommended convention, as they appear in the
GitHub Actions tab, status checks, and notifications.
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix: add active status check for PR body sessions
Addresses Cubic AI feedback (confidence 9/10): PR body sessions
now verify the session is active (working, blocked, or resumed)
before reusing, matching the behavior of comment-based session checks.
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* ci: optimize sparse-checkout to reduce cache size by ~230MB
Exclude additional large files from CI checkout that are not needed for builds/tests:
- docs/images/ (~90MB) - Documentation images
- packages/app-store/*/static/*.png,jpg,jpeg,gif (~140MB) - App store screenshots
These files are only needed for documentation rendering and app store UI display,
not for CI builds, linting, type checking, or tests.
SVG icons in app-store are preserved as they may be needed for the build.
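As a sketch, the exclusions above can be expressed with non-cone sparse-checkout patterns (the exact pattern list in the workflow may differ):

```yaml
- uses: actions/checkout@v4
  with:
    sparse-checkout-cone-mode: false
    sparse-checkout: |
      /*
      !docs/images/
      !packages/app-store/*/static/*.png
      !packages/app-store/*/static/*.jpg
      !packages/app-store/*/static/*.jpeg
      !packages/app-store/*/static/*.gif
```

Note that `*.svg` is deliberately absent from the exclusion list, so SVG icons remain available to the build.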
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* ci: exclude generated app-store static files from cache
The apps/web/public/app-store directory contains ~151MB of static files
copied from packages/app-store/*/static/ during build. These files are
regenerated during yarn install and don't need to be cached.
Combined with the sparse-checkout exclusions, this should significantly
reduce the git checkout cache size.
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* ci: fix sparse-checkout by limiting first checkout to .github only
The first checkout in the prepare job was doing a full checkout (~535 MB),
and then dangerous-git-checkout applied sparse-checkout. But the files from
the first checkout remained on disk because sparse-checkout doesn't remove
files that were already checked out.
By limiting the first checkout to only .github (which is needed to access
the local actions), we avoid downloading the full repo twice. The actual
sparse-checkout with exclusions is then applied by dangerous-git-checkout.
This should reduce the cache from ~504 MB to ~148 MB.
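In cone mode, restricting the first checkout to a single directory is a one-line change (a minimal sketch):

```yaml
- uses: actions/checkout@v4
  with:
    sparse-checkout: .github
```

Only `.github` and top-level files are materialized; the rest of the tree arrives later via the sparse-checkout with exclusions.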
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix: update cache-checkout to include sparse-checkout exclusions and fix pr.yml compatibility
Co-Authored-By: unknown <>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
This fixes cache propagation delay issues with Blacksmith's distributed cache.
The previous implementation used separate save/restore actions with
fail-on-cache-miss: true, which failed when the cache wasn't immediately
available after being saved by the Prepare job.
The new implementation uses actions/cache@v4 (combined save/restore)
with a fallback to do the checkout if cache miss, matching the pattern
used by cache-build and cache-db which don't have this issue.
Changes:
- Refactored cache-checkout action to use actions/cache@v4
- Added fallback checkout step when cache miss occurs
- Removed mode input (no longer needed)
- Updated all workflows to use the simplified cache-checkout action
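The combined save/restore pattern looks roughly like this (the cache path and fallback step are assumptions based on the description above):

```yaml
- name: Restore checkout cache
  id: checkout-cache
  uses: actions/cache@v4
  with:
    path: ./*
    key: git-checkout-${{ github.head_ref }}-${{ github.sha }}
- name: Checkout on cache miss
  if: steps.checkout-cache.outputs.cache-hit != 'true'
  uses: actions/checkout@v4
```

Because actions/cache@v4 both restores and saves, the first job to miss populates the cache for later jobs, with the plain checkout as a safety net against propagation delay.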
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* feat: implement git checkout caching to speed up CI workflows
- Create cache-checkout action to save/restore git working directory
- Update prepare job in pr.yml to cache checkout after dangerous-git-checkout
- Update all downstream workflows to restore from cache instead of checkout
- Pass commit-sha to all workflow calls for cache key consistency
This reduces the number of full git checkouts from 20+ per workflow run to just 1,
significantly improving CI performance, especially for large repos.
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* refactor: use actions/cache/restore directly instead of custom action
Remove sparse-checkout and use actions/cache/restore@v4 directly in all
downstream workflows. This eliminates the need for any git fetch operation
before restoring from cache, making the optimization more effective.
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* refactor: use github.event.pull_request.head.sha for cache key
- Remove commit-sha input from all downstream workflows
- Update cache-checkout action to use github.event.pull_request.head.sha directly
- Remove commit-sha output from prepare job
- Remove commit-sha from all workflow calls in pr.yml
- Keep sparse-checkout of .github to access the cache-checkout action
This eliminates the need to pass commit-sha around while still using
a reusable action for the cache restore logic.
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix: update cache key to include branch name and add cache cleanup
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix: use github.head_ref and github.sha for cache key
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix: add prefix delete before saving cache to clear previous caches
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* perf: add sparse-checkout to exclude example-apps and mp4 files
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix: restore get_sha step and commit-sha output that was accidentally removed
Co-Authored-By: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix: add trailing dash to cache key prefix to prevent accidental deletion of other branches' caches
The prefix-based cache deletion was using 'git-checkout-{branch}' which would
accidentally match and delete caches for branches with similar prefixes.
For example, branch 'feature' would delete caches for 'feature-2', 'feature-new', etc.
Adding a trailing '-' ensures exact branch matching:
- 'git-checkout-feature-' matches 'git-checkout-feature-abc123' (intended)
- 'git-checkout-feature-' does NOT match 'git-checkout-feature-2-def456' (correct)
Fixes both the cache-checkout action and the PR close cleanup workflow.
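A hedged sketch of the fix (the action's input name is an assumption; check the useblacksmith/cache-delete docs for the exact interface):

```yaml
# 'git-checkout-feature-' matches 'git-checkout-feature-<sha>' caches only,
# never 'git-checkout-feature-2-<sha>'
- uses: useblacksmith/cache-delete@v1
  with:
    key: git-checkout-${{ github.head_ref }}-
```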
Co-Authored-By: unknown <>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* refactor: detach yarn prisma generate from yarn-install action
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add run-prisma-generate input for backward compatibility
The yarn-install action now has a run-prisma-generate input that defaults
to true for backward compatibility. This ensures CI works correctly since
workflow files are pulled from the base branch (main) while actions are
pulled from the PR branch.
Workflows that have explicit yarn prisma generate steps now set
run-prisma-generate: false to avoid running it twice after merge.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: completely remove prisma generate from yarn-install action
Remove all prisma-related code from the yarn-install action:
- Remove the run-prisma-generate input parameter
- Remove the Generate Prisma client step
Remove explicit yarn prisma generate steps from all workflow files.
Prisma generation is now handled by the postinstall script in package.json
which runs 'turbo run post-install' after yarn install. This triggers
@calcom/prisma#post-install which runs 'prisma generate && prisma format'.
This makes the yarn-install action have no knowledge of Prisma at all,
as requested.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add generic post-install step to ensure generated files are up-to-date
When all caches are hit, yarn install completes quickly without running
the postinstall script. This means generated files (like Prisma types)
may not be created.
Add a generic 'turbo run post-install' step that runs after yarn install
to ensure all post-install tasks complete regardless of cache state.
This keeps the action from having Prisma-specific knowledge while
ensuring the post-install pipeline runs.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: remove post-install step from yarn-install action
Remove the turbo run post-install step as requested. The yarn-install
action now only handles yarn install with caching, with no knowledge
of post-install tasks or Prisma generation.
Let CI show what fails without explicit post-install handling.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Add back generated Prisma artifacts (Prisma Client, Zod types, Kysely types, enums) where needed
* Add generated Prisma artifacts (Prisma Client, Zod types, Kysely types, enums) where needed for E2E
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* ci: skip yarn-install in setup-db when DB cache exists
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: skip entire setup-db job when DB cache exists
This saves ~35-40s per workflow run by:
1. Adding DB cache lookup to prepare job (lookup-only mode)
2. Skipping the entire setup-db job when cache exists (avoids postgres container startup)
3. Keeping the yarn-install skip in setup-db.yml as a fallback
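The lookup-only check in the prepare job can be sketched as follows (path and key are illustrative):

```yaml
- name: Check for existing DB cache
  id: db-cache
  uses: actions/cache/restore@v4
  with:
    lookup-only: true   # check existence without downloading
    path: db-dump
    key: db-${{ hashFiles('packages/prisma/schema.prisma', 'packages/prisma/migrations/**') }}
```

The job then exposes `steps.db-cache.outputs.cache-hit` as a `db-cache-hit` output for downstream jobs.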
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: simplify DB cache key and skip yarn-install on cache hit
- Remove commit SHA and PR number from cache key for better reuse
- Cache key now only depends on prisma schema/migrations hash
- Skip yarn-install in setup-db when DB cache exists (saves ~17s)
- Revert changes to pr.yml required job logic
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: move DB cache lookup to prepare step and skip cache-db on hit
- Move cache lookup to pr.yml prepare step for early detection
- Pass db-cache-hit output to setup-db.yml as input
- Skip yarn-install and cache-db action when cache exists
- setup-db job still runs and reports success (no required job changes)
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: skip container initialization when DB cache exists
- Rename setup-db to setup-db-seed (only runs on cache miss)
- Add lightweight setup-db wrapper job that always returns success
- When cache exists: setup-db-seed is skipped, setup-db returns success
- When cache miss: setup-db-seed runs with postgres, setup-db checks result
- This avoids postgres container startup (~17-20s) when cache exists
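A minimal sketch of the two-job shape (job and output names are assumptions):

```yaml
setup-db-seed:
  needs: prepare
  if: needs.prepare.outputs.db-cache-hit != 'true'
  uses: ./.github/workflows/setup-db.yml

setup-db:
  needs: [prepare, setup-db-seed]
  if: always()
  runs-on: ubuntu-latest
  steps:
    - name: Fail only if the seed job actually failed
      run: |
        [ "${{ needs.setup-db-seed.result }}" != "failure" ] || exit 1
```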
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: add cache-db-key action for consistent cache key generation
- Create cache-db-key action to generate DB cache key in one place
- Update cache-db action to use cache-db-key for consistency
- Update pr.yml prepare step to use cache-db-key for lookup
- This ensures the lookup key always matches the actual cache key
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: remove runner.os from cache-db-key
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: simplify setup-db by removing wrapper job
- Remove setup-db wrapper job and rename setup-db-seed back to setup-db
- setup-db is now skipped when cache exists (no container startup)
- Update required check to allow skipped for setup-db when cache hit
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: revert to two-job pattern for setup-db
- setup-db-seed: only runs on cache miss (has postgres container)
- setup-db: wrapper that always runs and reports success
- This ensures downstream jobs (Tests, Production builds) are not skipped
- Reverts required check to simple setup-db.result != 'success'
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* ci: single setup-db job with conditional steps on cache hit
- Removed two-job pattern (setup-db-seed + wrapper)
- Single setup-db job calls setup-db.yml reusable workflow
- Pass DB_CACHE_HIT input from prepare job's db-cache-hit output
- Skip yarn-install and cache-db steps when cache hit (~17s savings)
- Container still starts (will optimize in follow-up)
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix(ci): skip yarn install in deps job when cache is hit
The Install Dependencies / Yarn install & cache step in pr.yml is primarily
for populating the cache when nothing is cached. When cache keys are found,
we can skip the actual yarn install and let the workflow carry on quickly.
This adds a skip-install-if-cache-hit parameter to the yarn-install action
that allows skipping the install when node_modules cache is hit. This is
enabled for the deps job in yarn-install.yml but not for other jobs that
actually need node_modules for their work.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix(ci): also skip playwright install in deps job when cache is hit
Extends the skip-install-if-cache-hit parameter to the yarn-playwright-install
action as well, so both yarn install and playwright install are skipped in the
deps job when their respective caches are hit.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix(ci): use lookup-only cache check to avoid downloading 1.2GB
When skip-install-if-cache-hit is true, use actions/cache/restore@v4 with
lookup-only: true to check if caches exist without downloading them. This
avoids downloading ~1.2GB of cache data when we just want to verify caches
exist in the deps job.
The flow is now:
1. If skip-install-if-cache-hit is true, run lookup-only checks for all caches
2. If all caches hit, skip the entire restore + install flow (no downloads)
3. If any cache misses, fall back to normal restore + install + save behavior
This optimization only applies when skip-install-if-cache-hit is set to true,
so other jobs that need node_modules continue to work normally.
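Sketched below (cache paths and keys are placeholders; the real action applies this pattern to all three caches):

```yaml
- name: Lookup-only cache check
  id: cache-check
  if: inputs.skip-install-if-cache-hit == 'true'
  uses: actions/cache/restore@v4
  with:
    lookup-only: true
    path: node_modules
    key: node-modules-${{ hashFiles('yarn.lock') }}
- name: Yarn install
  if: inputs.skip-install-if-cache-hit != 'true' || steps.cache-check.outputs.cache-hit != 'true'
  shell: bash
  run: yarn install
```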
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Apply suggestion from @keithwillcode
* Apply suggestion from @keithwillcode
* Apply suggestion from @keithwillcode
* Apply suggestion from @keithwillcode
* fix(ci): address @cubic-dev-ai feedback on cache conditions
1. yarn-install: Add 'all-caches-check' step to compute whether all three
caches hit (not just node_modules). This ensures we only bail out when
everything is cached, matching the PR description.
2. yarn-playwright-install: Fix backward compatibility for install step.
In default mode, check playwright-cache.outputs.cache-hit (the restore step).
In skip mode, check playwright-cache-check.outputs.cache-hit (lookup-only).
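The combining step might look like this (step ids are assumptions):

```yaml
- id: all-caches-check
  shell: bash
  run: |
    if [ "${{ steps.node-cache-check.outputs.cache-hit }}" = "true" ] \
       && [ "${{ steps.yarn-cache-check.outputs.cache-hit }}" = "true" ] \
       && [ "${{ steps.playwright-cache-check.outputs.cache-hit }}" = "true" ]; then
      echo "all-hit=true" >> "$GITHUB_OUTPUT"
    else
      echo "all-hit=false" >> "$GITHUB_OUTPUT"
    fi
```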
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* chore: Delete cache-build cache entries on PR close
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: Extract cache-build key generation into reusable action
- Create .github/actions/cache-build-key to generate cache keys
- Update cache-build action to use the shared key action
- Update delete workflow to use the shared key action
- Checkout PR head SHA in delete workflow for correct hash computation
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: Remove PR head SHA checkout per review feedback
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: Use prefix-based cache deletion for simpler cleanup
- Use useblacksmith/cache-delete prefix mode to delete all caches matching branch
- Remove dependency on cache-build-key action for deletion
- No checkout needed since we're just using the branch name as prefix
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: Simplify cache key by removing Linux and node version segments
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* fix: align env vars between Production Builds and E2E workflows for turbo cache sharing
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: move E2E-specific env vars to step-level for turbo cache alignment
Instead of adding env vars to production-build-without-database.yml (which
caused DB calls during build), move the E2E-specific env vars from workflow-level
to step-level in e2e.yml. This ensures the cache-build step has the same env
context in both workflows, enabling turbo remote cache sharing.
Env vars moved to step-level:
- DATABASE_DIRECT_URL (to cache-db and Run Tests steps)
- E2E_TEST_MAILHOG_ENABLED (to Run Tests step)
- EMAIL_SERVER_* (to Run Tests step)
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: restore production build cache with stable key using actions/cache
- Restore artifact caching for .next, dist, and .turbo directories
- Use actions/cache@v4 (accelerated by Blacksmith on their runners)
- Design stable cache key scoped to branch:
- Includes branch name (head_ref for PRs, ref_name for pushes)
- Hashes yarn.lock for dependency changes
- Hashes source files in apps/web and packages/*/src
- Excludes generated directories (prisma/zod, kysely) that caused instability
- Add restore-keys for partial cache hits
- Only run yarn build on cache miss
- Revert e2e.yml env var changes (no longer needed with artifact caching)
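The key described above can be sketched as follows (patterns are illustrative; hashFiles accepts multiple patterns, and the '!' negations are assumed to be supported here):

```yaml
- uses: actions/cache@v4
  with:
    path: |
      apps/web/.next
      .turbo
    key: build-${{ github.head_ref || github.ref_name }}-${{ hashFiles('yarn.lock', 'apps/web/**/*.ts', 'apps/web/**/*.tsx', 'packages/*/src/**', '!packages/prisma/zod/**', '!packages/kysely/**') }}
    restore-keys: |
      build-${{ github.head_ref || github.ref_name }}-
```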
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: simplify restore-keys to single fallback
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: remove restore-keys entirely for exact cache matching only
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: use broader package pattern with exclusions for generated prisma files
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add exclusions for prisma/generated and prisma/client directories
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Apply suggestion from @cubic-dev-ai[bot]
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
* fix: add .json and .css patterns to cache hash for config and style changes
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
* chore: migrate GitHub workflows from Buildjet to Blacksmith
- Replace buildjet-*vcpu-ubuntu-2204 runners with blacksmith-*vcpu-ubuntu-2204
- Replace buildjet/cache@v4 with actions/cache@v4
- Replace buildjet/setup-node@v4 with actions/setup-node@v4
- Replace buildjet/cache-delete@v1 with useblacksmith/cache-delete@v1
- Rename delete-buildjet-cache.yml to delete-blacksmith-cache.yml
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* chore: bump Blacksmith runners to Ubuntu 24.04
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Reduce 16vcpu to 4vcpu for the API v2 E2E
* Remove 8vcpu usage
* chore: switch Blacksmith runners to ARM architecture
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* chore: switch Blacksmith runners back to AMD (remove -arm suffix)
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: improve test cleanup to cover all bookings including reassignment-created ones
- Changed afterEach cleanup to find all bookings by eventTypeId instead of tracking bookingIds
- This ensures bookings created indirectly by managedEventManualReassignment are also cleaned up
- Removed problematic prefix-based deleteMany calls that could affect parallel tests
- Fixes idempotencyKey collision errors on high-parallelism CI runners
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: improve icons screenshot test stability with deviceScaleFactor and increased threshold
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: cap Playwright workers to 4 to match vCPU allocation on Blacksmith runners
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* refactor: Remove buildjet cache from cache-build action
The buildjet cache was causing cache misses due to unstable hashFiles()
keys that included generated Prisma files. Since we now use Turbo Remote
Caching, the buildjet cache is redundant and was causing more problems
than it solved.
This simplifies the cache-build action to just run yarn build, relying
on Turbo Remote Caching for build artifact caching.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: Preserve original name and description in cache-build action
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* refactor: Add dedicated setup-db job to eliminate cache race condition
- Create new setup-db.yml workflow that runs cache-db action
- Update pr.yml to add setup-db job that runs before all E2E jobs
- Update integration-test, e2e, e2e-api-v2, e2e-app-store, e2e-embed,
and e2e-embed-react jobs to depend on setup-db instead of build-api-v1
- Add setup-db to required job needs list and result check
This eliminates the race condition where multiple jobs could try to
write to the same database cache simultaneously.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: Remove DB parts from api-v1-production-build.yml
The database setup is now handled by the dedicated setup-db job,
so the postgres service and cache-db action are no longer needed
in the API v1 build workflow.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Apply suggestion from @keithwillcode
* Made integration tests dependency consistent
* refactor: Remove unnecessary env vars from setup-db.yml
Only keep the env vars needed for database setup:
- CALENDSO_ENCRYPTION_KEY (for db-seed)
- DATABASE_URL / DATABASE_DIRECT_URL (for database connection)
- TURBO_TOKEN / TURBO_TEAM (for turbo caching)
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: Add back E2E_TEST_CALCOM_QA_* env vars needed by db-seed
These env vars are used by scripts/seed.ts to create the QA user
with Google Calendar credentials during database seeding.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: Restore postgres service and cache-db to api-v1-production-build.yml
The API v1 build still needs a database to run properly. Restored the
postgres service and cache-db action, and added dependency on setup-db
so it waits for the database cache to be created first.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* E2E for API v2 no longer waiting on API v2 build
* Reordering the jobs by dependencies
* add GOOGLE_API_CREDENTIALS env var to setup db
* Added back all env vars to setup-db
* fix: Include commit SHA in cache-db key to invalidate cache on new commits
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: Strip back setup-db.yml env vars to minimal set needed for db-seed
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* update
* Add cache-db action to e2e workflow
* fix: remove cache-db from prepare job (requires Postgres service)
The prepare job doesn't have a Postgres service container, so cache-db
was failing with 'connection refused' when trying to run psql.
cache-db should only run in the e2e shards which have the Postgres service.
Co-Authored-By: anik@cal.com <adhabal2002@gmail.com>
* update
* Update e2e.yml
* Add newline at end of e2e.yml file
* fix
* Fix indentation in e2e.yml for retention-days
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* feat: Add official Docker support
* Adding scarf data support
* Comment out pushing the image for now
* Getting env vars ported
* Renamed the job to Release instead of Remote Release
* Move the Dockerfile and docker-compose files to monorepo root
* Remove Slack notifications for failures for now
* Show database container status
* Setting env directly for testing
* Removing env var
* Adding container logs
* Change the volume
* fixing file paths
* Double-quotes wrecking things
* Fixing /calcom paths
* Update permission for scripts
* Fixed the Slack notification
* Updated Slack notification emojis
* Checking the workflow_dispatch input for checkout
* Commenting out the tag checkout for now since our new Docker files are not in main
* Added .dockerignore
* Remove the scarf data export
* Removed extra empty line
* refactor: Create reusable Docker build action for AMD64 and ARM support
- Extract common Docker build logic into reusable composite action
- Create separate workflows for AMD64 and ARM builds that run in parallel
- Both workflows use the same reusable action with platform-specific parameters
- ARM builds use ubuntu-24.04-arm runner and add -arm suffix to tags
- AMD64 builds use buildjet-4vcpu-ubuntu-2204 runner
- Remove old monolithic release-docker.yaml workflow
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Revert "refactor: Create reusable Docker build action for AMD64 and ARM support"
This reverts commit 66d2c1741e.
* refactor: Add parallel AMD64 and ARM Docker builds using reusable action
- Create reusable composite action in .github/actions/docker-build-and-test
- Extract common Docker build, test, and push logic into the action
- Update release-docker.yaml to have two parallel jobs:
- release-amd64: Builds for linux/amd64 on buildjet-4vcpu-ubuntu-2204
- release-arm: Builds for arm64 on ubuntu-24.04-arm with -arm suffix
- Both jobs use the same reusable action with platform-specific parameters
- Maintains existing functionality while enabling parallel builds
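Roughly (the composite action's input names are hypothetical):

```yaml
jobs:
  release-amd64:
    runs-on: buildjet-4vcpu-ubuntu-2204
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/docker-build-and-test
        with:
          platform: linux/amd64
          tag-suffix: ""
  release-arm:
    runs-on: ubuntu-24.04-arm
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/docker-build-and-test
        with:
          platform: linux/arm64
          tag-suffix: "-arm"
```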
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Update the ARM action to run on buildjet 4vCPU ARM
* Move the Dockerfile to apps/web
* Revert "Move the Dockerfile to apps/web"
This reverts commit fd91ebe5b4.
* Revert the arm machine back off build jet
* Use node 20
* Set push to true
* Remove Dockerfile.render
* Removed commented Docker lines
* Fixed README
* Updated README for Docker support
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
* feat: upgrade Prisma to 6.16.0 with no-rust engine
- Update Prisma packages to 6.16.0
- Add PostgreSQL adapter dependency
- Configure engineType: 'client' and provider: 'prisma-client' in schema
- Update Prisma client instantiation with PostgreSQL adapter
- Remove binaryTargets from generators (not needed with library engine)
- Fix schema view issue by removing @id decorator from BookingTimeStatusDenormalized
- Fix ESLint warning by removing non-null assertion
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Web app running but types wrecked
* web app running but build and type issues
* Removed the connection pool
* Fixed zod type issue
* Fixed types in booking reference extension
* Fixed test issues
* Type checks passing it seems
* Using cjs as moduleFormat
* Fixing Prisma undefined
* fix: update prismock initialization for Prisma 6.16 compatibility
- Add @prisma/internals dependency for getDMMF()
- Restructure prismock initialization to use createPrismock() with DMMF
- Create Proxy that's returned from mock factory for proper spy support
- Fixes 89 failing unit tests with 'Cannot read properties of undefined (reading datamodel)' error
- Based on workaround from prismock issue #1482
All unit tests now pass (375 test files, 3323 tests passed)
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: cast serviceAccountKey to Prisma.InputJsonValue in bookingScenario.ts
- Apply type cast at lines 2493 and 2535
- Fixes type errors from Prisma 6.16 upgrade
- Follows established pattern from delegationCredential.ts
- Add eslint-disable for pre-existing any types
- Rename unused appStoreLookupKey parameter to satisfy lint
- All 3323 tests passing
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* chore: remove whitespace-only lines from bookingScenario.ts
- Remove blank lines where eslint-disable comments were replaced
- Cleanup from pre-commit hook formatting
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Update is-prisma-available-check.ts
* fix: remove datasources config when using Prisma Driver Adapters
- Update customPrisma to create new adapter when datasources URL is provided
- Remove datasources config from API v2 Prisma services (already in adapter)
- Fixes 'Custom datasource configuration is not compatible with Prisma Driver Adapters' error
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: use Pool instances for PrismaPg adapters in index.ts
- Create Pool instance before passing to PrismaPg adapter
- Update customPrisma to create Pool for custom connection strings
- Matches working pattern from API v2 services
- Fixes 'Invalid `prisma.$queryRawUnsafe()` invocation' error
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Not using queryRawUnsafe
* Trying anything at this point
* Make sure the DB is ready first
* Don't auto run migrations in CI mode
* Revert "Make sure the DB is ready first"
This reverts commit 2b20bd45c9.
* Dynamic import of prisma
* Commenting where it seems to break
* Backwards compatibility for API v2
* fix: add explicit type annotations for map callbacks in API v2
- Add type annotation for map parameter in memberships.repository.ts
- Add type annotation for map parameter in stripe.service.ts
- Fixes implicit 'any' type errors from stricter Prisma 6.16.0 type inference
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add explicit type annotations for API v2 map callbacks
- users.repository.ts:292: add Profile & { user: User } type
- memberships.service.ts:19-20: add Membership type to filter callbacks
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add explicit type annotation for attributeToUser in organizations-users.repository.ts
- organizations-users.repository.ts:63: add AttributeToUser with nested relations type
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add explicit Membership type annotations in teams.repository.ts
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: use API v2 dedicated Prisma client to support adapter in PrismaClientOptions
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Running API v2 build commands together so they all get the space size var
* Fixing Maximum call stack size exceeded error
* fixed type issues
* Trying to make the seed more stable
* Revert "Trying to make the seed more stable"
This reverts commit 1fd4495e6a.
* Fixed path to prisma client
* Fixed type check
* Fix eslint warnings
* fix: externalize @prisma/adapter-pg and pg in platform-libraries Vite config
- Add @prisma/adapter-pg and pg to external dependencies list
- Add corresponding globals for these packages
- Fix Prisma client aliases to point to packages/prisma/client instead of node_modules
- Add Node.js resolve conditions to prefer Node.js exports
- Keep commonjsOptions.include for proper CommonJS transformation
- Add eslint-disable for __dirname in Vite config file
- Remove problematic prettier/prettier eslint comment
This fixes the 'Extensions.defineExtension is unable to run in this browser environment' error when running yarn generate-swagger in apps/api/v2 after upgrading to Prisma v6.16 with the no-rust engine approach.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: update Prisma imports in API v2 services to use package path
- Change imports from '../../../generated/prisma/client' to '@calcom/prisma/client'
- Fixes CI error: Cannot find module '../../../../../packages/prisma/generated/prisma/client.ts'
- Aligns with backwards compatibility re-export structure after Prisma v6.16 upgrade
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: remove .ts extension from Prisma client path mapping in tsconfig
- Remove file extension from @calcom/prisma/client path mapping
- Fixes runtime error: Cannot find module '../../../../../packages/prisma/generated/prisma/client.ts'
- TypeScript path mappings should not include file extensions per best practices
- Allows Node.js to correctly resolve to .js files at runtime
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
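The corrected mapping might look like the following (the relative path is illustrative; the key point is the absence of the `.ts` extension):

```json
{
  "compilerOptions": {
    "paths": {
      "@calcom/prisma/client": ["../../packages/prisma/generated/prisma/client"]
    }
  }
}
```

TypeScript appends its own extensions (`.ts`, `.d.ts`) during path resolution, and at runtime Node resolves the emitted `.js` file, so hard-coding `.ts` in the mapping breaks runtime resolution.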
* fix: resolve Prisma 6.16.0 type incompatibilities in bookingScenario tests
- Changed InputPayment.data type from PaymentData to Prisma.InputJsonValue
- Changed createCredentials key parameter from JsonValue to InputJsonValue
- Removed unused PaymentData type definition
- Resolves type errors at lines 709 and 1088 without using 'as any' casts
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: remove non-existent Watchlist fields from test fixtures
- Remove createdById from isLockedOrBlocked.test.ts (lines 15, 20)
- Remove severity and createdById from _post.test.ts (line 110)
- These fields don't exist in Watchlist model schema after Prisma 6.16.0 upgrade
- Resolves TS2353 errors without using 'as any' casts
Relates to PR #23816
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: api v2 imports generated prisma and platform libraries
* fix: resolve type errors from Prisma 6.16 upgrade
- Add missing markdownToSafeHTML import in AppCard.tsx
- Fix organizationId null handling in fresh-booking.test.ts
- Remove non-existent createdById field from Watchlist test utils
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Put back some external rollups
* Added back the resolve conditions
* Stop using Pool directly
* chore: remove prisma bookingReferenceExtension and update calls
* fix: organizations-admin-not-team-member-event-types.e2e-spec.ts
* chore: bring back POOL in api v2 prisma clients
* chore: remove Pool but await connect
* fixup! chore: remove Pool but await connect
* chore: bring back Pool on all clients
* chore: end pool manually
* chore: test with pool max 1
* chore: e2e test prisma max pool of 1 connection
* chore: give more control over pool for prisma module with env
* remove pool from base prisma client
* chore: prisma client in libraries use pool
* Fixed types
* chore: log pool events and improve pooling
* Fixing some types and tests
* Changing the parsing of USE_POOL
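A boolean flag like `USE_POOL` is typically parsed along these lines (hypothetical helper; the repo's actual parsing may accept a different set of truthy values):

```typescript
// Treat "1", "true", and "yes" (case-insensitive, whitespace-trimmed) as
// true; anything else, including an unset variable, as false.
function parseBooleanEnv(value: string | undefined): boolean {
  if (value === undefined) return false;
  return ["1", "true", "yes"].includes(value.trim().toLowerCase());
}
```

It would be called as `parseBooleanEnv(process.env.USE_POOL)` at client setup.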
* fix: ensure Prisma client is connected before seeding to prevent transaction errors
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* chore: adjust pools
* chore: add process.env.USE_POOL to libraries vite config
* fix: v1 _patch reference check bookingRef on the booking find
* fix: v1 get references deleted null for system admin
* test: add integration tests for bookingReference soft-delete behavior
- Add bookingReference.integration-test.ts to test repository methods
- Add handleDeleteCredential.integration-test.ts to test credential deletion cascade
- Add booking-references.integration-test.ts for API v1 integration tests
- All tests verify soft-delete behavior without using mocks
- Tests use real database operations to ensure soft-deleted records persist
- Cover scenarios: replacing references, credential deletion, querying with filters
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: convert booking-references test to actual API endpoint testing
- Modified _get.ts to export handler function for testing
- Refactored integration test to call API handler instead of directly testing Prisma
- Added timestamps to test data to avoid conflicts
- Tests now verify API layer correctly filters soft-deleted references
- All 4 tests passing
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add explicit prisma.$connect() call to seed-insights script
With Prisma 6.16 and the PostgreSQL adapter, scripts need to explicitly call $connect() before running database operations to ensure the connection pool is properly initialized. This prevents 'Transaction already closed' errors.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add $connect() to main() execution in seed-insights
Both main() and createPerformanceData() entry points need explicit prisma.$connect() calls with the Prisma 6.16 PostgreSQL adapter.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
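The shape of that fix can be sketched as a wrapper that connects before the work and disconnects afterwards, even on failure (hypothetical helper; `ClientLike` stands in for the `$connect`/`$disconnect` surface of `PrismaClient`):

```typescript
type ClientLike = {
  $connect(): Promise<void>;
  $disconnect(): Promise<void>;
};

// Connect explicitly so the adapter's connection pool is initialized
// before any query runs, then disconnect even if the work throws.
async function withConnection<T>(client: ClientLike, work: () => Promise<T>): Promise<T> {
  await client.$connect();
  try {
    return await work();
  } finally {
    await client.$disconnect();
  }
}
```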
* fix: always use connection pool for Prisma PostgreSQL adapter
Enable connection pooling by default for the Prisma adapter to prevent
transaction state issues during seed operations. Without a pool, each
operation creates a new connection which can lead to 'Transaction already
closed' errors during heavy database operations like seeding.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Revert "fix: always use connection pool for Prisma PostgreSQL adapter"
This reverts commit 6724bb08e4.
* fix: enable connection pool for db-seed in cache-db action
Set USE_POOL=true when running yarn db-seed to use connection pooling
with the Prisma PostgreSQL adapter. This prevents 'Transaction already
closed' errors during seeding by maintaining stable database connections.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add safety check for undefined ownerForEvent in seed script
Prevent 'Cannot read properties of undefined' error when orgMembersInDBWithProfileId
is empty. This can happen if organization members fail to create or when there's a
duplicate constraint violation causing early return.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
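A guard of the kind described might look like this (names are hypothetical stand-ins for the seed script's variables):

```typescript
// Return undefined for an empty member list instead of indexing blindly
// and later dereferencing properties on undefined.
type OrgMember = { profileId: number };

function pickOwnerForEvent(members: OrgMember[], index: number): OrgMember | undefined {
  if (members.length === 0) return undefined;
  return members[index % members.length];
}
```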
* fix: v1 _patch reference check bookingRef
* fix: increase pool size and add timeout settings to prevent transaction errors
- Increase max connections from 5 to 10
- Add connectionTimeoutMillis: 30000 (30 seconds)
- Add statement_timeout: 60000 (60 seconds)
These settings help prevent 'Unknown transaction status' errors during
heavy database operations like seeding by giving transactions more time
to complete and allowing more concurrent connections.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* Revert "fix: increase pool size and add timeout settings to prevent transaction errors"
This reverts commit 148264f1f1.
* fix: remove standalone execution in seed-app-store to prevent premature disconnect
The seed-app-store.ts file had a standalone main() call at the bottom
that would execute immediately when imported, including a prisma.$disconnect()
in its .finally() block.
This caused issues because:
1. seed.ts imports and calls mainAppStore()
2. The import triggers the standalone main() execution
3. This standalone execution disconnects prisma after completion
4. seed.ts then tries to call mainHugeEventTypesSeed() but prisma is disconnected
5. This leads to 'Unknown transaction status' errors
Fixed by removing the standalone execution since mainAppStore() is already
called programmatically from seed.ts which manages the connection lifecycle.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: use require.main check to prevent premature disconnect when imported
Added require.main === module check so seed-app-store.ts:
- Runs standalone with proper connection management when executed directly
via 'yarn seed-app-store' or 'ts-node seed-app-store.ts'
- Does NOT run standalone when imported as a module by seed.ts,
preventing premature prisma disconnect
This fixes 'Unknown transaction status' errors while maintaining
backward compatibility for direct execution.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
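The guard is extracted below as a pure predicate so it can be exercised without Node's module globals (a sketch; `seedAppStore` is a stand-in name for the file's entry function):

```typescript
// `require.main === module` is true only in the process entry point;
// an import from seed.ts leaves it false, so no side effects run.
function runsStandalone(mainModule: unknown, currentModule: unknown): boolean {
  return mainModule !== undefined && mainModule === currentModule;
}

async function seedAppStore(): Promise<void> {
  /* ...seed the App table and app metadata... */
}

// In seed-app-store.ts the call site would read:
//   if (runsStandalone(require.main, module)) {
//     seedAppStore().catch((err) => { console.error(err); process.exit(1); });
//   }
```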
* fix: seed apps before creating users to prevent foreign key constraint violation
Reordered seeding operations to call mainAppStore() before main() because:
- main() creates users with credentials that reference apps via appId foreign key
- mainAppStore() seeds the App table with app records
- Apps must exist before credentials can reference them
This fixes the 'Foreign key constraint violated on Credential_appId_fkey' error
that occurred when creating credentials before the apps they reference existed.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
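The ordering constraint — rows must exist before rows that reference them via a foreign key — can be stated as a small validity check over a seed plan (hypothetical helper, not code from the repo):

```typescript
// A plan is valid if every step's dependencies were seeded earlier.
// E.g. credentials depend on apps via the appId foreign key.
function validateSeedOrder(plan: string[], deps: Record<string, string[]>): boolean {
  const seen = new Set<string>();
  for (const step of plan) {
    for (const dep of deps[step] ?? []) {
      if (!seen.has(dep)) return false; // dependency not yet seeded
    }
    seen.add(step);
  }
  return true;
}
```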
* Apply suggestion from @cubic-dev-ai[bot]
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
* Removing functional changes of deleted: null
* Apply suggestion from @keithwillcode
* refactor: move seedAppData call to bottom of main() in seed.ts
Moved seedAppData() call from seed-app-store.ts to the bottom of main()
in seed.ts to ensure the 'pro' user is created before attempting to
create routing form data for them.
Changes:
- Exported seedAppData function from seed-app-store.ts
- Removed seedAppData() call from the main() export in seed-app-store.ts
- Added seedAppData() call at the bottom of main() in seed.ts
- Updated standalone execution in seed-app-store.ts to still call
seedAppData() when run directly via 'yarn seed-app-store'
This ensures proper ordering: apps seeded → users created → routing
form data created for existing users.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* refactor: move routing form seeding from seed-app-store.ts to seed.ts
Moved the routing form seeding logic (previously in seedAppData function)
from seed-app-store.ts to be inline at the bottom of main() in seed.ts.
This ensures the 'pro' user is created before attempting to create routing
form data for them.
Changes:
- Removed seedAppData function and seededForm export from seed-app-store.ts
- Removed import of seedAppData from seed.ts
- Added routing form seeding logic inline at bottom of main() in seed.ts
Seeding order: apps → users (including 'pro') → routing forms
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* fix: add deleted: null filter to bookingReference update operations
- Add deleted: null filter to API v1 PATCH endpoint to prevent updating soft-deleted booking references
- Add deleted: null filter to DailyVideo updateMeetingTokenIfExpired and setEnableRecordingUIAndUserIdForOrganizer
- Add comprehensive test coverage for PATCH endpoint soft-delete behavior
- Tests verify that soft-deleted booking references cannot be updated
- Tests verify that only active booking references can be updated successfully
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
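The predicate behind those `deleted: null` filters can be sketched in plain TypeScript (illustrative types; only references whose `deleted` timestamp is unset count as active):

```typescript
// Mirrors a Prisma `where: { deleted: null }` clause applied in memory.
type BookingRef = { id: number; deleted: Date | null };

function activeReferences(refs: BookingRef[]): BookingRef[] {
  return refs.filter((ref) => ref.deleted === null);
}
```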
* revert: remove deleted: null filters to preserve existing functionality
Per @keithwillcode's feedback, reverting the soft-delete filtering changes to preserve existing functionality in this PR. This PR should focus only on the Prisma upgrade itself.
- Reverted API v1 PATCH endpoint change
- Reverted DailyVideo adapter changes (updateMeetingTokenIfExpired and setEnableRecordingUIAndUserIdForOrganizer)
- Removed test file that was added for soft-delete behavior testing
Addresses comments:
- https://github.com/calcom/cal.com/pull/23816#discussion_r2448854197
- https://github.com/calcom/cal.com/pull/23816#discussion_r2448860594
- https://github.com/calcom/cal.com/pull/23816#discussion_r2448860833
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* test: restore and update booking reference tests to match existing functionality
Updated tests to verify existing behavior where PATCH endpoint can update
booking references regardless of their deleted status. This matches the
current implementation after reverting the deleted: null filters.
Changes:
- Restored test file that was previously deleted
- Updated PATCH tests to expect successful updates of soft-deleted references
- Renamed test suite to 'Existing functionality' to clarify intent
- Tests now verify that the PATCH endpoint preserves existing behavior
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
* test: rename booking-references test to integration-test
The test requires a database connection and should run in the integration
test job, not the unit test job. Renamed from .test.ts to .integration-test.ts
to match the repository's testing conventions.
Co-Authored-By: keith@cal.com <keithwillcode@gmail.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: cal.com <morgan@cal.com>
Co-authored-by: Morgan <33722304+ThyMinimalDev@users.noreply.github.com>
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
* Added a log for pull_request
* Added labels logging
* Using labels straight from event PR object
* Converting to JSON
* Switching to use payload
* Fixed issue with undefined pr
* Fixed non-mapping issue
* Removed the types
* Added another cache key segment using the head commit sha
* Added separate workflow for labeled action
* Fixed syntax error
* Fixing payload issue
* Added log
* logging full object
* Put e2e back in the names to help find them
* Limited logging
* fix: v2 event-types versioning
* test: old v2 event-types request with VERSION_2024_06_11
* test: old v2 event-types request with VERSION_2024_04_15
---------
Co-authored-by: Keith Williams <keithwillcode@gmail.com>
* chore: Removed duplicate production build
* fix references
* Move env vars to top level of e2e
* job renaming
* Added env vars to top level of all e2e jobs
* Removed part of cache key that causes issues and is moot
* Using buildjet hardware so the caching works
* clean up
* Adding check for cache hit
* Adding a separate install step first
* Put the restore cache steps back
* Revert the uses type for restoring cache
* Added step to restore nm cache
* Removed the cache-hit check
* Comments and naming
* Removed extra install command
* Updated the name of the linting step to be more clear
* chore: bump node version to v18
* fix(web): support node 19 as accepted
* fix(web): update boxyhq/saml-jackson to 1.8.1
* Drop support for Node 16.x
* Removed Node 19 pending @azure/msal-node
---------
Co-authored-by: Alex van Andel <me@alexvanandel.com>
Co-authored-by: zomars <zomars@me.com>