fix: correct deploy-pages SHA and improve preview cleanup reliability #304
Conversation
The deploy-pages action was pinned to a non-existent SHA (typo from PR #298), masked by upstream MkDocs build failures until PR #302 fixed them. Corrected to the actual v4.0.5 SHA. The preview cleanup silently swallowed Cloudflare API errors via curl -sf + || true, causing "No deployments found" even when deployments existed. Now surfaces HTTP errors, logs available branches on miss, and uses jq --arg for safer filtering.
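As a sketch of the failure mode described above (response values simulated, not the workflow's exact code): with `curl -sf … || true`, any HTTP error collapses into empty output, whereas capturing the status line separately lets the script distinguish an API error from a genuinely empty deployment list.

```shell
#!/usr/bin/env bash
# Sketch: why `curl -sf ... || true` hides errors, and the status-capture
# pattern that replaces it. We simulate curl's output instead of hitting
# the network, so the values here are illustrative only.

# Simulated "curl -s -w '\n%{http_code}'" output for a 403: body, then code.
RESPONSE=$'{"success":false,"errors":[{"message":"Invalid token"}]}\n403'

# Split body and status code (same tail/sed trick the workflow uses).
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')

if [ "$HTTP_CODE" -ge 400 ]; then
  # With -f + || true this branch would never run; the error would vanish.
  echo "error surfaced: HTTP ${HTTP_CODE}"
else
  echo "ok"
fi
```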
Dependency Review: ✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.
Scanned Files
📝 Walkthrough (Summary by CodeRabbit)

This PR updates GitHub Actions workflows with improved deployment cleanup logic. The pages-preview.yml cleanup stage is refactored to use explicit API calls, structured response parsing, and enhanced error handling for Cloudflare deployments. The pages.yml file updates the deploy-pages action version from v4 to v4.0.5.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Note: Gemini is unable to generate a summary for this pull request because the file types involved are not currently supported.
Greptile Summary

This PR fixes two independent issues: a typo in the `actions/deploy-pages` SHA pin and silently swallowed errors in the preview cleanup.

Key changes:
Confidence Score: 4/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant GH as GitHub Actions
    participant CF as Cloudflare API
    Note over GH: cleanup-preview job (PR closed)
    GH->>GH: Delete preview comment (continue-on-error)
    loop Paginate deployments (page=1..N, max 100/page)
        GH->>CF: GET /deployments?per_page=100&page=N
        alt curl transport failure (exit≠0 or HTTP 000)
            CF-->>GH: (network error)
            GH->>GH: ::error:: exit 1
        else HTTP 4xx/5xx
            CF-->>GH: Error response
            GH->>GH: ::error:: log API message, exit 1
        else .result missing
            CF-->>GH: Unexpected body
            GH->>GH: ::warning:: exit 0
        else OK
            CF-->>GH: Deployment list
            GH->>GH: Filter by branch (jq --arg), accumulate IDs
        end
        GH->>GH: page_count < 100 → break
    end
    alt No matching deployments
        GH->>GH: Log available branches (diagnostic), exit 0
    else Deployments found
        loop For each deployment ID
            GH->>CF: DELETE /deployments/{id}?force=true
            alt curl transport failure
                CF-->>GH: (network error)
                GH->>GH: ::warning:: skip
            else HTTP 4xx (e.g. Cloudflare restriction)
                CF-->>GH: Error response
                GH->>GH: ::warning:: skip
            else OK
                CF-->>GH: 200 OK
                GH->>GH: COUNT++
            end
        end
        GH->>GH: Log "Deleted N preview deployments"
    end
```

Last reviewed commit: 8373ac6
Pull request overview
Fixes the GitHub Pages deployment workflow pin and improves reliability/diagnostics of Cloudflare Pages preview cleanup on PR close.
Changes:
- Corrects the pinned `actions/deploy-pages` commit SHA in the GitHub Pages workflow.
- Refactors the preview cleanup script to log HTTP failures, centralize the Cloudflare API base URL, and use `jq --arg` for safer filtering.
- Adds more diagnostic logging when no deployments are found for a PR branch.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| .github/workflows/pages.yml | Fixes the actions/deploy-pages SHA pin to a valid commit for v4.0.5. |
| .github/workflows/pages-preview.yml | Updates Cloudflare preview cleanup logic to be more observable and safer in jq filtering. |
`.github/workflows/pages-preview.yml` (outdated)
```shell
RESPONSE=$(curl -s -w "\n%{http_code}" \
  "${API_BASE}?per_page=100" \
  -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')

if [ "$HTTP_CODE" -ge 400 ]; then
  echo "::warning::Failed to list deployments (HTTP ${HTTP_CODE})"
  echo "$BODY" | jq -r '.errors[]?.message // empty' 2>/dev/null || true
  exit 0
fi
```
curl failures (DNS/auth/network) aren’t handled here. If the curl command fails, HTTP_CODE can end up empty/"000", making the numeric -ge comparison unreliable and potentially falling through to the “No deployments found” path. Consider explicitly checking the curl exit status and validating HTTP_CODE is a number before proceeding, so API/transport errors don’t masquerade as an empty deployment list.
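A minimal sketch of the validation this comment asks for (variable values simulated, names hypothetical): check curl's exit status and confirm the captured code looks like a real HTTP status before any `-ge` comparison, since `%{http_code}` prints `000` on transport failures.

```shell
#!/usr/bin/env bash
# Sketch: validate an HTTP status captured via `curl -w '%{http_code}'`
# before comparing it numerically. No network call; values are simulated.

HTTP_CODE="000"   # what curl's %{http_code} yields when the transport fails
CURL_EXIT=6       # e.g. curl exit 6 = could not resolve host

# A real HTTP status is exactly three digits and never starts with 0.
if [ "$CURL_EXIT" -ne 0 ] || ! printf '%s' "$HTTP_CODE" | grep -Eq '^[1-9][0-9]{2}$'; then
  RESULT="transport-error"   # don't fall through to the empty-list path
elif [ "$HTTP_CODE" -ge 400 ]; then
  RESULT="api-error"
else
  RESULT="ok"
fi
echo "$RESULT"
```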
`.github/workflows/pages-preview.yml` (outdated)
```shell
if [ "$HTTP_CODE" -ge 400 ]; then
  echo "::warning::Failed to list deployments (HTTP ${HTTP_CODE})"
  echo "$BODY" | jq -r '.errors[]?.message // empty' 2>/dev/null || true
  exit 0
```
On list failures you emit a warning but exit 0, so the cleanup job still succeeds even when it couldn’t talk to Cloudflare. If the goal is to make cleanup reliability visible in CI (so stale previews don’t persist unnoticed), consider failing the step/job on HTTP >= 400 (or at least making it a hard error when the token/account/project is misconfigured).
```diff
-  exit 0
+  exit 1
```
```diff
 DEL_RESPONSE=$(curl -s -w "\n%{http_code}" -X DELETE \
   "${API_BASE}/${DEPLOY_ID}?force=true" \
   -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
-HTTP_CODE=$(echo "$RESPONSE" | tail -1)
-if [ "$HTTP_CODE" -ge 400 ]; then
-  echo "::warning::Failed to delete deployment ${DEPLOY_ID} (HTTP ${HTTP_CODE}) — may be the latest deployment (Cloudflare restriction)"
+DEL_CODE=$(echo "$DEL_RESPONSE" | tail -1)
+if [ "$DEL_CODE" -ge 400 ]; then
+  echo "::warning::Failed to delete deployment ${DEPLOY_ID} (HTTP ${DEL_CODE}) — may be the most recent deployment (Cloudflare restriction)"
 else
   COUNT=$((COUNT + 1))
 fi
```
Delete requests also ignore curl exit status. If curl fails, DEL_CODE may be empty/"000", the -ge test becomes unreliable, and the script can incorrectly increment COUNT (reporting deletions that didn’t happen). Check the curl return code and ensure DEL_CODE is a valid HTTP status before treating the deletion as successful.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/pages-preview.yml:
- Around line 260-302: The listing only fetches a single page (per_page=100) so
older deployments on subsequent pages are missed; change the logic around
API_BASE/RESPONSE/DEPLOYMENTS to loop over pages (e.g. add a page counter/query
param) fetching "${API_BASE}?per_page=100&page=$PAGE" until an empty result set
or no more pages, concatenating .result[] ids into DEPLOYMENTS (or accumulate
into a DEPLOYMENTS variable/array), preserve the existing HTTP_CODE/ERROR
handling for each page, and then proceed to delete all accumulated DEPLOYMENTS
and report COUNT as before.
ℹ️ Review info

⚙️ Run configuration
- Configuration used: Organization UI
- Review profile: ASSERTIVE
- Plan: Pro
- Run ID: 19b0497f-f4ad-46c3-a66f-5c889501e01a

📒 Files selected for processing (2)
- .github/workflows/pages-preview.yml
- .github/workflows/pages.yml

📜 Review details

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Agent
- GitHub Check: Greptile Review
🔇 Additional comments (1)
.github/workflows/pages.yml (1)
93-93: Correct pin update. This keeps the workflow on a full commit SHA while moving `actions/deploy-pages` to the `v4.0.5` release commit. (github.com)
```diff
+API_BASE="https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/pages/projects/synthorg-pr-preview/deployments"
+
 # List deployments for this branch (per_page=100 to cover busy PRs)
-# Note: Cloudflare prevents deleting the latest deployment per branch — that one will remain
-DEPLOYMENTS=$(curl -sf \
-  "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/pages/projects/synthorg-pr-preview/deployments?per_page=100" \
-  -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" \
-  | jq -r ".result[] | select(.deployment_trigger.metadata.branch == \"${BRANCH}\") | .id") || true
+# Note: Cloudflare prevents deleting the most recent deployment per project — that one will remain
+echo "Fetching deployments for branch ${BRANCH}..."
+RESPONSE=$(curl -s -w "\n%{http_code}" \
+  "${API_BASE}?per_page=100" \
+  -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
+HTTP_CODE=$(echo "$RESPONSE" | tail -1)
+BODY=$(echo "$RESPONSE" | sed '$d')
+
+if [ "$HTTP_CODE" -ge 400 ]; then
+  echo "::warning::Failed to list deployments (HTTP ${HTTP_CODE})"
+  echo "$BODY" | jq -r '.errors[]?.message // empty' 2>/dev/null || true
+  exit 0
+fi
+
+# Wrangler direct-upload deployments store branch in deployment_trigger.metadata.branch
+DEPLOYMENTS=$(echo "$BODY" | jq -r \
+  --arg branch "$BRANCH" \
+  '.result[] | select(.deployment_trigger.metadata.branch == $branch) | .id' 2>/dev/null) || true

 if [ -z "$DEPLOYMENTS" ]; then
   echo "No deployments found for branch ${BRANCH}"
+  echo "Available branches:"
+  echo "$BODY" | jq -r '[.result[].deployment_trigger.metadata.branch // "null"] | unique[]' 2>/dev/null || true
   exit 0
 fi

 COUNT=0
 for DEPLOY_ID in $DEPLOYMENTS; do
   echo "Deleting deployment ${DEPLOY_ID}..."
-  RESPONSE=$(curl -s -w "\n%{http_code}" -X DELETE \
-    "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/pages/projects/synthorg-pr-preview/deployments/${DEPLOY_ID}?force=true" \
+  DEL_RESPONSE=$(curl -s -w "\n%{http_code}" -X DELETE \
+    "${API_BASE}/${DEPLOY_ID}?force=true" \
     -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
-  HTTP_CODE=$(echo "$RESPONSE" | tail -1)
-  if [ "$HTTP_CODE" -ge 400 ]; then
-    echo "::warning::Failed to delete deployment ${DEPLOY_ID} (HTTP ${HTTP_CODE}) — may be the latest deployment (Cloudflare restriction)"
+  DEL_CODE=$(echo "$DEL_RESPONSE" | tail -1)
+  if [ "$DEL_CODE" -ge 400 ]; then
+    echo "::warning::Failed to delete deployment ${DEPLOY_ID} (HTTP ${DEL_CODE}) — may be the most recent deployment (Cloudflare restriction)"
   else
     COUNT=$((COUNT + 1))
   fi
 done
-echo "Cleaned up preview deployments for ${BRANCH}"
+echo "Deleted ${COUNT} preview deployments for ${BRANCH}"
```
🧩 Analysis chain
🌐 Web query:
Cloudflare Pages deployments list API pagination per_page
💡 Result:
Cloudflare Pages “List deployments” supports page-based pagination using the query params:
- `page` — which page to fetch
- `per_page` — how many deployments to return per page
- (also) `env` — filter by `"production"` or `"preview"` deployments

Example:

```shell
curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/pages/projects/$PROJECT_NAME/deployments?page=1&per_page=50&env=preview" \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

The response includes result_info with count, page, per_page, and total_count (the docs' example shows a default per_page of 20 when you don't set it). [1]
Endpoint reference (Pages REST API): /client/v4/accounts/{account_id}/pages/projects/{project_name}/deployments. [2]
Note: This endpoint’s docs show total_count but not total_pages; if you need it, compute ceil(total_count / per_page). [1]
Sources:
[1] Cloudflare API – Pages → Projects → Deployments → “Get Deployments” (list)
[2] Cloudflare Pages docs – REST API (Pages API overview + deployments endpoint example)
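Since the endpoint reports `total_count` but not `total_pages`, the page count can be derived with integer ceiling division. A sketch (the count and page size here are hypothetical values standing in for `result_info` fields):

```shell
#!/usr/bin/env bash
# Sketch: derive the number of pages from result_info.total_count, since the
# deployments endpoint reports total_count but not total_pages.

TOTAL_COUNT=137   # hypothetical value read from .result_info.total_count
PER_PAGE=25       # Cloudflare's documented default page size

# Integer ceiling division without floating point: ceil(137 / 25) = 6
TOTAL_PAGES=$(( (TOTAL_COUNT + PER_PAGE - 1) / PER_PAGE ))
echo "$TOTAL_PAGES"
```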
Paginate the deployment listing before declaring cleanup complete.
The deployments endpoint is paginated. Fetching only ?per_page=100 inspects page 1 only, so older pr-* deployments on subsequent pages are missed and remain live while this step reports that nothing matched.
🔁 Minimal pagination sketch
```diff
- RESPONSE=$(curl -s -w "\n%{http_code}" \
-   "${API_BASE}?per_page=100" \
-   -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
- HTTP_CODE=$(echo "$RESPONSE" | tail -1)
- BODY=$(echo "$RESPONSE" | sed '$d')
-
- if [ "$HTTP_CODE" -ge 400 ]; then
-   echo "::warning::Failed to list deployments (HTTP ${HTTP_CODE})"
-   echo "$BODY" | jq -r '.errors[]?.message // empty' 2>/dev/null || true
-   exit 0
- fi
-
- # Wrangler direct-upload deployments store branch in deployment_trigger.metadata.branch
- DEPLOYMENTS=$(echo "$BODY" | jq -r \
-   --arg branch "$BRANCH" \
-   '.result[] | select(.deployment_trigger.metadata.branch == $branch) | .id' 2>/dev/null) || true
+ PAGE=1
+ DEPLOYMENTS=""
+ while :; do
+   RESPONSE=$(curl -s -w "\n%{http_code}" \
+     "${API_BASE}?per_page=100&page=${PAGE}" \
+     -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
+   HTTP_CODE=$(echo "$RESPONSE" | tail -1)
+   BODY=$(echo "$RESPONSE" | sed '$d')
+
+   if [ "$HTTP_CODE" -ge 400 ]; then
+     echo "::warning::Failed to list deployments (HTTP ${HTTP_CODE})"
+     echo "$BODY" | jq -r '.errors[]?.message // empty' 2>/dev/null || true
+     exit 0
+   fi
+
+   PAGE_IDS=$(echo "$BODY" | jq -r \
+     --arg branch "$BRANCH" \
+     '.result[] | select(.deployment_trigger.metadata.branch == $branch) | .id' 2>/dev/null) || true
+   if [ -n "$PAGE_IDS" ]; then
+     DEPLOYMENTS="${DEPLOYMENTS}${DEPLOYMENTS:+$'\n'}${PAGE_IDS}"
+   fi
+
+   PAGE_COUNT=$(echo "$BODY" | jq -r '.result | length' 2>/dev/null || echo 0)
+   [ "$PAGE_COUNT" -lt 100 ] && break
+   PAGE=$((PAGE + 1))
+ done
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```shell
API_BASE="https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/pages/projects/synthorg-pr-preview/deployments"

# List deployments for this branch (per_page=100 to cover busy PRs)
# Note: Cloudflare prevents deleting the most recent deployment per project — that one will remain
echo "Fetching deployments for branch ${BRANCH}..."
PAGE=1
DEPLOYMENTS=""
while :; do
  RESPONSE=$(curl -s -w "\n%{http_code}" \
    "${API_BASE}?per_page=100&page=${PAGE}" \
    -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
  HTTP_CODE=$(echo "$RESPONSE" | tail -1)
  BODY=$(echo "$RESPONSE" | sed '$d')

  if [ "$HTTP_CODE" -ge 400 ]; then
    echo "::warning::Failed to list deployments (HTTP ${HTTP_CODE})"
    echo "$BODY" | jq -r '.errors[]?.message // empty' 2>/dev/null || true
    exit 0
  fi

  PAGE_IDS=$(echo "$BODY" | jq -r \
    --arg branch "$BRANCH" \
    '.result[] | select(.deployment_trigger.metadata.branch == $branch) | .id' 2>/dev/null) || true
  if [ -n "$PAGE_IDS" ]; then
    DEPLOYMENTS="${DEPLOYMENTS}${DEPLOYMENTS:+$'\n'}${PAGE_IDS}"
  fi

  PAGE_COUNT=$(echo "$BODY" | jq -r '.result | length' 2>/dev/null || echo 0)
  [ "$PAGE_COUNT" -lt 100 ] && break
  PAGE=$((PAGE + 1))
done

if [ -z "$DEPLOYMENTS" ]; then
  echo "No deployments found for branch ${BRANCH}"
  echo "Available branches:"
  echo "$BODY" | jq -r '[.result[].deployment_trigger.metadata.branch // "null"] | unique[]' 2>/dev/null || true
  exit 0
fi

COUNT=0
for DEPLOY_ID in $DEPLOYMENTS; do
  echo "Deleting deployment ${DEPLOY_ID}..."
  DEL_RESPONSE=$(curl -s -w "\n%{http_code}" -X DELETE \
    "${API_BASE}/${DEPLOY_ID}?force=true" \
    -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
  DEL_CODE=$(echo "$DEL_RESPONSE" | tail -1)
  if [ "$DEL_CODE" -ge 400 ]; then
    echo "::warning::Failed to delete deployment ${DEPLOY_ID} (HTTP ${DEL_CODE}) — may be the most recent deployment (Cloudflare restriction)"
  else
    COUNT=$((COUNT + 1))
  fi
done
echo "Deleted ${COUNT} preview deployments for ${BRANCH}"
```
…ror isolation

Address 8 review findings from local agents, Copilot, Greptile, and CodeRabbit:
- Add curl transport failure detection (HTTP 000 / non-zero exit) on list and delete
- Paginate deployment listing to catch deployments beyond first 100
- Validate .result field exists before jq filtering (detect API schema changes)
- Use exit 1 on list failures (misconfigured token/account warrants attention)
- Log raw body on jq parse failure for non-JSON error responses
- Remove stdout suppression from wrangler install (npm warnings now visible)
- Add continue-on-error on comment deletion so deploy cleanup still runs
```diff
 for DEPLOY_ID in $DEPLOYMENTS; do
   echo "Deleting deployment ${DEPLOY_ID}..."
-  RESPONSE=$(curl -s -w "\n%{http_code}" -X DELETE \
-    "https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/pages/projects/synthorg-pr-preview/deployments/${DEPLOY_ID}?force=true" \
-    -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}")
-  HTTP_CODE=$(echo "$RESPONSE" | tail -1)
-  if [ "$HTTP_CODE" -ge 400 ]; then
-    echo "::warning::Failed to delete deployment ${DEPLOY_ID} (HTTP ${HTTP_CODE}) — may be the latest deployment (Cloudflare restriction)"
+  DEL_CURL_EXIT=0
+  DEL_RESPONSE=$(curl -s -w "\n%{http_code}" -X DELETE \
+    "${API_BASE}/${DEPLOY_ID}?force=true" \
+    -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}") || DEL_CURL_EXIT=$?
+  DEL_CODE=$(echo "$DEL_RESPONSE" | tail -1)
+
+  if [ "$DEL_CURL_EXIT" -ne 0 ] || [ "$DEL_CODE" = "000" ]; then
+    echo "::warning::Failed to reach Cloudflare API for deletion of ${DEPLOY_ID} (curl transport error)"
+  elif [ "$DEL_CODE" -ge 400 ]; then
+    echo "::warning::Failed to delete deployment ${DEPLOY_ID} (HTTP ${DEL_CODE}) — may be the most recent deployment (Cloudflare restriction)"
   else
     COUNT=$((COUNT + 1))
   fi
```
Deletion error body not extracted or logged
On a failed deletion (e.g. HTTP 409, 403), DEL_RESPONSE already holds the full response body, but it's never split or printed. This means the Cloudflare error message (e.g. "cannot delete the most recent deployment for a project") is silently discarded, leaving only the numeric HTTP code in the warning. By contrast, the fetch path properly extracts and logs the error body.
```shell
for DEPLOY_ID in $DEPLOYMENTS; do
  echo "Deleting deployment ${DEPLOY_ID}..."
  DEL_CURL_EXIT=0
  DEL_RESPONSE=$(curl -s -w "\n%{http_code}" -X DELETE \
    "${API_BASE}/${DEPLOY_ID}?force=true" \
    -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}") || DEL_CURL_EXIT=$?
  DEL_CODE=$(echo "$DEL_RESPONSE" | tail -1)
  DEL_BODY=$(echo "$DEL_RESPONSE" | sed '$d')
  if [ "$DEL_CURL_EXIT" -ne 0 ] || [ "$DEL_CODE" = "000" ]; then
    echo "::warning::Failed to reach Cloudflare API for deletion of ${DEPLOY_ID} (curl transport error)"
  elif [ "$DEL_CODE" -ge 400 ]; then
    echo "::warning::Failed to delete deployment ${DEPLOY_ID} (HTTP ${DEL_CODE}) — may be the most recent deployment (Cloudflare restriction)"
    echo "$DEL_BODY" | jq -r '.errors[]?.message // empty' 2>/dev/null || echo "$DEL_BODY" | head -c 500
  else
    COUNT=$((COUNT + 1))
  fi
done
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: .github/workflows/pages-preview.yml
Line: 317-331

```shell
while :; do
  CURL_EXIT=0
  RESPONSE=$(curl -s -w "\n%{http_code}" \
    "${API_BASE}?per_page=100&page=${PAGE}" \
    -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}") || CURL_EXIT=$?
  HTTP_CODE=$(echo "$RESPONSE" | tail -1)
  BODY=$(echo "$RESPONSE" | sed '$d')

  # Fix: detect curl transport failures (DNS, timeout, connection refused → HTTP code 000)
  if [ "$CURL_EXIT" -ne 0 ] || [ "$HTTP_CODE" = "000" ]; then
    echo "::error::Failed to reach Cloudflare API (curl exit code ${CURL_EXIT}, HTTP ${HTTP_CODE})"
    exit 1
  fi

  # exit 1: if we can't list deployments, the token/account is likely misconfigured — that warrants attention
  if [ "$HTTP_CODE" -ge 400 ]; then
    echo "::error::Failed to list deployments (HTTP ${HTTP_CODE})"
    echo "$BODY" | jq -r '.errors[]?.message // empty' 2>/dev/null || echo "$BODY" | head -c 500
    exit 1
  fi

  # Validate response structure before filtering
  if ! echo "$BODY" | jq -e '.result' > /dev/null 2>&1; then
    echo "::warning::Unexpected API response structure — .result field missing"
    echo "$BODY" | head -c 500
    exit 0
  fi

  # Wrangler direct-upload deployments store branch in deployment_trigger.metadata.branch
  PAGE_IDS=$(echo "$BODY" | jq -r \
    --arg branch "$BRANCH" \
    '.result[] | select(.deployment_trigger.metadata.branch == $branch) | .id')
  if [ -n "$PAGE_IDS" ]; then
    DEPLOYMENTS="${DEPLOYMENTS}${DEPLOYMENTS:+$'\n'}${PAGE_IDS}"
  fi

  PAGE_COUNT=$(echo "$BODY" | jq -r '.result | length')
  [ "$PAGE_COUNT" -lt 100 ] && break
  PAGE=$((PAGE + 1))
done
```
No upper bound on pagination loop
The while : loop has no maximum iteration guard. If the Cloudflare API were to unexpectedly return exactly 100 results on every page (e.g., due to an API bug or a project with thousands of deployments), the loop would run indefinitely and consume the job's entire timeout-minutes: 5 budget before being killed. A simple safety ceiling (e.g. 20 pages = 2 000 deployments) would prevent runaway execution:
```shell
PAGE=1
DEPLOYMENTS=""
MAX_PAGES=20
while [ "$PAGE" -le "$MAX_PAGES" ]; do
```
Then update the break condition comment to note the ceiling. This is particularly relevant because cleanup jobs are inherently lower priority and a hung loop silently burns CI minutes.
Prompt To Fix With AI
This is a comment left during a code review.
Path: .github/workflows/pages-preview.yml
Line: 268-307

```shell
if [ -z "$DEPLOYMENTS" ]; then
  echo "No deployments found for branch ${BRANCH}"
  echo "Available branches:"
  echo "$BODY" | jq -r '[.result[].deployment_trigger.metadata.branch // "null"] | unique[]' 2>/dev/null || true
  exit 0
```
Available-branches diagnostic shows only the last fetched page
When no matching deployments are found after iterating multiple pages, $BODY holds only the last page of results. If the target branch happened to appear on an earlier page (e.g. offset by many non-matching deployments), the diagnostic output would miss it entirely and display branches from a completely unrelated page. For a single-page result set this is fine, but it can silently mislead debugging in a paginated scenario.
Consider collecting unique branch names across pages during the loop, e.g.:
```shell
ALL_BRANCHES="${ALL_BRANCHES} $(echo "$BODY" | jq -r '.result[].deployment_trigger.metadata.branch // "null"')"
```

and then printing `echo "$ALL_BRANCHES" | tr ' ' '\n' | sort -u` in the diagnostic block.
Prompt To Fix With AI
This is a comment left during a code review.
Path: .github/workflows/pages-preview.yml
Line: 309-313

## Summary
- Cloudflare Pages deployments API rejects `per_page=100` when combined with the `page` parameter (HTTP 400: "Invalid list options provided")
- Changes pagination to use `per_page=25` (Cloudflare's documented default) via a `PER_PAGE` variable
- Updates the pagination break check to use the variable instead of hardcoded `100`

**Context:** The pagination was added in #304 to handle PRs with >100 deployments. The cleanup job failed on first run because `per_page=100&page=1` is rejected by the Cloudflare API. This was correctly surfaced as `exit 1` (previously it would have been silently swallowed).

## Test plan
- [ ] Cleanup Preview job succeeds on PR close (no HTTP 400 from Cloudflare)
- [ ] Pagination still works correctly for PRs with >25 deployments
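The `PER_PAGE` change described in that summary can be sketched as follows (the loop body is elided and the page size simulated; variable names follow the summary, not the exact workflow code):

```shell
#!/usr/bin/env bash
# Sketch: use a PER_PAGE variable at Cloudflare's documented default and
# reference it in the pagination break check, instead of hardcoding 100.
# The API response is simulated; no network call is made.

PER_PAGE=25
PAGE=1
FETCHED=18   # simulated: items returned on this page (.result | length)

# A short page (fewer than PER_PAGE items) means there are no further pages.
if [ "$FETCHED" -lt "$PER_PAGE" ]; then
  echo "last page reached at page ${PAGE}"
else
  PAGE=$((PAGE + 1))
  echo "continue to page ${PAGE}"
fi
```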
🤖 I have created a release *beep* *boop*

## [0.1.1](v0.1.0...v0.1.1) (2026-03-11)

### Features
* add PR preview deployments via Cloudflare Pages ([#302](#302)) ([b73c45a](b73c45a))

### Bug Fixes
* correct deploy-pages SHA and improve preview cleanup reliability ([#304](#304)) ([584d64a](584d64a))
* harden API key hashing with HMAC-SHA256 and clean up legacy changelog ([#292](#292)) ([5e85353](5e85353))
* upgrade upload-pages-artifact to v4 and add zizmor workflow linting ([#299](#299)) ([2eac571](2eac571))
* use Cloudflare Pages API default per_page for pagination ([#305](#305)) ([9fec245](9fec245))

### Documentation
* remove milestone references and rebrand to SynthOrg ([#289](#289)) ([57a03e0](57a03e0))
* set up documentation site, release CI, and sandbox hardening ([#298](#298)) ([0dec9da](0dec9da))
* split DESIGN_SPEC.md into 7 focused design pages ([#308](#308)) ([9ea0788](9ea0788))

---

This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).

Signed-off-by: Aurelio <19254254+Aureliolo@users.noreply.github.com>
Summary
- The `actions/deploy-pages` action in `pages.yml` was pinned to SHA `...53fd0d31`, which doesn't exist. Corrected to `...b3c0c03e` (v4.0.5). This typo was introduced in PR #298 (docs: set up documentation site, release CI, and sandbox hardening) but was masked by MkDocs build failures — it only surfaced after PR #302 (feat: add PR preview deployments via Cloudflare Pages) fixed the build.
- The cleanup in `pages-preview.yml` silently swallowed API errors (`curl -sf` + `|| true`), causing PR #302's cleanup to report "No deployments found" while the deployment remained live. Now it properly surfaces HTTP errors, logs available branches on a miss for diagnostics, and uses `jq --arg` for safer filtering.

Test plan