Conversation
```yaml
group: axe-core Tests ${{ github.ref_name }}
cancel-in-progress: ${{ github.event_name != 'push' }}
```
```yaml
permissions: {}
```
Nifty trick for disabling all permissions in one sweep. So we refine permissions needed (if any) at the job level.
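As a sketch of the pattern (the job names and the granted scope here are illustrative, not from this PR): an empty top-level `permissions` map strips every scope from the workflow's `GITHUB_TOKEN`, and individual jobs then grant back only what they need.

```yaml
# Hypothetical sketch: disable all default token permissions at the
# workflow level, then re-grant per job.
permissions: {}

jobs:
  test:
    runs-on: ubuntu-latest
    # No permissions block: this job's GITHUB_TOKEN has no scopes.
    steps:
      - uses: actions/checkout@v5

  comment:
    runs-on: ubuntu-latest
    permissions:
      # Grant back only the single scope this job needs.
      pull-requests: write
    steps:
      - run: echo "would post a PR comment here"
```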
```diff
  if (process.env.FIREFOX_BIN) {
-   options.setBinaryPath(process.env.FIREFOX_BIN);
+   options.setBinary(process.env.FIREFOX_BIN);
```
I got confused somewhere and had the wrong method in here. It just wasn't hit in the nightly testing where I was setting this env var up, since that used a different variable.
Why doesn't WebDriver have uniform method naming for the same action between browsers? No clue. But we also can't use `// @ts-check` in this file, since so many other things are broken. Fixing that will need a bigger effort on its own to help avoid these issues.
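One way to avoid mixing the two methods up again would be a tiny shim that picks the right selenium-webdriver method name per browser. This is a hypothetical sketch (the helper name and the plain-object stand-ins for the real `Options` classes are mine, not from this PR); it only illustrates the naming mismatch between Chrome's `setBinaryPath()` and Firefox's `setBinary()`.

```javascript
// Hypothetical helper: apply a browser binary path using whichever
// method name that browser's selenium-webdriver Options class exposes.
// Chrome's Options uses setBinaryPath(); Firefox's uses setBinary().
function applyBinaryPath(browser, options, binaryPath) {
  if (!binaryPath) {
    return options; // env var not set; leave defaults alone
  }
  if (browser === 'chrome') {
    options.setBinaryPath(binaryPath);
  } else if (browser === 'firefox') {
    options.setBinary(binaryPath);
  } else {
    throw new Error(`No binary-path method known for ${browser}`);
  }
  return options;
}

// Stand-ins for the real Options classes, just to exercise the shim.
const chromeStub = {
  binary: null,
  setBinaryPath(p) {
    this.binary = p;
  }
};
const firefoxStub = {
  binary: null,
  setBinary(p) {
    this.binary = p;
  }
};

applyBinaryPath('chrome', chromeStub, '/usr/bin/chromium');
applyBinaryPath('firefox', firefoxStub, '/usr/bin/firefox-beta');
```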
Pull Request Overview
This PR updates the CI workflow configuration and fixes the Firefox WebDriver binary path configuration method. The workflow has been restructured to separate concerns into individual jobs with improved security practices and updated action versions.
- Changed Firefox WebDriver configuration from `setBinaryPath` to `setBinary`
- Restructured GitHub Actions workflow into separate jobs for linting, formatting, building, and testing
- Added workflow concurrency control and the `always()` condition to nightly tests
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| test/integration/full/test-webdriver.js | Updates Firefox options to use setBinary() method instead of setBinaryPath() |
| .github/workflows/test.yml | Restructures CI workflow into separate jobs with pinned action versions, YAML anchors, and Node 24 support |
| .github/workflows/nightly-tests.yml | Adds always() condition to Chrome Beta tests to ensure they run even if previous tests fail |
Comments suppressed due to low confidence (1)
test/integration/full/test-webdriver.js:137
- Inconsistent API usage between Chrome and Firefox options. Chrome uses `setBinaryPath()` while Firefox uses `setBinary()`. For consistency and correctness, Chrome should also use `setBinary()` if both selenium-webdriver Options classes support this method, or there should be a comment explaining why different methods are used.

```js
options.setBinaryPath(process.env.CHROME_BIN);
```
All LGTM, but I would like @dbjorge or @WilcoFiers approval so we don't miss anything.
I did notice we didn't port over the `test_rule_help_version` step. That should be fine, as it should only run on a PR to master, but we were also running it on develop. I don't think we need that, since a PR to develop will never increase the version number before it goes to master.
```yaml
- uses: actions/checkout@v5
- uses: actions/setup-node@v6
- *checkout
- &install-deps-with-xvfb
```
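For reference, the anchor/alias pattern the diff is using works like this (the step bodies here are simplified placeholders, not the repo's actual steps): `&name` defines a reusable YAML node at its first use, and `*name` pastes it back in later.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # First use defines the anchor...
      - &checkout
        uses: actions/checkout@v5
      - run: npm run build   # placeholder command

  test_chrome:
    runs-on: ubuntu-latest
    steps:
      # ...later jobs paste the same step back in via the alias.
      - *checkout
      - &install-deps-with-xvfb
        name: Install dependencies (under xvfb)
        run: xvfb-run -a npm ci   # placeholder command
      - run: npm run test:chrome  # placeholder command
```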
So if I understand everything correctly, referencing `&install-deps-with-xvfb` will reinstall the node modules for every job and not cache it, whereas in Circle we found a way to cache the build and the node modules so it only happened once for the entire run. This isn't so much a problem for now; as you said, for the build we can worry about optimizing after. Looking at the runs, each of the tests takes ~2mins longer than its CircleCI counterpart, but since it all runs in parallel it doesn't add too much time in the long run.
That feels like a step back. I don't care if we do that in this PR or a separate one but I don't think we should call this done until we've sorted this.
So, caching of node modules is handled by `actions/setup-node`. It caches the package manager's local download cache, not the installed modules in the repo. The reason being, just restoring `node_modules` from a cache may miss running install scripts that make things work. (This is probably why some of the tests broke after the move.)
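Concretely, that caching is enabled via the `cache` input on `actions/setup-node` (the versions shown are illustrative). The action restores npm's download cache keyed on the lockfile, while `npm ci` still runs in full, so install scripts execute every time:

```yaml
steps:
  - uses: actions/checkout@v5
  - uses: actions/setup-node@v6
    with:
      node-version: 24
      cache: npm   # caches npm's download cache, not node_modules
  - run: npm ci    # full install still runs, so install scripts execute
```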
The build is about 30 seconds of effort when we do need to re-run it; not enough time to run up the bill much.
Ideally, we'd ultimately end up building docker images we run against that have all the OS deps pre-installed (aside from nightly stuff which of course we need to install at runtime.) This would reduce the bulk of that extra ~2m time. Since most of it is spent downloading and installing the same OS packages on every runner.
No one that I'm aware of in the org has done Docker Images for CI yet in GHA. This "install deps" shared action is how every other project has done this that I've seen. So it's a trade-off the org has accepted for the initial conversion of these projects to GHA. If we want to prioritize setting up the infrastructure for making docker images and refreshing them regularly for CI to run against, I'm all for it. But that's a bigger task that will need to be planned out with PO buy-in so it has accounted time to work on it and get it done.
Garbee is correct, `actions/setup-node` provides caching that is more robust than what we were doing in CircleCI, and I agree that we should use that rather than caching the whole `node_modules` folder. I've had to debug issues caused by over-caching of whole `node_modules` folders before; it's a rough debugging experience, and I don't think caching `node_modules` instead is worth 20s of wall time.
This PR is caching the build between most jobs already (that's what `&restore-axe-build` does). But I agree with Garbee's choice to not use the cached build in the long-pole jobs; because build is faster than `install_deps` (even cached), the total wall-time of "wait for build job -> install_deps_with_xvfb -> download build" is less than the wall-time of "start immediately -> install_deps_with_xvfb -> redundant build". Unfortunately, GitHub Actions' `needs` dependencies only operate at the job level, not the step level; I'm not aware of a great way to represent "let the `test_chrome` job start doing its dep installation in parallel with the build job, and don't block on the build job until you actually get to the 'download artifact' step", which is the behavior you'd really want here.
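The job-level `needs` + artifact handoff being described might look roughly like this sketch (job, artifact, and file names are illustrative); note the whole `test_headless` job waits on `build`, even though only its last step consumes the artifact:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - run: npm ci && npm run build   # placeholder commands
      - uses: actions/upload-artifact@v4
        with:
          name: axe-build
          path: axe.js

  test_headless:
    # Blocks until build finishes, even though only the final
    # step actually needs the build output.
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - run: npm ci
      - uses: actions/download-artifact@v4
        with:
          name: axe-build
```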
I agree with Garbee that the part of dep installation that is actually most expensive in the long-pole `test_chrome` and `test_firefox` jobs is the xvfb/browser installation, and that the way you'd want to optimize that is to run builds against docker images that already have those pre-installed. We have an issue at https://github.com/dequelabs/on-prem-services/issues/8 tracking that we'd like to implement a centralized setup for automatically-refreshed common docker images for different teams to use, but haven't found time to implement it yet. I've updated that ticket to note that we'd like an image for this use case.
Playwright publishes docker images that come with xvfb and browser deps pre-installed that we might consider using here, even without otherwise using Playwright (docs). But I think we should leave experimenting with that for a separate PR/issue; the jobs this PR sets up are already faster than Circle as-is (the total runtime for GHA here is 5m59s, vs 6m49s for Circle).
dbjorge
left a comment
Looks good to me overall; my only two suggested changes are both minor.
… previous pr runs
Co-authored-by: Dan Bjorge <dan.bjorge@deque.com>
Both changes were verified and applied. Thank you for catching those.
The code changes look good to me, but we should see if we can root-cause/fix the flaky test.
On Firefox being flaky: this was observed during development of converting the tests. Part of me thought it might have been more pre-ready changes conflicting at times. But now that the last few commits have settled and we still see this, it's definitely a flaky test. There are two main things I know of which can be in play. First, the pure resource difference between the runners at CircleCI and GHA's default runners; maybe bumping the runner resources up one notch, so it has slightly more CPU, would fix it without any effort in the code. Second, because CircleCI was caching …

Short term, we have the OIDC deployment that needs to get done by the 19th, 7 days from now. I think we need to try to get this merged, but not disable CircleCI yet. Once deployment is functional, we can circle back to triaging why the runtime difference exists and nail down the solution once the true root cause between the two platforms is known. Trying to do that before the Nov 19th deadline puts the OIDC conversion at risk, unless we move other commitments just to work on triaging this situation in time.
(#4938)

In #4919, we found that the `test_firefox` job was unacceptably flaky in the `preload-cssom` tests. I believe this is because the GHA runners are slightly more resource-constrained than the CircleCI ones, which broke an assumption in `axe.testUtils.addStyleSheet` that adding a `<style>` element would result in `document.styleSheets` being updated accordingly within 100ms (in Chrome this happens instantly, but in Firefox there is a delay).

I updated the test util to poll until it actually *sees* `document.styleSheets` update as expected, and to reject with a more actionable error if a timeout occurs (rather than letting a test run in an unexpected state). This also has a minor side-benefit of making the tests marginally faster on Chrome, since they no longer wait 100ms-per-injected-sheet unnecessarily.

You can see the results of this in #4936 - the preload-cssom test_firefox case [only passed 6/10 runs without this fix](https://github.com/dequelabs/axe-core/pull/4936/checks?sha=1516bacef0b7e039030adbd4d55c1a627f9899fb) and [passed 10/10 runs with this fix](https://github.com/dequelabs/axe-core/pull/4936/checks?sha=339c2580bb82f2f7833b8e2447bfeff92354350e).

The updates in `preload-cssom.js` turned out to be unnecessary to fix the issue, but they make errors in the test more actionable and make the test more consistent with others, so I left them in.

No QA required.
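The fix described above boils down to a poll-until-visible helper. A generic sketch of that pattern (the function name, defaults, and timings are mine, not the actual `axe.testUtils` code):

```javascript
// Hypothetical sketch: poll until a condition observes the expected
// state, instead of assuming it holds after a fixed delay. Rejects
// with an actionable error on timeout rather than letting the caller
// continue in an unexpected state.
function pollUntil(condition, { timeoutMs = 2000, intervalMs = 10, label = 'condition' } = {}) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    const tick = () => {
      if (condition()) {
        return resolve();
      }
      if (Date.now() >= deadline) {
        return reject(
          new Error(`Timed out after ${timeoutMs}ms waiting for ${label}`)
        );
      }
      setTimeout(tick, intervalMs);
    };
    tick();
  });
}

// Example: resolves as soon as the flag flips, instead of always
// sleeping a fixed 100ms and hoping the state has settled.
let sheetVisible = false;
setTimeout(() => {
  sheetVisible = true;
}, 50);
const ready = pollUntil(() => sheetVisible, { label: 'stylesheet to appear' });
```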
scottmries
left a comment
Looks ok to me, given the above discussion.
The last push updated the concurrency model again, taking it back to the behavior we originally wanted. Pull requests are the same, cancelling the previous run if it is still on the stack, while push events run sequentially.

The reason to run sequentially is to avoid the condition where 2 (or more) commits land in short succession. Merge 1 gets a box having network issues, so it is slower but still finishes, while Merge 2 gets a box with no network issues and finishes before Merge 1. Once the deploy is in, with concurrent operation, Merge 2 would end up behind Merge 1 in the `@next` tag, leaving the newest published version pointing at the older commit.

Sequential runs prevent that, since one run has to finish before the next. It is NOT bulletproof in ensuring `@next` always tracks the newest commit: as the manual re-run case in the diagram shows, re-running an older failed run can still publish an older commit last.

### Visual Representation

A diagram of the explained flows to highlight where potential issues can occur.

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryTextColor':'#000000', 'tertiaryTextColor':'#000000'}}}%%
graph LR
  subgraph "Git History"
    direction LR
    M0["(earlier commits)"]
    M1["Merge 1<br/>commit: abc123"]
    M2["Merge 2<br/>commit: def456"]
    M0 --> M1 --> M2
  end
  subgraph "Concurrent Deploys ❌"
    direction TB
    Q1["Queue: Merge 1, Merge 2"]
    R1["Box A: Slow ⏳"]
    R2["Box B: Fast ⚡"]
    E1["Merge 1 finish: 5s later"]
    E2["Merge 2 finish: 1s later"]
    D2["Deploy Merge 2 FIRST<br/>@next → def456"]
    D1["Deploy Merge 1 SECOND<br/>@next → abc123"]
    Q1 -.->|splits| R1
    Q1 -.->|splits| R2
    R1 -.-> E1
    R2 -.-> E2
    E2 -->|faster| D2
    E1 -->|slower| D1
    style D1 fill:#FFB6C6,color:#000000
    style D2 fill:#FFB6C6,color:#000000
    style M1 fill:#ADD8E6,color:#000000
    style M2 fill:#90EE90,color:#000000
  end
  subgraph "Sequential Deploys ✓"
    direction TB
    Q3["Queue: [Merge 1→Merge 2]"]
    P1["Deploy Merge 1<br/>@next → abc123"]
    P2["Deploy Merge 2<br/>@next → def456"]
    Q3 --> P1
    P1 -->|releases lock| P2
    style P1 fill:#ADD8E6,color:#000000
    style P2 fill:#90EE90,color:#000000
  end
  subgraph "Problem: Manual Re-run ⚠️"
    direction TB
    S1["Merge 2 deploys first<br/>@next → def456"]
    S2["Merge 1 fails, skipped"]
    S3["Manual re-run Merge 1<br/>@next → abc123"]
    S1 --> S2
    S2 --> S3
    style S3 fill:#FFB6C6,color:#000000
    style S1 fill:#90EE90,color:#000000
  end
  M2 -.-> Q1
  M2 -.-> Q3
  M2 -.-> S1
```
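The concurrency block this describes can be sketched as follows (the group name matches the snippet quoted earlier in the thread): PR events evaluate `cancel-in-progress` to true and cancel superseded runs, while push events evaluate it to false and therefore queue sequentially.

```yaml
concurrency:
  # One queue per branch/ref.
  group: axe-core Tests ${{ github.ref_name }}
  # true for PR events (cancel superseded runs),
  # false for push events (queue them one at a time).
  cancel-in-progress: ${{ github.event_name != 'push' }}
```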
# Background

In order to continue publishing to npm, the org is moving to OIDC tokens. This is set up through GitHub Actions, which has prompted a conversion from CircleCI to GHA for this repository. The two previous PRs, #4918 and #4919, have slowly ported chunks of the dependency workflows over. This PR now provides the deployment support to ship changes from GHA.

# Key Changes

## Addition of Deployment Workflow

**Concurrency**: One queue per branch. Does not cancel previous runs.

The deployment workflow is triggered only on `develop` and `master` commits once the tests have completed. The status of the tests is checked before proceeding with any work; if any failures happened, this workflow does not do anything. First, the "Tests" workflow for the associated commit is waited on. About 12 minutes (overhead for network slowness) is allocated, with 14 total runtime minutes on the job before it is force-killed. If the workflow was successful, the deployment for the branch in question begins. If any other conclusion is found, or none at the end of the timer, the job exits with an error so no deployment occurs.

If the branch is `develop`, execution runs autonomously to build and test the package before publishing. Once all is successful, a publish of the `next` tag is conducted.

If the branch is `master`, execution starts with the `prod-hold` job. This job accesses an environment defined with "Required Reviewers", meaning it will not allow the job to continue until someone on that reviewer list has approved it. Once permission is granted, the `prod-deploy` job kicks off, conducting the same actions as the `next` release does, but with no special tags on the version, which makes it `latest` and stable. Once the publish is out, some key information is retrieved as output for the job.

Once a successful `prod-deploy` has happened, two jobs kick off. One creates the GitHub Release, which uses the same script as was used in CircleCI.
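The `prod-hold` gating described above could be sketched like this (the reviewer list and branch restriction live in the repo's environment settings, not in the YAML; the `run` bodies are placeholders):

```yaml
jobs:
  prod-hold:
    runs-on: ubuntu-latest
    # Referencing the environment pauses the job here until someone
    # on its Required Reviewers list approves the run in the UI.
    environment: production-deploy
    steps:
      - run: echo "Approved; stable deploy may proceed"

  prod-deploy:
    needs: prod-hold
    runs-on: ubuntu-latest
    steps:
      - run: echo "publish stable release"   # placeholder
```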
The other sets up Node.js, waits for the package to be visible on npm if it isn't already, then installs the package globally. After global installation finishes, a few tests of the package that was deployed are run to ensure it is functional.

A key part of both workflows is ensuring the package contents look correct before publishing. To support more expansive testing of the required behavior, a new Node script is introduced, which helps separate the pre-publish and post-publish testing and expands the capability of that test to be more comprehensive.

### New Environment

A new environment is introduced, `production-deploy`. This is restricted to only being accessible from the `master` branch. It is configured with required reviewers, so *before* the environment can be accessed in a workflow run, someone from the required reviewer group must manually approve the access in the GitHub Actions UX.

This secondary environment is needed since we auto-deploy `next` tags off the `develop` branch, but for `master` with stable deploys we want manual approval before anything goes out. Since we can't add the required reviewers to the main environment, as that would impact `develop`, this one is made to target just stable releases.

## New pre-publish validation script

CircleCI is configured with a shell script that did some pre- and post-deploy validation of the contents. While effective for the basics, it was fairly minimal in what it did as a whole. There is now a far more comprehensive suite of checks that runs for this step.

The new script pulls key data from the package to build a report. It checks that _all_ contents of the `files` definition exist on the filesystem. It then checks that the package can be imported in CommonJS. After that, it ensures the package (along with all importable files shipped) is importable in ESM. Finally, it validates that the SRI hashes defined in the SRI history match what is computed from the current version.
The SRI check only occurs on `master` and `release-` branches, as on `develop` the hash is not updated with every change.

This script works by running as a Node ESM script. It pulls key information, does the filesystem check, then links the package to itself. This makes resolving the package pull the linked version on the filesystem instead of the version installed by `@axe-core/webdriverjs` for integration testing. The output is logged to the console and compiled into a more easily viewable step summary. The successful output can be seen in [a comment on this PR](#4929 (comment)).

## Addition of final tests before deployment of stable

One test not added previously, `test_rule_help_version`, which checks for the docs to be live, is now added. As this only impacted stable releases going out, it made sense to reduce the initial scope and introduce it here.

The next test added was an explicit SRI validation. This runs on all commits to `master` and `release-*` branches as well as all PRs targeting them. It uses the old validation pathway for quickly getting the check back in place. We can look into what to do with the SRI stuff itself later.

## Removal of CircleCI Publishing Configuration

Within this patch, the CircleCI configuration to deploy to NPM is removed, since we now want to run only from OIDC on GitHub Actions. The full test suite is not removed with this; it will be removed shortly after this lands, when we convert the required checks over to GHA for merging.
## Visual Overviews

### Deploy Workflow

<figure><legend>Workflow Control Color Coding</legend>

| Color | Element |
|-------|---------|
| Light Blue - `#e1f5ff` | Start (Trigger) |
| Light Red - `#ffe1e1` | End (No Deploy) |
| Light Green - `#e1ffe1` | End (Success states) |
| Light Yellow - `#fff4e1` | Decision gates & Manual approval |

</figure>

<figure><legend>Step Color Coding</legend>

| Color | Step Name |
|-------|-----------|
| Blue - `#b3d9ff` | Checkout Code |
| Purple - `#d9b3ff` | Install Dependencies |
| Pink - `#ffb3d9` | Build Package |
| Green - `#b3ffb3` | Validate Package |
| Orange - `#ffd9b3` | Publish to NPM |

</figure>

<details><summary>Click to open diagram</summary>

```mermaid
flowchart TD
    Start([Push to master/develop]) --> WaitTests[wait-for-tests Job]
    WaitTests --> WT1[Checkout Code]
    WT1 --> WT2[Wait for Tests Workflow]
    WT2 --> TestSuccess{Tests Successful?}
    TestSuccess -->|No| End([End - Tests Failed])
    TestSuccess -->|Yes| BranchCheck{Which Branch?}
    BranchCheck -->|develop| DeployNext[deploy-next Job]
    BranchCheck -->|master| ProdHold[prod-hold Job]
    DeployNext --> DN1[Checkout Code]
    DN1 --> DN2[Install Dependencies]
    DN2 --> DN3[Build Package<br/>npm prepare & build]
    DN3 --> DN4[Determine Prerelease Version]
    DN4 --> DN5[Bump Version]
    DN5 --> DN6[Validate Package]
    DN6 --> DN7[Publish to NPM<br/>--tag=next]
    DN7 --> ValidateNextDeploy[validate-next-deploy Job]
    ValidateNextDeploy --> VND1[Checkout Code]
    VND1 --> VND2[Setup Node.js]
    VND2 --> VND3[Wait for Package on NPM]
    VND3 --> VND4[Validate Installation of next]
    VND4 --> EndNext([End - Next Version Validated])
    ProdHold --> PH1[Manual Approval Gate]
    PH1 --> ProdDeploy[prod-deploy Job]
    ProdDeploy --> PD1[Checkout Code]
    PD1 --> PD2[Install Dependencies]
    PD2 --> PD3[Build Package<br/>npm prepare & build]
    PD3 --> PD4[Validate Package]
    PD4 --> PD5[Publish to NPM<br/>stable version]
    PD5 --> PD6[Get Package Data<br/>version & name]
    PD6 --> CreateRelease[create-github-release Job]
    PD6 --> ValidateDeploy[validate-deploy Job]
    CreateRelease --> CR1[Checkout Code]
    CR1 --> CR2[Install Release Helper<br/>github-release tool]
    CR2 --> CR3[Download Release Script]
    CR3 --> CR4[Make Script Executable]
    CR4 --> CR5[Create GitHub Release]
    CR5 --> EndRelease([GitHub Release Created])
    ValidateDeploy --> VD1[Checkout Code]
    VD1 --> VD2[Setup Node.js]
    VD2 --> VD3[Wait for Package on NPM]
    VD3 --> VD4[Validate Installation of Stable]
    VD4 --> EndValidate([End - Deploy Validated])

    %% Consistent step colors with accessible text
    style DN1 fill:#b3d9ff,color:#001a33,stroke:#0066cc
    style PD1 fill:#b3d9ff,color:#001a33,stroke:#0066cc
    style CR1 fill:#b3d9ff,color:#001a33,stroke:#0066cc
    style WT1 fill:#b3d9ff,color:#001a33,stroke:#0066cc
    style VND1 fill:#b3d9ff,color:#001a33,stroke:#0066cc
    style VD1 fill:#b3d9ff,color:#001a33,stroke:#0066cc
    style DN2 fill:#d9b3ff,color:#1a0033,stroke:#6600cc
    style PD2 fill:#d9b3ff,color:#1a0033,stroke:#6600cc
    style DN3 fill:#ffb3d9,color:#330011,stroke:#cc0066
    style PD3 fill:#ffb3d9,color:#330011,stroke:#cc0066
    style DN6 fill:#b3ffb3,color:#2200,stroke:#00aa00
    style PD4 fill:#b3ffb3,color:#2200,stroke:#00aa00
    style DN7 fill:#ffd9b3,color:#331100,stroke:#cc6600
    style PD5 fill:#ffd9b3,color:#331100,stroke:#cc6600

    %% Decision/Gate styling
    style Start fill:#e1f5ff,color:#001a33
    style End fill:#ffe1e1,color:#330000
    style EndNext fill:#e1ffe1,color:#2200
    style EndRelease fill:#e1ffe1,color:#2200
    style EndValidate fill:#e1ffe1,color:#2200
    style PH1 fill:#fff4e1,color:#331100
    style TestSuccess fill:#fff4e1,color:#331100
    style BranchCheck fill:#fff4e1,color:#331100
```

</details>

### Validation Script

<figure><legend>Color Coding of Scopes</legend>

| Color | Scope |
|-------|-------|
| Dark Gray - `#1a1a1a` | Setup/Teardown |
| Blue - `#004d99` | File Existence Check |
| Orange/Brown - `#664d00` | CommonJS Compatibility Check |
| Purple - `#4d004d` | Importable Check |
| Teal - `#004d4d` | SRI Hash Validation |
| Gray - `#5c5c5c` | Skipped operations |

</figure>

<figure><legend>Color Coding of States</legend>

| Color | State |
|-------|-------|
| Green - `#2d5016` | Success |
| Red - `#7d1007` | Error/Failure |
| Gray - `#5c5c5c` | Skipped |

</figure>

<details><summary>Click to open diagram</summary>

```mermaid
flowchart TD
    Start([Start Script]) --> Init[Initialize Variables]
    Init --> FileCheck[File Existence Check]
    FileCheck --> FileLoop[For each file in pkg.files array]
    FileLoop --> FileExists{File exists?}
    FileExists -->|yes| FilePass[✓ Mark as Found]
    FileExists -->|no| FileFail[✗ Mark as Missing<br/>exitCode++]
    FilePass --> LinkSetup
    FileFail --> LinkSetup
    LinkSetup[Create npm link] -->|success| CJS[CommonJS Compatibility Check]
    LinkSetup -->|fail| LinkError[Log error and skip remaining checks]
    CJS --> CJSRequire{Require package successful?}
    CJSRequire -->|yes| CJSValidate{Validate export type and version exists?}
    CJSRequire -->|no| CJSFail[✗ CommonJS Failed<br/>exitCode++]
    CJSValidate -->|yes| CJSPass[✓ CommonJS Compatible]
    CJSValidate -->|no| CJSFail
    CJSPass --> Import
    CJSFail --> Import
    Import[Importable Check] --> ImportMain{Import main package?}
    ImportMain -->|success| ValidateMainVersion{Version property exists?}
    ImportMain -->|fail| ImportMainFail[✗ Not Importable<br/>anyCaught = true]
    ValidateMainVersion -->|yes| ImportMainPass[✓ Mark as Importable]
    ValidateMainVersion -->|no| ImportMainFail
    ImportMainPass --> ImportLoop
    ImportMainFail --> ImportLoop
    ImportLoop[For each file in pkg.files] --> FileType{File type?}
    FileType -->|skip .txt, .d.ts, folders| CheckAnyCaught
    FileType -->|check resolve| NodeModCheck{Resolves to node_modules?}
    NodeModCheck -->|yes| ImportNodeFail[✗ Resolves to node_modules<br/>exitCode++]
    NodeModCheck -->|no| ImportTry{Import successful?}
    ImportTry -->|yes| CheckVersion{Has version property?}
    ImportTry -->|no| ImportFileFail[✗ Not Importable<br/>anyCaught = true]
    CheckVersion -->|yes| ImportPass[✓ Importable]
    CheckVersion -->|no| ImportFileFail
    ImportPass --> CheckAnyCaught
    ImportFileFail --> CheckAnyCaught
    ImportNodeFail --> CheckAnyCaught
    CheckAnyCaught{anyCaught true?} -->|yes| ImportExit[exitCode++]
    CheckAnyCaught -->|no| SRICheck
    ImportExit --> SRICheck
    SRICheck[SRI Hash Validation] --> BranchCheck{Branch is master or release-*?}
    BranchCheck -->|no| SkipSRI[Skip SRI validation]
    BranchCheck -->|yes| SRILoad[Load sri-history.json<br/>Calculate hashes for axe.js and axe.min.js]
    SRILoad --> SRICompare{Hash matches expected?}
    SRICompare -->|yes| SRIPass[✓ Valid SRI]
    SRICompare -->|no| SRIFail[✗ Invalid SRI<br/>exitCode++]
    SRIPass --> Cleanup
    SRIFail --> Cleanup
    SkipSRI --> Cleanup
    Cleanup[Unlink npm package] --> End([Exit with exitCode])
    LinkError --> CleanupError[Attempt unlink if needed]
    CleanupError --> End

    style Start fill:#1a1a1a,stroke:#000,stroke-width:2px,color:#fff
    style Init fill:#1a1a1a,stroke:#000,stroke-width:2px,color:#fff
    style End fill:#1a1a1a,stroke:#000,stroke-width:2px,color:#fff
    style FileCheck fill:#004d99,stroke:#000,stroke-width:2px,color:#fff
    style FileLoop fill:#004d99,stroke:#000,stroke-width:2px,color:#fff
    style FileExists fill:#004d99,stroke:#000,stroke-width:2px,color:#fff
    style FilePass fill:#2d5016,stroke:#000,stroke-width:2px,color:#fff
    style FileFail fill:#7d1007,stroke:#000,stroke-width:2px,color:#fff
    style LinkSetup fill:#1a1a1a,stroke:#000,stroke-width:2px,color:#fff
    style LinkError fill:#7d1007,stroke:#000,stroke-width:2px,color:#fff
    style CJS fill:#664d00,stroke:#000,stroke-width:2px,color:#fff
    style CJSRequire fill:#664d00,stroke:#000,stroke-width:2px,color:#fff
    style CJSValidate fill:#664d00,stroke:#000,stroke-width:2px,color:#fff
    style CJSPass fill:#2d5016,stroke:#000,stroke-width:2px,color:#fff
    style CJSFail fill:#7d1007,stroke:#000,stroke-width:2px,color:#fff
    style Import fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style ImportMain fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style ValidateMainVersion fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style ImportMainPass fill:#2d5016,stroke:#000,stroke-width:2px,color:#fff
    style ImportLoop fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style FileType fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style NodeModCheck fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style ImportTry fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style CheckVersion fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style CheckAnyCaught fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style ImportExit fill:#4d004d,stroke:#000,stroke-width:2px,color:#fff
    style ImportPass fill:#2d5016,stroke:#000,stroke-width:2px,color:#fff
    style ImportMainFail fill:#7d1007,stroke:#000,stroke-width:2px,color:#fff
    style ImportFileFail fill:#7d1007,stroke:#000,stroke-width:2px,color:#fff
    style ImportNodeFail fill:#7d1007,stroke:#000,stroke-width:2px,color:#fff
    style SRICheck fill:#004d4d,stroke:#000,stroke-width:2px,color:#fff
    style BranchCheck fill:#004d4d,stroke:#000,stroke-width:2px,color:#fff
    style SRILoad fill:#004d4d,stroke:#000,stroke-width:2px,color:#fff
    style SRICompare fill:#004d4d,stroke:#000,stroke-width:2px,color:#fff
    style SRIPass fill:#2d5016,stroke:#000,stroke-width:2px,color:#fff
    style SRIFail fill:#7d1007,stroke:#000,stroke-width:2px,color:#fff
    style SkipSRI fill:#5c5c5c,stroke:#000,stroke-width:2px,color:#fff
    style Cleanup fill:#1a1a1a,stroke:#000,stroke-width:2px,color:#fff
    style CleanupError fill:#1a1a1a,stroke:#000,stroke-width:2px,color:#fff
```

</details>

Fixes: #4912
This patch ports the main test workflow over to GHA. It does not remove the CircleCI configuration, as we still need to convert the required checks over; that will only be removed after publishing is ported over to GitHub.
One key change from the CircleCI setup is that the browser tests and the examples have to do the build internally instead of using the build artifact. I have not had time to fully investigate what is going on with the differences; I believe re-use fails now because of some slight difference in how the caches were done before versus how they are done in GitHub Actions.
As an aside, I realized that in the previous nightly PR I did mess up one aspect. Since the browsers are individual steps now, if the first one fails it was blocking the second from proceeding. For that situation, I added an instruction to always run the second browser. This is good enough for now; we can optimize further when we set up better failure reporting.
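The "always run the second browser" instruction can be sketched like this (step names and commands are illustrative placeholders): `if: always()` makes a step run regardless of whether earlier steps in the job failed.

```yaml
steps:
  - name: Test first browser
    run: npm run test:chrome-nightly   # placeholder command
  - name: Test second browser
    # Run even if the step above failed, so one browser's failure
    # doesn't hide the other browser's results.
    if: always()
    run: npm run test:firefox-nightly  # placeholder command
```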
Refs: #4912