Summary
The new configure_gh_for_ghe.sh step (introduced in v0.60.0) fails on GHE Cloud with data residency because it runs gh auth login while GH_TOKEN is already set in the environment. The gh CLI rejects this:
The value of the GH_TOKEN environment variable is being used for authentication.
To have GitHub CLI store credentials instead, first clear the value from the environment.
This causes the agent job to fail before reaching the Execute step.
Environment
GHE Cloud with data residency; host: contoso-aw.ghe.com
What Happens
The compiled lock file includes a new step:
- name: Configure gh CLI for GitHub Enterprise
  run: bash ${RUNNER_TEMP}/gh-aw/actions/configure_gh_for_ghe.sh
  env:
    GH_TOKEN: ***
The script correctly detects the GHE host from GITHUB_SERVER_URL:
Detected GitHub host from GITHUB_SERVER_URL: contoso-aw.ghe.com
Configuring gh CLI for GitHub Enterprise host: contoso-aw.ghe.com
Authenticating gh CLI with host: contoso-aw.ghe.com
But then gh auth login fails because GH_TOKEN is already in the environment (exit code 1).
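The host detection itself is the easy part. As a minimal sketch (the URL value is assumed for illustration), it amounts to stripping the scheme and any trailing path from GITHUB_SERVER_URL:

```shell
# Hypothetical illustration of deriving the GHE host from GITHUB_SERVER_URL.
GITHUB_SERVER_URL="https://contoso-aw.ghe.com"  # value assumed for this sketch

host="${GITHUB_SERVER_URL#*://}"  # strip the scheme, e.g. "https://"
host="${host%%/*}"                # strip any path component
echo "$host"                      # contoso-aw.ghe.com
```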
Suggested Fix: Use GH_HOST Instead of gh auth login
When GH_TOKEN is present, the gh CLI already authenticates via that token — it just needs to know which host to target. Setting GH_HOST=contoso-aw.ghe.com as a job-level environment variable is sufficient and avoids the gh auth login conflict entirely.
This is simpler and more robust than running gh auth login:
agent:
  env:
    GH_HOST: contoso-aw.ghe.com  # derived from GITHUB_SERVER_URL
    # ... other env vars ...
With GH_HOST set at the job level, all steps — user-authored (e.g., gh issue list), compiler-generated, and the Copilot CLI Execute step — automatically target the correct host. No gh auth login needed.
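For illustration, a user-authored step under that job-level env then needs no per-step host configuration (step name and secret name are hypothetical):

```yaml
steps:
  - name: List issues on the GHE host
    run: gh issue list
    env:
      GH_TOKEN: ${{ secrets.GH_TOKEN }}
      # GH_HOST is inherited from the job-level env,
      # so gh targets contoso-aw.ghe.com without gh auth login.
```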
We previously confirmed this approach works: run 22240726 passed with manual GH_HOST entries added to individual steps. A job-level GH_HOST would be even cleaner.
If the script must also handle cases where GH_TOKEN is not set, it could check for it first:
if [ -n "$GH_TOKEN" ]; then
  # GH_TOKEN handles auth; just export GH_HOST
  echo "GH_HOST=${GH_HOST}" >> "$GITHUB_ENV"
else
  # No token in env; use gh auth login
  gh auth login --hostname "$GH_HOST" --with-token <<< "$SOME_TOKEN"
fi
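A minimal sketch of the token-present branch, substituting a temp file for the real GITHUB_ENV (all values here are hypothetical):

```shell
# Simulate the GH_TOKEN-aware branch outside of Actions.
GITHUB_ENV="$(mktemp)"            # stand-in for the runner's GITHUB_ENV file
GH_TOKEN="dummy-token"            # placeholder; a real run gets this from secrets
GH_HOST="contoso-aw.ghe.com"

if [ -n "$GH_TOKEN" ]; then
  # Token already authenticates gh; only the host needs exporting.
  echo "GH_HOST=${GH_HOST}" >> "$GITHUB_ENV"
fi

cat "$GITHUB_ENV"                 # GH_HOST=contoso-aw.ghe.com
```

Subsequent steps in the same job would then see GH_HOST in their environment, which is exactly what the job-level approach above achieves without a script.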
Related