{"version":"https://jsonfeed.org/version/1","title":"Human Who Codes","home_page_url":"https://humanwhocodes.com","feed_url":"https://humanwhocodes.com/feeds/all.json","description":"The Official Web Site of Nicholas C. Zakas","expired":false,"author":{"name":"Nicholas C. Zakas"},"items":[{"id":"https://humanwhocodes.com/blog/2026/04/improving-developer-velocity-github-merge-queue/","url":"https://humanwhocodes.com/blog/2026/04/improving-developer-velocity-github-merge-queue/","title":"Improving developer velocity with GitHub merge queue","author":{"name":"Nicholas C. Zakas"},"summary":"GitHub merge queue reduces the manual churn of keeping pull requests current by automatically retesting them in order before merge.","content_text":"\nImagine this: you're working at a company that uses GitHub as its source control platform. Each repository follows established best practices, such as requiring continuous integration (CI) tests to pass before a pull request can be merged. Branch protection rules maintain a linear commit history and require pull requests to be approved before merging. Developers may merge their own pull requests once CI passes and they have the required approvals.\n\nThe linear commit history requirement means that each pull request must be tested on top of `HEAD` in CI before it can be merged. GitHub provides an \"Update Branch\" button on pull requests for this purpose. As a developer, you click \"Update Branch\" and wait for CI to pass again.\n\nUnfortunately, CI takes ten minutes to run, so you step away for a bathroom break and a snack. When you get back to your desk, CI has passed, but the \"Update Branch\" button is enabled again. In the time it took CI to finish, someone else merged a pull request, so you have to start the entire process over.\n\nThe situation is frustrating with just one pull request, so imagine what happens if you and other engineers often have more than one open at a time. 
That means jumping back and forth between pull requests, repeatedly clicking \"Update Branch,\" and waiting to see whether this is the time you actually make it through before someone else merges a different pull request.\n\nIn situations like this, the GitHub merge queue can free up developer time by eliminating the babysitting of pull requests.\n\n## What does a GitHub merge queue do?\n\nA GitHub merge queue acts as an intermediate step between approving a pull request and landing its commit on the target branch. Instead of merging a pull request as soon as it's ready, you add it to the merge queue. The queue collects pending pull requests and tests each one on top of the others, up to a maximum batch size of five by default. You can configure the same CI tests to run for the merge queue, and pull requests are kicked back to their authors if those tests fail.\n\nBehind the scenes, GitHub waits until the minimum group size is met, unless a configurable timeout is reached, and then adds the queued pull requests, up to the maximum batch size, to a temporary branch based on `HEAD` in queue order. CI runs for each pull request as it is added to the temporary branch, and the next pull request is added only after the previous one's CI checks pass. If a pull request fails CI, it is removed from the queue for further review by the author. The original pull request remains open on GitHub until it is successfully merged through the merge queue. The temporary branch containing the previously passing pull requests is merged into `HEAD`, and a new temporary branch starts with the pull request immediately after the failing one. The process then continues.\n\nOnce GitHub has a temporary branch where all pull requests have passed CI, it merges those commits into `HEAD` in a single operation. 
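The batching and failure-handling flow can be sketched as a small simulation. This is illustrative only: `processQueue` and its `passesCI` predicate are hypothetical names, not GitHub's actual implementation.

```javascript
// Hypothetical sketch of merge-queue batching (not GitHub internals).
// Each pull request is stacked onto a temporary branch and tested on
// top of the ones before it; a failure ejects that pull request and
// merges the passing prefix, then a new temporary branch begins.
function processQueue(queue, passesCI) {
  const merged = [];   // pull requests landed on HEAD, in order
  const rejected = []; // pull requests removed after failing checks
  let tempBranch = []; // pull requests stacked on the current branch

  for (const pr of queue) {
    tempBranch.push(pr);
    if (passesCI(tempBranch)) {
      continue; // checks passed; stack the next pull request on top
    }
    tempBranch.pop();           // drop the failing pull request
    rejected.push(pr);          // its author must fix and requeue it
    merged.push(...tempBranch); // merge everything that passed before it
    tempBranch = [];            // start a fresh temporary branch
  }

  merged.push(...tempBranch); // merge the final all-passing batch
  return { merged, rejected };
}
```

With a queue of pull requests 1 through 4 where pull request 3 fails its checks, this sketch merges 1 and 2, rejects 3, and lands 4 on a fresh temporary branch, mirroring the worked example later in this post.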
The commits land in the same order they appeared in the merge queue.\n\nThe biggest advantage of this approach is that every pull request is automatically tested on top of all the pull requests that came before it in the queue. You never have to click \"Update Branch\" because that effectively happens as part of the merge queue process. CI now runs twice: once before the pull request is added to the queue and again during temporary branch creation. That dramatically reduces the manual work required to land a pull request on `HEAD`.\n\nAnother advantage is that the merge queue is editable. You can reorder pull requests or remove them from the queue before they are merged, giving you full control over which pull requests move forward and which ones wait.\n\n## Setting up a GitHub merge queue\n\nTo set up a merge queue for a repository, there are three steps:\n\n1. Ensure your CI tests run in the merge queue\n2. Enable squash merges for the repository\n3. Enable the merge queue with a ruleset\n\nEach step is important for the overall operation of the merge queue.\n\n### Ensure your CI tests run in the merge queue\n\nThe first step in setting up a GitHub merge queue is configuring CI for the queue. You can do that by adding the `merge_group` trigger[^merge-group-trigger] to the same GitHub workflow file that runs CI for pull requests:\n\n```yaml\non:\n  pull_request:\n    branches: [main]\n  merge_group:\n    branches: [main]\n    types: [checks_requested]\n```\n\nThis ensures you're running the same checks when a pull request is opened or updated and when a merge queue group is tested. As of this writing, `checks_requested` is the only available `merge_group` type.\n\nYou should also review the jobs in your CI workflow to make sure they all make sense to run on the merge queue branch. The merge queue CI process won't have access to pull request-specific information, such as the pull request title or pull request branch name. 
You can scope jobs to not run for the merge queue using an `if` condition, like this:\n\n```yaml\njobs:\n  lint:\n    if: github.event_name != 'merge_group'\n    runs-on: ubuntu-latest\n    steps:\n      - run: echo \"linting...\"\n```\n\nAlternatively, you can check the current branch name to see if it's a merge queue branch, which exists under `refs/heads/gh-readonly-queue/`:\n\n```yaml\njobs:\n  lint:\n    if: ${{ !startsWith(github.ref, 'refs/heads/gh-readonly-queue/') }}\n    runs-on: ubuntu-latest\n    steps:\n      - run: echo \"linting...\"\n```\n\n**Important:** Double-check your ruleset's required status checks. You cannot specify different required status checks for a merge queue, so you need to ensure they work in both cases. A skipped job (such as when the `if` condition is not met) is considered a successful check for this purpose.\n\n### Enable squash merges for the repository\n\nBecause a merge queue takes all commits from the temporary branch and merges them into `HEAD`, it's important to squash pull requests[^squash-pull-requests] before they are merged. That ensures each pull request adds a single commit to `HEAD`, which makes individual pull requests much easier to revert if necessary. To do that:\n\n1. Go to the repository settings page\n2. Scroll down to the Pull Requests section\n3. Check the box next to \"Allow squash merges\"\n4. Uncheck the other merge types to avoid confusion\n\nThis matters because the merge queue can use only merge strategies that are enabled for the repository.\n\n### Enable the merge queue with a ruleset\n\nMerge queues can be enabled only through a ruleset applied to a branch. You can add a merge queue to an existing ruleset or create a new one. Here are the steps:\n\n1. Go to the repository settings page\n2. On the left navigation, click \"Rules\" and then \"Rulesets\".\n3. Click on an existing ruleset or create a new one.\n4. 
Scroll down and check the box next to \"Require merge queue\".\n\nThere are several merge queue settings you can customize for your use case:\n\n* **Build concurrency:** The maximum number of queued pull requests running checks at the same time.\n* **Minimum group size:** The minimum number of queued pull requests required before a temporary branch is created to test them together. This is set to 1 by default; otherwise, a single queued pull request could get stuck waiting for others. Consider increasing this only in a high-velocity repository where you want to throttle temporary branch creation.\n* **Maximum group size:** The maximum number of queued pull requests to include in a single temporary branch. The default is 5, which strikes a good balance between being too small, which creates more temporary branches, and being too large, which increases the likelihood of a failure as more pull requests must work together.\n* **Wait time to meet minimum group size:** The number of minutes to wait for the minimum group size to be met. The default is five minutes, which means GitHub waits up to five minutes to see whether the minimum group size is reached. If it is not, GitHub starts merging queued pull requests anyway. This backstop keeps queued pull requests from waiting too long before moving to a temporary branch.\n* **Require all queue entries to pass required checks:** When checked, which is the default, each queued pull request must pass status checks after being added to the temporary branch on top of the preceding pull requests. When unchecked, only the `HEAD` pull request on the temporary branch must pass status checks. Checking only `HEAD` provides faster feedback and uses fewer resources, but it also makes failures harder to diagnose because you do not immediately know which pull request caused the problem.\n* **Status check timeout:** The number of minutes GitHub waits for status checks to complete for queued pull requests. 
If status checks take longer than this limit, GitHub assumes they failed. This prevents queued pull requests from waiting indefinitely for checks that will never complete.\n\nBe sure to click \"Save changes\" at the bottom of the page whenever you edit these settings.\n\n## Example\n\nAssume you're using the default settings for a merge queue on the `main` branch and there are seven pull requests, numbered 1 through 7, that have passed CI and are ready to be added to the merge queue. The developer clicks the \"Merge when ready\" button at the bottom of each pull request to add them, in order, to the queue. The merge queue looks like this:\n\n1. Pull request 1\n1. Pull request 2\n1. Pull request 3\n1. Pull request 4\n1. Pull request 5\n1. Pull request 6\n1. Pull request 7\n\nBecause the minimum group size of 1 has been met, GitHub creates a temporary branch and squashes pull request 1's commits into it. Pull request 1's status checks run, and because they pass, pull request 2's commits are squashed into the temporary branch. Its status checks also pass, so pull request 3's commits are squashed into the branch. Unfortunately, pull request 3's status checks fail. The pull request on GitHub is marked as failed, and the squashed commit is removed from the temporary branch. Pull requests 1 and 2 are merged into `main` and closed. The merge queue now looks like this:\n\n1. Pull request 4\n1. Pull request 5\n1. Pull request 6\n1. Pull request 7\n\nPull request 3 has been removed from the queue pending developer intervention to fix the failing status checks. A new temporary branch is created, and pull request 4's commits are squashed into it. It turns out that fixing pull request 3 is fairly easy, so the developer clicks \"Merge when ready\" again. The merge queue now looks like this:\n\n1. Pull request 5\n1. Pull request 6\n1. Pull request 7\n1. Pull request 3\n\nThe remaining pull requests are then added to the temporary branch one by one so their status checks can run. 
This time, all five pull requests pass their checks, so they are all merged into `main`.\n\n## Conclusion\n\nThe GitHub merge queue is one of those features that sounds like a small quality-of-life improvement until you start using it and realize how much time you were spending babysitting pull requests. By automating the \"update and wait\" cycle, it frees developers to focus on writing code instead of monitoring CI dashboards. The setup is straightforward: add the `merge_group` trigger to your CI workflow, enable squash merges, and turn on the merge queue through a ruleset. From there, the defaults work well for most teams, and the configurable settings give you room to tune behavior as your repository's velocity grows. If your team spends meaningful time managing pull request ordering or waiting for CI to clear, the merge queue is worth enabling.\n\n[^merge-group-trigger]: [Events that trigger workflows](https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#merge_group)\n[^squash-pull-requests]: [Configuring commit squashing for pull requests](https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/configuring-commit-squashing-for-pull-requests?versionId=free-pro-team%40latest&productId=actions&restPage=reference%2Cworkflows-and-actions%2Cevents-that-trigger-workflows)\n","content_html":"&lt;p&gt;Imagine this: you’re working at a company that uses GitHub as its source control platform. Each repository follows established best practices, such as requiring continuous integration (CI) tests to pass before a pull request can be merged. Branch protection rules maintain a linear commit history and require pull requests to be approved before merging. 
Developers may merge their own pull requests once CI passes and they have the required approvals.&lt;/p&gt;\n&lt;p&gt;The linear commit history requirement means that each pull request must be tested on top of &lt;code&gt;HEAD&lt;/code&gt; in CI before it can be merged. GitHub provides an “Update Branch” button on pull requests for this purpose. As a developer, you click “Update Branch” and wait for CI to pass again.&lt;/p&gt;\n&lt;p&gt;Unfortunately, CI takes ten minutes to run, so you step away for a bathroom break and a snack. When you get back to your desk, CI has passed, but the “Update Branch” button is enabled again. In the time it took CI to finish, someone else merged a pull request, so you have to start the entire process over.&lt;/p&gt;\n&lt;p&gt;The situation is frustrating with just one pull request, so imagine what happens if you and other engineers often have more than one open at a time. That means jumping back and forth between pull requests, repeatedly clicking “Update Branch,” and waiting to see whether this is the time you actually make it through before someone else merges a different pull request.&lt;/p&gt;\n&lt;p&gt;In situations like this, the GitHub merge queue can free up developer time by eliminating the babysitting of pull requests.&lt;/p&gt;\n&lt;h2 id=&quot;what-does-a-github-merge-queue-do&quot;&gt;What does a GitHub merge queue do?&lt;/h2&gt;\n&lt;p&gt;A GitHub merge queue acts as an intermediate step between approving a pull request and landing its commit on the target branch. Instead of merging a pull request as soon as it’s ready, you add it to the merge queue. The queue collects pending pull requests and tests each one on top of the others, up to a maximum batch size of five by default. 
You can configure the same CI tests to run for the merge queue, and pull requests are kicked back to their authors if those tests fail.&lt;/p&gt;\n&lt;p&gt;Behind the scenes, GitHub waits until the minimum group size is met, unless a configurable timeout is reached, and then adds the queued pull requests, up to the maximum batch size, to a temporary branch based on &lt;code&gt;HEAD&lt;/code&gt; in queue order. CI runs for each pull request as it is added to the temporary branch, and the next pull request is added only after the previous one’s CI checks pass. If a pull request fails CI, it is removed from the queue for further review by the author. The original pull request remains open on GitHub until it is successfully merged through the merge queue. The temporary branch containing the previously passing pull requests is merged into &lt;code&gt;HEAD&lt;/code&gt;, and a new temporary branch starts with the pull request immediately after the failing one. The process then continues.&lt;/p&gt;\n&lt;p&gt;Once GitHub has a temporary branch where all pull requests have passed CI, it merges those commits into &lt;code&gt;HEAD&lt;/code&gt; in a single operation. The commits land in the same order they appeared in the merge queue.&lt;/p&gt;\n&lt;p&gt;The biggest advantage of this approach is that every pull request is automatically tested on top of all the pull requests that came before it in the queue. You never have to click “Update Branch” because that effectively happens as part of the merge queue process. CI now runs twice: once before the pull request is added to the queue and again during temporary branch creation. That dramatically reduces the manual work required to land a pull request on &lt;code&gt;HEAD&lt;/code&gt;.&lt;/p&gt;\n&lt;p&gt;Another advantage is that the merge queue is editable. 
You can reorder pull requests or remove them from the queue before they are merged, giving you full control over which pull requests move forward and which ones wait.&lt;/p&gt;\n&lt;h2 id=&quot;setting-up-a-github-merge-queue&quot;&gt;Setting up a GitHub merge queue&lt;/h2&gt;\n&lt;p&gt;To set up a merge queue for a repository, there are three steps:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Ensure your CI tests run in the merge queue&lt;/li&gt;\n&lt;li&gt;Enable squash merges for the repository&lt;/li&gt;\n&lt;li&gt;Enable the merge queue with a ruleset&lt;/li&gt;\n&lt;/ol&gt;\n&lt;p&gt;Each step is important for the overall operation of the merge queue.&lt;/p&gt;\n&lt;h3 id=&quot;ensure-your-ci-tests-run-in-the-merge-queue&quot;&gt;Ensure your CI tests run in the merge queue&lt;/h3&gt;\n&lt;p&gt;The first step in setting up a GitHub merge queue is configuring CI for the queue. You can do that by adding the &lt;code&gt;merge_group&lt;/code&gt; trigger&lt;sup&gt;&lt;a href=&quot;#user-content-fn-merge-group-trigger&quot; id=&quot;user-content-fnref-merge-group-trigger&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; to the same GitHub workflow file that runs CI for pull requests:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;on&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;pull_request&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: 
#85E89D&quot;&gt;branches&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;main&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;merge_group&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;branches&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;main&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;types&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;checks_requested&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;This ensures you’re running the same checks when a pull request is opened or updated and when a merge queue group is tested. As of this writing, &lt;code&gt;checks_requested&lt;/code&gt; is the only available &lt;code&gt;merge_group&lt;/code&gt; type.&lt;/p&gt;\n&lt;p&gt;You should also review the jobs in your CI workflow to make sure they all make sense to run on the merge queue branch. The merge queue CI process won’t have access to pull request-specific information, such as the pull request title or pull request branch name. 
You can scope jobs to not run for the merge queue using an &lt;code&gt;if&lt;/code&gt; condition, like this:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;jobs&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;lint&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;if&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;github.event_name != &apos;merge_group&apos;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;runs-on&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;ubuntu-latest&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;steps&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;      - &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;run&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;echo &quot;linting...&quot;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Alternately, you can check the current branch name to see if it’s a merge 
queue branch, which exists under &lt;code&gt;refs/heads/gh-readonly-queue/&lt;/code&gt;:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;jobs&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;lint&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;if&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;${{ !startsWith(github.ref, &apos;refs/heads/gh-readonly-queue/&apos;) }}&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;runs-on&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;ubuntu-latest&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;steps&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;:&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;      - &lt;/span&gt;&lt;span style=&quot;color: #85E89D&quot;&gt;run&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;echo &quot;linting...&quot;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Double-check your ruleset’s 
required status checks. You cannot specify different required status checks for a merge queue so you need to ensure they work in both cases. A skipped job (such as when the &lt;code&gt;if&lt;/code&gt; condition is not met) is considered a successful check for this purpose.&lt;/p&gt;\n&lt;h3 id=&quot;enable-squash-merges-for-the-repository&quot;&gt;Enable squash merges for the repository&lt;/h3&gt;\n&lt;p&gt;Because a merge queue takes all commits from the temporary branch and merges them into &lt;code&gt;HEAD&lt;/code&gt;, it’s important to squash pull requests&lt;sup&gt;&lt;a href=&quot;#user-content-fn-squash-pull-requests&quot; id=&quot;user-content-fnref-squash-pull-requests&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; before they are merged. That ensures each pull request adds a single commit to &lt;code&gt;HEAD&lt;/code&gt;, which makes individual pull requests much easier to revert if necessary. To do that:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Go to the repository settings page&lt;/li&gt;\n&lt;li&gt;Scroll down to the Pull Requests section&lt;/li&gt;\n&lt;li&gt;Check the box next to “Allow squash merges”&lt;/li&gt;\n&lt;li&gt;Uncheck the other merge types to avoid confusion&lt;/li&gt;\n&lt;/ol&gt;\n&lt;p&gt;This matters because the merge queue can use only merge strategies that are enabled for the repository.&lt;/p&gt;\n&lt;h3 id=&quot;enable-the-merge-queue-with-a-ruleset&quot;&gt;Enable the merge queue with a ruleset&lt;/h3&gt;\n&lt;p&gt;Merge queues can be enabled only through a ruleset applied to a branch. You can add a merge queue to an existing ruleset or create a new one. 
Here are the steps:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Go to the repository settings page&lt;/li&gt;\n&lt;li&gt;On the left navigation, click “Rules” and then “Rulesets”.&lt;/li&gt;\n&lt;li&gt;Click on an existing ruleset or create a new one.&lt;/li&gt;\n&lt;li&gt;Scroll down and check the box next to “Require merge queue”.&lt;/li&gt;\n&lt;/ol&gt;\n&lt;p&gt;There are several merge queue settings you can customize for your use case:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Build concurrency:&lt;/strong&gt; The maximum number of queued pull requests running checks at the same time.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Minimum group size:&lt;/strong&gt; The minimum number of queued pull requests required before a temporary branch is created to test them together. This is set to 1 by default; otherwise, a single queued pull request could get stuck waiting for others. Consider increasing this only in a high-velocity repository where you want to throttle temporary branch creation.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Maximum group size:&lt;/strong&gt; The maximum number of queued pull requests to include in a single temporary branch. The default is 5, which strikes a good balance between being too small, which creates more temporary branches, and being too large, which increases the likelihood of a failure as more pull requests must work together.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Wait time to meet minimum group size:&lt;/strong&gt; The number of minutes to wait for the minimum group size to be met. The default is five minutes, which means GitHub waits up to five minutes to see whether the minimum group size is reached. If it is not, GitHub starts merging queued pull requests anyway. 
This backstop keeps queued pull requests from waiting too long before moving to a temporary branch.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Require all queue entries to pass required checks:&lt;/strong&gt; When checked, which is the default, each queued pull request must pass status checks after being added to the temporary branch on top of the preceding pull requests. When unchecked, only the &lt;code&gt;HEAD&lt;/code&gt; pull request on the temporary branch must pass status checks. Checking only &lt;code&gt;HEAD&lt;/code&gt; provides faster feedback and uses fewer resources, but it also makes failures harder to diagnose because you do not immediately know which pull request caused the problem.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Status check timeout:&lt;/strong&gt; The number of minutes GitHub waits for status checks to complete for queued pull requests. If status checks take longer than this limit, GitHub assumes they failed. This prevents queued pull requests from waiting indefinitely for checks that will never complete.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;Be sure to click “Save changes” at the bottom of the page whenever you edit these settings.&lt;/p&gt;\n&lt;h2 id=&quot;example&quot;&gt;Example&lt;/h2&gt;\n&lt;p&gt;Assume you’re using the default settings for a merge queue on the &lt;code&gt;main&lt;/code&gt; branch and there are seven pull requests, numbered 1 through 7, that have passed CI and are ready to be added to the merge queue. The developer clicks the “Merge when ready” button at the bottom of each pull request to add them, in order, to the queue. 
The merge queue looks like this:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Pull request 1&lt;/li&gt;\n&lt;li&gt;Pull request 2&lt;/li&gt;\n&lt;li&gt;Pull request 3&lt;/li&gt;\n&lt;li&gt;Pull request 4&lt;/li&gt;\n&lt;li&gt;Pull request 5&lt;/li&gt;\n&lt;li&gt;Pull request 6&lt;/li&gt;\n&lt;li&gt;Pull request 7&lt;/li&gt;\n&lt;/ol&gt;\n&lt;p&gt;Because the minimum group size of 1 has been met, GitHub creates a temporary branch and squashes pull request 1’s commits into it. Pull request 1’s status checks run, and because they pass, pull request 2’s commits are squashed into the temporary branch. Its status checks also pass, so pull request 3’s commits are squashed into the branch. Unfortunately, pull request 3’s status checks fail. The pull request on GitHub is marked as failed, and the squashed commit is removed from the temporary branch. Pull requests 1 and 2 are merged into &lt;code&gt;main&lt;/code&gt; and closed. The merge queue now looks like this:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Pull request 4&lt;/li&gt;\n&lt;li&gt;Pull request 5&lt;/li&gt;\n&lt;li&gt;Pull request 6&lt;/li&gt;\n&lt;li&gt;Pull request 7&lt;/li&gt;\n&lt;/ol&gt;\n&lt;p&gt;Pull request 3 has been removed from the queue pending developer intervention to fix the failing status checks. A new temporary branch is created, and pull request 4’s commits are squashed into it. It turns out that fixing pull request 3 is fairly easy, so the developer clicks “Merge when ready” again. The merge queue now looks like this:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Pull request 5&lt;/li&gt;\n&lt;li&gt;Pull request 6&lt;/li&gt;\n&lt;li&gt;Pull request 7&lt;/li&gt;\n&lt;li&gt;Pull request 3&lt;/li&gt;\n&lt;/ol&gt;\n&lt;p&gt;The remaining pull requests are then added to the temporary branch one by one so their status checks can run. 
This time, all five pull requests pass their checks, so they are all merged into &lt;code&gt;main&lt;/code&gt;.&lt;/p&gt;\n&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;\n&lt;p&gt;The GitHub merge queue is one of those features that sounds like a small quality-of-life improvement until you start using it and realize how much time you were spending babysitting pull requests. By automating the “update and wait” cycle, it frees developers to focus on writing code instead of monitoring CI dashboards. The setup is straightforward: add the &lt;code&gt;merge_group&lt;/code&gt; trigger to your CI workflow, enable squash merges, and turn on the merge queue through a ruleset. From there, the defaults work well for most teams, and the configurable settings give you room to tune behavior as your repository’s velocity grows. If your team spends meaningful time managing pull request ordering or waiting for CI to clear, the merge queue is worth enabling.&lt;/p&gt;\n&lt;section data-footnotes=&quot;&quot; class=&quot;footnotes&quot;&gt;&lt;h2 class=&quot;sr-only&quot; id=&quot;footnote-label&quot;&gt;Footnotes&lt;/h2&gt;\n&lt;ol&gt;\n&lt;li id=&quot;user-content-fn-merge-group-trigger&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#merge_group&quot;&gt;Events that trigger workflows&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-merge-group-trigger&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-squash-pull-requests&quot;&gt;\n&lt;p&gt;&lt;a 
href=&quot;https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/configuring-pull-request-merges/configuring-commit-squashing-for-pull-requests?versionId=free-pro-team%40latest&amp;#x26;productId=actions&amp;#x26;restPage=reference%2Cworkflows-and-actions%2Cevents-that-trigger-workflows&quot;&gt;Configuring commit squashing for pull requests&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-squash-pull-requests&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;/ol&gt;\n&lt;/section&gt;","tags":["GitHub","Git","Merge Queue","CI"],"date_published":"2026-04-06T00:00:00.000Z","date_updated":"2026-04-06T00:00:00.000Z"},{"id":"https://humanwhocodes.com/blog/2026/03/proxying-fetch-requests-server-side-javascript/","url":"https://humanwhocodes.com/blog/2026/03/proxying-fetch-requests-server-side-javascript/","title":"Proxying fetch requests in server-side JavaScript","author":{"name":"Nicholas C. Zakas"},"summary":"Learn how to proxy fetch() requests in Node.js, Deno, Bun, and Cloudflare Workers to better monitor and control your server-side traffic.","content_text":"\nIf you retrieve content from the internet, you'll eventually need to proxy requests. Proxying is useful for logging traffic, modifying headers, altering content, improving performance through caching, or hiding an originating IP address. Most server-side JavaScript runtimes allow you to proxy requests using `fetch()`.\n\nThe Fetch standard[^1] doesn't specify request proxying because it's a browser-focused API. Consequently, server-side runtimes implement proxying differently.\n\n## Node.js\n\nNode.js natively supports proxying `fetch()` requests through environment variables as of Node.js v22.21.0 and v24.5.0[^2]. 
You can set the `HTTP_PROXY` and `HTTPS_PROXY` environment variables to specify the proxy server for HTTP and HTTPS requests, respectively:\n\n```shell\n# Enable Node.js to use environment variables for proxying\nexport NODE_USE_ENV_PROXY=1\n\nexport HTTP_PROXY=http://username:password@proxy-server.com:8080\nexport HTTPS_PROXY=https://username:password@proxy-server.com:8080\n```\n\nThe `NODE_USE_ENV_PROXY` variable is required to enable this behavior, as it is disabled by default to avoid unintended proxying. You can also use the `--use-env-proxy` flag when running your Node.js application to enable this feature without setting the environment variable:\n\n```shell\nnode --use-env-proxy your-app.js\n```\n\nYou can also proxy specific requests programmatically. While the Node.js `fetch()` API doesn't natively support proxying, it's built on top of the `undici` package[^3], which does. Since Node.js doesn't expose the `ProxyAgent` class[^4] directly, you'll first need to install `undici`:\n\n```shell\nnpm i undici\n```\n\nTo use a proxy, create a `ProxyAgent` instance and pass it as the `dispatcher` option:\n\n```js\nimport { ProxyAgent } from 'undici';\n\nconst agent = new ProxyAgent('http://username:password@proxy-server.com:8080');\n\nconst response = await fetch('https://api.example.com', {\n    dispatcher: agent\n});\n\nconst body = await response.json();\n```\n\n## Deno\n\nDeno's `fetch()` uses the `client` property in the options object to specify a proxy client:\n\n```js\nconst client = Deno.createHttpClient({\n  proxy: { url: \"http://username:password@proxy-server.com:8080\" },\n});\n\nconst response = await fetch(\"https://api.example.com\", { client });\nconst data = await response.json();\n\nclient.close(); // Remember to close the client when done\n```\n\nDespite its name, `Deno.createHttpClient()` also supports HTTPS proxies:\n\n```js\nconst client = Deno.createHttpClient({\n  proxy: { url: \"https://username:password@proxy-server.com:8080\" 
},\n});\n\nconst response = await fetch(\"https://api.example.com\", { client });\nconst data = await response.json();\n\nclient.close(); // Remember to close the client when done\n```\n\n## Bun\n\nBun provides native support for `fetch()` proxying via the `proxy` property:\n\n```js\nconst response = await fetch(\"https://api.example.com\", {\n  proxy: \"http://username:password@proxy-server.com:8080\"\n});\n\nconst body = await response.json();\n```\n\n## Cloudflare Workers\n\nThe Cloudflare Workers runtime does not natively support proxying `fetch()` requests through environment variables or programmatic options. However, you can work around this by using a Node.js Docker container to handle the requests.\n\nThe `@humanwhocodes/proxy-fetch-server`[^5] package is a small utility I created to simplify this. Here’s how to run it in a container:\n\n```dockerfile\nFROM node:22-slim\n\nWORKDIR /app\nRUN npm install -g @humanwhocodes/proxy-fetch-server@2\n\nEXPOSE 8080\nENV PORT=8080\n\nCMD [\"npx\", \"@humanwhocodes/proxy-fetch-server\"]\n```\n\nNext, define a container class:\n\n```js\nimport { Container } from \"@cloudflare/containers\";\n\nexport class ProxyFetchContainer extends Container {\n    defaultPort = 8080;\n}\n```\n\nUpdate your `wrangler.jsonc` file to include the binding:\n\n```jsonc\n{\n    \"name\": \"proxy-fetcher\",\n\n    \"containers\": [\n        {\n            \"class_name\": \"ProxyFetchContainer\",\n            \"image\": \"./Dockerfile\"\n        }\n    ],\n    \"durable_objects\": {\n        \"bindings\": [\n            {\n                \"name\": \"PROXY_FETCH_CONTAINER\",\n                \"class_name\": \"ProxyFetchContainer\"\n            }\n        ]\n    },\n    \"migrations\": [\n        {\n            \"tag\": \"v1\",\n            \"new_sqlite_classes\": [\n                \"ProxyFetchContainer\"\n            ]\n        }\n    ],\n}\n```\n\nFinally, access the proxy container in your code:\n\n```js\n// get and start container\nconst 
container = env.PROXY_FETCH_CONTAINER.getByName(\"proxy-fetch-server\");\n\nawait container.startAndWaitForPorts({\n    startOptions: {\n        envVars: {\n            FETCH_PROXY: \"http://username:password@proxy-server.com:8080\"\n        }\n    }\n});\n\n// Make request to the container\nconst containerResponse = await container.fetch(\n    new Request(\"http://container/\", {\n        method: \"POST\",\n        headers: {\n            \"Content-Type\": \"application/json\"\n        },\n        body: JSON.stringify({ url })\n    })\n);\n```\n\nWhile this requires more setup than other runtimes, it's a reliable solution for this use case.\n\n## Conclusion\n\nProxying `fetch()` requests is a common requirement in server-side development, whether for security, monitoring, or performance. While runtimes like Deno and Bun offer straightforward programmatic APIs, others like Node.js and Cloudflare Workers require a bit more legwork. Regardless of your choice, understanding these patterns ensures your applications can communicate reliably with the outside world. Give these methods a try in your next project; you'll find that handling proxies is just another essential tool in your server-side JavaScript toolkit.\n\n**Update (2026-03-04):** Added information about Node.js support for proxy-related environment variables.\n\n[^1]: [Fetch Standard](https://fetch.spec.whatwg.org/)\n[^2]: [Node.js Enterprise Network Configuration](https://nodejs.org/en/learn/http/enterprise-network-configuration)\n[^3]: [Undici](https://undici.nodejs.org)\n[^4]: [Issue #43187: Expose Undici ProxyAgent](https://github.com/nodejs/node/issues/43187)\n[^5]: [`@humanwhocodes/proxy-fetch-server`](https://npmjs.com/package/@humanwhocodes/proxy-fetch-server)\n","content_html":"&lt;p&gt;If you retrieve content from the internet, you’ll eventually need to proxy requests. 
Proxying is useful for logging traffic, modifying headers, altering content, improving performance through caching, or hiding an originating IP address. Most server-side JavaScript runtimes allow you to proxy requests using &lt;code&gt;fetch()&lt;/code&gt;.&lt;/p&gt;\n&lt;p&gt;The Fetch standard&lt;sup&gt;&lt;a href=&quot;#user-content-fn-1&quot; id=&quot;user-content-fnref-1&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; doesn’t specify request proxying because it’s a browser-focused API. Consequently, server-side runtimes implement proxying differently.&lt;/p&gt;\n&lt;h2 id=&quot;nodejs&quot;&gt;Node.js&lt;/h2&gt;\n&lt;p&gt;Node.js natively supports proxying &lt;code&gt;fetch()&lt;/code&gt; requests through environment variables as of Node.js v22.21.0 and v24.5.0&lt;sup&gt;&lt;a href=&quot;#user-content-fn-2&quot; id=&quot;user-content-fnref-2&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;. 
You can set the &lt;code&gt;HTTP_PROXY&lt;/code&gt; and &lt;code&gt;HTTPS_PROXY&lt;/code&gt; environment variables to specify the proxy server for HTTP and HTTPS requests, respectively:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;# Enable Node.js to use environment variables for proxying&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;export&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; NODE_USE_ENV_PROXY&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;1&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;export&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; HTTP_PROXY&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;http://username:password@proxy-server.com:8080&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;export&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; HTTPS_PROXY&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;https://username:password@proxy-server.com:8080&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;The &lt;code&gt;NODE_USE_ENV_PROXY&lt;/code&gt; variable is required to enable this behavior, as it is disabled by default to avoid unintended proxying. 
You can also use the &lt;code&gt;--use-env-proxy&lt;/code&gt; flag when running your Node.js application to enable this feature without setting the environment variable:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;node&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;--use-env-proxy&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;your-app.js&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;You can also proxy specific requests programmatically. While the Node.js &lt;code&gt;fetch()&lt;/code&gt; API doesn’t natively support proxying, it’s built on top of the &lt;code&gt;undici&lt;/code&gt; package&lt;sup&gt;&lt;a href=&quot;#user-content-fn-3&quot; id=&quot;user-content-fnref-3&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;, which does. 
Since Node.js doesn’t expose the &lt;code&gt;ProxyAgent&lt;/code&gt; class&lt;sup&gt;&lt;a href=&quot;#user-content-fn-4&quot; id=&quot;user-content-fnref-4&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; directly, you’ll first need to install &lt;code&gt;undici&lt;/code&gt;:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npm&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;i&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;undici&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;To use a proxy, create a &lt;code&gt;ProxyAgent&lt;/code&gt; instance and pass it as the &lt;code&gt;dispatcher&lt;/code&gt; option:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; { ProxyAgent } &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;from&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&apos;undici&apos;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;agent&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span 
style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;new&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;ProxyAgent&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&apos;http://username:password@proxy-server.com:8080&apos;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;response&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;fetch&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&apos;https://api.example.com&apos;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    dispatcher: agent&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;});&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;body&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span 
style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; response.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;json&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;();&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;h2 id=&quot;deno&quot;&gt;Deno&lt;/h2&gt;\n&lt;p&gt;Deno’s &lt;code&gt;fetch()&lt;/code&gt; uses the &lt;code&gt;client&lt;/code&gt; property in the options object to specify a proxy client:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;client&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; Deno.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;createHttpClient&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;({&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  proxy: { url: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;http://username:password@proxy-server.com:8080&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;});&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: 
#79B8FF&quot;&gt;response&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;fetch&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;https://api.example.com&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, { client });&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;data&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; response.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;json&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;();&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;client.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;close&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(); &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// Remember to close the client when done&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Despite its name, &lt;code&gt;Deno.createHttpClient()&lt;/code&gt; also supports HTTPS proxies:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; 
tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;client&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; Deno.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;createHttpClient&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;({&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  proxy: { url: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;https://username:password@proxy-server.com:8080&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;});&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;response&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;fetch&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;https://api.example.com&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, { client });&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span 
style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;data&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; response.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;json&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;();&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;client.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;close&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(); &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// Remember to close the client when done&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;h2 id=&quot;bun&quot;&gt;Bun&lt;/h2&gt;\n&lt;p&gt;Bun provides native support for &lt;code&gt;fetch()&lt;/code&gt; proxying via the &lt;code&gt;proxy&lt;/code&gt; property:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;response&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;fetch&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span 
style=&quot;color: #9ECBFF&quot;&gt;&quot;https://api.example.com&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  proxy: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;http://username:password@proxy-server.com:8080&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;});&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;body&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; response.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;json&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;();&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;h2 id=&quot;cloudflare-workers&quot;&gt;Cloudflare Workers&lt;/h2&gt;\n&lt;p&gt;The Cloudflare Workers runtime does not natively support proxying &lt;code&gt;fetch()&lt;/code&gt; requests through environment variables or programmatic options. However, you can work around this by using a Node.js Docker container to handle the requests.&lt;/p&gt;\n&lt;p&gt;The &lt;code&gt;@humanwhocodes/proxy-fetch-server&lt;/code&gt;&lt;sup&gt;&lt;a href=&quot;#user-content-fn-5&quot; id=&quot;user-content-fnref-5&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; package is a small utility I created to simplify this. 
Here’s how to run it in a container:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;FROM&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; node:22-slim&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;WORKDIR&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; /app&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;RUN&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; npm install -g @humanwhocodes/proxy-fetch-server@2&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;EXPOSE&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; 8080&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;ENV&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; PORT=8080&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;CMD&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; [&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;npx&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;@humanwhocodes/proxy-fetch-server&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Next, define a container class:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; 
tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; { Container } &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;from&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;@cloudflare/containers&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;export&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;class&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;ProxyFetchContainer&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;extends&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;Container&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;defaultPort&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;8080&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Update your &lt;code&gt;wrangler.jsonc&lt;/code&gt; file to include the binding:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code 
github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;{&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;proxy-fetcher&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;containers&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;class_name&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;ProxyFetchContainer&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;image&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;./Dockerfile&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    
],&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;durable_objects&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;bindings&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;PROXY_FETCH_CONTAINER&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;class_name&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;ProxyFetchContainer&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        ]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;migrations&quot;&lt;/span&gt;&lt;span style=&quot;color: 
#E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;tag&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;v1&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;new_sqlite_classes&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;ProxyFetchContainer&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            ]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    ],&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Finally, access the proxy container in your code:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// get and start container&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span 
style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;container&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; env.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;PROXY_FETCH_CONTAINER&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;getByName&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;proxy-fetch-server&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; container.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;startAndWaitForPorts&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;({&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    startOptions: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        envVars: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            FETCH_PROXY: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;http://username:password@proxy-server.com:8080&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;});&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span 
class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// Make request to the container&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;containerResponse&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; container.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;fetch&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;new&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;Request&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;http://container/&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        method: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;POST&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        headers: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;Content-Type&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: 
#9ECBFF&quot;&gt;&quot;application/json&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        body: &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;JSON&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;stringify&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;({ url })&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    })&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;While this requires more setup than other runtimes, it’s a reliable solution for this use case.&lt;/p&gt;\n&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;\n&lt;p&gt;Proxying &lt;code&gt;fetch()&lt;/code&gt; requests is a common requirement in server-side development, whether for security, monitoring, or performance. While runtimes like Deno and Bun offer straightforward programmatic APIs, others like Node.js and Cloudflare Workers require a bit more legwork. Regardless of your choice, understanding these patterns ensures your applications can communicate reliably with the outside world. 
Give these methods a try in your next project; you’ll find that handling proxies is just another essential tool in your server-side JavaScript toolkit.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Update (2026-03-04):&lt;/strong&gt; Added information about Node.js support for proxy-related environment variables.&lt;/p&gt;\n&lt;section data-footnotes=&quot;&quot; class=&quot;footnotes&quot;&gt;&lt;h2 class=&quot;sr-only&quot; id=&quot;footnote-label&quot;&gt;Footnotes&lt;/h2&gt;\n&lt;ol&gt;\n&lt;li id=&quot;user-content-fn-1&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://fetch.spec.whatwg.org/&quot;&gt;Fetch Standard&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-1&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-2&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://nodejs.org/en/learn/http/enterprise-network-configuration&quot;&gt;Node.js Enterprise Network Configuration&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-2&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-3&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://undici.nodejs.org&quot;&gt;Undici&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-3&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-4&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://github.com/nodejs/node/issues/43187&quot;&gt;Issue #43187: Expose Undici ProxyAgent&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-4&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-5&quot;&gt;\n&lt;p&gt;&lt;a 
href=&quot;https://npmjs.com/package/@humanwhocodes/proxy-fetch-server&quot;&gt;&lt;code&gt;@humanwhocodes/proxy-fetch-server&lt;/code&gt;&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-5&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;/ol&gt;\n&lt;/section&gt;","tags":["Fetch","Node.js","Cloudflare","Proxy"],"date_published":"2026-03-03T00:00:00.000Z","date_updated":"2026-03-04T00:00:00.000Z"},{"id":"https://humanwhocodes.com/snippets/2026/02/using-your-own-supabase-signing-keys/","url":"https://humanwhocodes.com/snippets/2026/02/using-your-own-supabase-signing-keys/","title":"Using your own Supabase signing keys","author":{"name":"Nicholas C. Zakas"},"summary":"Supabase generates JWT signing keys automatically, but you can also use your own if you want to share keys across multiple instances or generate tokens manually.","content_text":"\n[Supabase](https://supabase.com) uses JWT signing keys to sign authentication tokens. By default, Supabase generates a new signing key on startup, but you can also use your own signing keys. This is useful if you want to generate your own tokens or if you want to use the same signing keys across multiple Supabase instances.\n\n## Generate a signing key\n\nFirst, generate a new signing key using the Supabase CLI:\n\n```shell\nnpx supabase gen signing-key --algorithm ES256\n```\n\nThis will output a new signing key in JSON format to the terminal.\n\n**Note:** If you already have a `supabase/signing_key.json` file, the CLI will ask if you want to overwrite it. If you want to keep the existing signing key, you can choose \"No\" and the CLI will not output a new signing key.\n\n## Local Development\n\nFor local development, save the signing key in your `supabase` directory, such as `supabase/signing_key.json`. 
Supabase requires the local signing key to be contained in an array, so you need to wrap the output in square brackets. For example, if your output looks like this:\n\n```json\n{\n  \"kty\": \"EC\",\n  \"d\": \"N8sXo9n2e5Z19sXo9n2e5Z1\",\n  \"use\": \"sig\",\n  \"crv\": \"P-256\",\n  \"kid\": \"my-key-id\",\n  \"x\": \"f83OJ3D2xF4\",\n  \"y\": \"x_FEzRu9c\"\n}\n```\n\nYou need to wrap it like this:\n\n```json\n[\n  {\n    \"kty\": \"EC\",\n    \"d\": \"N8sXo9n2e5Z19sXo9n2e5Z1\",\n    \"use\": \"sig\",\n    \"crv\": \"P-256\",\n    \"kid\": \"my-key-id\",\n    \"x\": \"f83OJ3D2xF4\",\n    \"y\": \"x_FEzRu9c\"\n  }\n]\n```\n\nSave this into a file called `supabase/signing_key.json`.\n\nThen, edit the `config.toml` file and add the following lines to the `[auth]` section:\n\n```toml\n[auth]\nsigning_keys_path = \"./signing_key.json\"\n```\n\nThis will tell Supabase to use your signing key instead of generating a new one on startup. After editing `config.toml`, you need to restart Supabase to reload the configuration:\n\n```shell\nnpx supabase stop\nnpx supabase start\n```\n\n## Hosted Supabase\n\nIf you are using the hosted version of Supabase, you can set the signing keys in the Supabase dashboard. Go to the \"Settings\" tab, then click on \"JWT Signing Keys\".\n\nIf you already have a standby key, you'll need to remove it before you can add a new one. To remove a standby key, click the three dots next to the key and select \"Move to previously used\". After removing the existing standby key, you can add your new signing key as described above.\n\n**Important:** This signing key must *not* be in an array. It should be the raw JSON object that was generated by the Supabase CLI. \n\nClick \"Create Standby Key\". In the dialog, select \"Import an existing key\" and paste in your previously generated signing key. Click the \"Create Standby Key\" button to save the new signing key.\n\nClick \"Rotate Keys\" to make the new signing key active. 
This will rotate the keys and make the new signing key the active key for signing tokens.\n\n## Important Notes\n\n* The Supabase CLI-generated signing key contains both `verify` and `sign` keys because Supabase itself needs to do both. However, some tools like [`jose`](https://npmjs.com/package/jose) will fail signing if the object contains a `verify` key. If you encounter this issue, you can remove the `verify` key from the signing key JSON file before using it with `jose`.\n* If your app has users with a persisted session, changing the signing key will invalidate all existing tokens. This means that users will need to log in again to obtain new tokens signed with the new key. Make sure to communicate this change to your users if you are changing the signing key in a production environment. If you're using the JavaScript client, you can call `supabase.auth.refreshSession()` to refresh the session and obtain a new token without requiring the user to log in again.\n* You can tell if a user has an invalid JWT by checking the `error` property of the user object returned by `supabase.auth.getUser()`. If the JWT is invalid, the `error` property will contain a `code` property of `\"bad_jwt\"`.\n\n ","content_html":"&lt;p&gt;&lt;a href=&quot;https://supabase.com&quot;&gt;Supabase&lt;/a&gt; uses JWT signing keys to sign authentication tokens. By default, Supabase generates a new signing key on startup, but you can also use your own signing keys. 
This is useful if you want to generate your own tokens or if you want to use the same signing keys across multiple Supabase instances.&lt;/p&gt;\n&lt;h2 id=&quot;generate-a-signing-key&quot;&gt;Generate a signing key&lt;/h2&gt;\n&lt;p&gt;First, generate a new signing key using the Supabase CLI:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;supabase&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;gen&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;signing-key&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;--algorithm&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;ES256&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;This will output a new signing key in JSON format to the terminal.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you already have a &lt;code&gt;supabase/signing_key.json&lt;/code&gt; file, the CLI will ask if you want to overwrite it. If you want to keep the existing signing key, you can choose “No” and the CLI will not output a new signing key.&lt;/p&gt;\n&lt;h2 id=&quot;local-development&quot;&gt;Local Development&lt;/h2&gt;\n&lt;p&gt;For local development, save the signing key in your &lt;code&gt;supabase&lt;/code&gt; directory, such as &lt;code&gt;supabase/signing_key.json&lt;/code&gt;. Supabase requires the local signing key to be contained in an array, so you need to wrap the output in square brackets. 
For example, if your output looks like this:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;{&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;kty&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;EC&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;d&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;N8sXo9n2e5Z19sXo9n2e5Z1&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;use&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;sig&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;crv&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;P-256&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: 
#79B8FF&quot;&gt;&quot;kid&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;my-key-id&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;x&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;f83OJ3D2xF4&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;y&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;x_FEzRu9c&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;You need to wrap it like this:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;[&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;kty&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;EC&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    
&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;d&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;N8sXo9n2e5Z19sXo9n2e5Z1&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;use&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;sig&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;crv&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;P-256&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;kid&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;my-key-id&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;x&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;f83OJ3D2xF4&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: 
#79B8FF&quot;&gt;&quot;y&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;x_FEzRu9c&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Save this into a file called &lt;code&gt;supabase/signing_key.json&lt;/code&gt;.&lt;/p&gt;\n&lt;p&gt;Then, edit the &lt;code&gt;config.toml&lt;/code&gt; file and add the following lines to the &lt;code&gt;[auth]&lt;/code&gt; section:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;[&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;auth&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;signing_keys_path = &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;./signing_key.json&quot;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;This will tell Supabase to use your signing key instead of generating a new one on startup. 
After editing &lt;code&gt;config.toml&lt;/code&gt;, you need to restart Supabase to reload the configuration:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;supabase&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;stop&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;supabase&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;start&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;h2 id=&quot;hosted-supabase&quot;&gt;Hosted Supabase&lt;/h2&gt;\n&lt;p&gt;If you are using the hosted version of Supabase, you can set the signing keys in the Supabase dashboard. Go to the “Settings” tab, then click on “JWT Signing Keys”.&lt;/p&gt;\n&lt;p&gt;If you already have a standby key, you’ll need to remove it before you can add a new one. To remove a standby key, click the three dots next to the key and select “Move to previously used”. After removing the existing standby key, you can add your new signing key as described above.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; This signing key must &lt;em&gt;not&lt;/em&gt; be in an array. It should be the raw JSON object that was generated by the Supabase CLI.&lt;/p&gt;\n&lt;p&gt;Click “Create Standby Key”. In the dialog, select “Import an existing key” and paste in your previously generated signing key. 
Click the “Create Standby Key” button to save the new signing key.&lt;/p&gt;\n&lt;p&gt;Click “Rotate Keys” to make the new signing key active. This will rotate the keys and make the new signing key the active key for signing tokens.&lt;/p&gt;\n&lt;h2 id=&quot;important-notes&quot;&gt;Important Notes&lt;/h2&gt;\n&lt;ul&gt;\n&lt;li&gt;\n&lt;p&gt;The Supabase CLI-generated signing key contains both &lt;code&gt;verify&lt;/code&gt; and &lt;code&gt;sign&lt;/code&gt; keys because Supabase itself needs to do both. However, some tools like &lt;a href=&quot;https://npmjs.com/package/jose&quot;&gt;&lt;code&gt;jose&lt;/code&gt;&lt;/a&gt; will fail signing if the object contains a &lt;code&gt;verify&lt;/code&gt; key. If you encounter this issue, you can remove the &lt;code&gt;verify&lt;/code&gt; key from the signing key JSON file before using it with &lt;code&gt;jose&lt;/code&gt;.&lt;/p&gt;\n&lt;/li&gt;\n&lt;li&gt;\n&lt;p&gt;If your app has users with a persisted session, changing the signing key will invalidate all existing tokens. This means that users will need to log in again to obtain new tokens signed with the new key. Make sure to communicate this change to your users if you are changing the signing key in a production environment. If you’re using the JavaScript client, you can call &lt;code&gt;supabase.auth.refreshSession()&lt;/code&gt; to refresh the session and obtain a new token without requiring the user to log in again.&lt;/p&gt;\n&lt;/li&gt;\n&lt;li&gt;\n&lt;p&gt;You can tell if a user has an invalid JWT by checking the &lt;code&gt;error&lt;/code&gt; property of the user object returned by &lt;code&gt;supabase.auth.getUser()&lt;/code&gt;. 
If the JWT is invalid, the &lt;code&gt;error&lt;/code&gt; property will contain a &lt;code&gt;code&lt;/code&gt; property of &lt;code&gt;&quot;bad_jwt&quot;&lt;/code&gt;.&lt;/p&gt;\n&lt;/li&gt;\n&lt;/ul&gt;","tags":["Supabase","JWT","Security"],"date_published":"2026-02-16T00:00:00.000Z","date_updated":"2026-02-16T00:00:00.000Z"},{"id":"https://humanwhocodes.com/blog/2026/02/artifacts-ai-assisted-programming/","url":"https://humanwhocodes.com/blog/2026/02/artifacts-ai-assisted-programming/","title":"The importance of artifacts in AI-assisted programming","author":{"name":"Nicholas C. Zakas"},"summary":"Your AI pair programmer has no memory, which is why proper documentation matters more than ever in AI-assisted development.","content_text":"\nMuch of the hype around AI-assisted programming focuses on \"vibe coding,\" which means working prompt by prompt to build an application without worrying too much about the generated code. As long as something works, that's all that matters. While this approach works well for personal and hobby projects, it's impractical for professional software engineering.\n\nProfessional software engineering involves a team of developers working on a shared codebase that's maintained over time. This creates concerns that go beyond whether the code is working. Significant changes need to be documented to provide traceability when something goes wrong. *Traceability* is the ability to look through changes and track them back to their root cause, whether that's a bug fix, a requirements change, or a new feature. Understanding why a change was made, especially one that caused a problem, is essential for maintaining an application over time. That's why professional software organizations prioritize creating artifacts as the application evolves.\n\n## What are artifacts?\n\nAn *artifact* is any output that's created, used, or modified during the software development process. 
Issues and pull requests are artifacts, as are diagrams, documents, mockups, and the code itself. Most software teams generate a large number of artifacts, which represent an important part of the process. When something goes wrong, artifacts provide traceability to help determine the origin of the problem.\n\nWhen I was learning software development in college, the focus was on front-loading document artifacts to ensure you knew what you were building. We had to create a functional requirements document (FRD, now often called a product requirements document or PRD) describing what the project or feature would do. Then we had to create a technical specification describing the overall design of the code changes. Only once these two documents were complete could we write any code. This is called the *waterfall approach*, where each stage must be completed before the next can begin.\n\nWith agile methodologies becoming more popular, many teams saw this as license to forego the PRDs and tech specs, and instead \"just start coding.\" The focus on iterative development, where software is never truly \"complete,\" meant that these document artifacts took precious time away from the development process. Many teams convinced themselves that two-sentence issue summaries and face-to-face discussions with product managers were enough to implement good software. Documents like PRDs and tech specs became rare in technology companies (outside of high-level architecture and infrastructure discussions, which often had extensive specifications and diagrams). The argument was that if you had a question about why something was done a particular way, you could just ask the person who did it.\n\n## Your AI teammate's forgetfulness\n\nWhen AI-assisted programming began, developers quickly realized that the models didn't always produce reliable results. Far from deterministic, the models may produce wildly different output when prompts aren't specific enough. 
AI-powered IDEs do their best to provide additional context to the models to make output more reliable, yet developers continue to find that longer prompts produce better results.\n\nHere's the problem: Your AI pair programmer has a very short memory. It can't tell you why it made a decision six months ago that brought down your server today. Human programmers are both implementers and stores of knowledge, while AI is only an implementer. The model doesn't provide traceability beyond the scope of its current context window. If you \"vibe coded\" an implementation and your server goes down, there's no way to tell why. Did the model generate insecure code that left the server vulnerable to attack? If so, why? Did it not know the code had to be secure? Was it something in the original prompt that led it to believe the code didn't need to be secure? There's no way to know what part of the process broke down, which means additional time spent before the problem can be addressed and a higher likelihood of a similar problem in the future.\n\nWith the proper artifacts available, however, the investigation proceeds quickly and systematically. Because some artifacts are used directly as prompts, it's much easier to discover where an ambiguity might have led a model to make an incorrect decision. 
The type of artifacts you use and the level of detail in each are important for this process.\n\n## Artifacts for AI-assisted programming\n\nWhile there are many different artifacts that can be useful for AI-assisted programming, I’ve found the following to be among the most important:\n\n* Product requirements documents (PRDs)  \n* Architectural decision records (ADRs)  \n* Technical design documents (TDDs)  \n* Task lists\n\nThese artifacts allow you to step back through the development process and, if there’s a problem, better determine what went wrong and how to address it.\n\n**Recommendation:** Store these artifacts in your source control system for easy reference and to track changes as they’re made.\n\n## Product requirements documents (PRDs)\n\nA product requirements document[^1] tells you *what* you're building and *why* you're building it without digging into *how* it will be built. In traditional software engineering, a product manager conducts research to discover what use cases need to be addressed by a feature or product. This may involve user studies, interviews, market research, or other fact-finding processes. The product manager then synthesizes the data into a set of requirements to address the findings and presents that information to the software engineering manager or lead for review and implementation.\n\nPRD formats vary from organization to organization, but they generally contain this type of information:\n\n* **Background** \\- Contains necessary context to understand what the PRD is describing. This may be a problem statement or an explanation of a use case to be addressed.  \n* **Goals** \\- What the requirements are meant to address.  \n* **Target Audience/Personas** \\- The intended users of the functionality.  \n* **Functional Requirements** \\- Describes what the changes do.  \n* **User Stories** \\- Describes what a given user or persona experiences when using this functionality. 
(For example, “As a user, I want to see my task list when I log in.”)  \n* **Constraints** \\- Factors that limit the solution space or otherwise constrain what can be accomplished with this PRD.  \n* **Success Metrics** \\- The measurable data indicating that the goals have been achieved.\n\nYou can review the PRD at any point to ensure that the technical design and implementation are proceeding correctly. This is especially helpful when you’re deep into implementation and there’s disagreement on whether the functionality works as intended.\n\nUse AI to:\n\n* Create the PRD from a feature or product description  \n* Review the PRD for inconsistencies or ambiguities  \n* Ensure alignment of goals and features\n\nIn AI-assisted programming, the PRD is a key input the AI uses to develop a technical design. If there are errors in the PRD, then the ensuing technical design will also contain errors, so it’s important to review a PRD thoroughly to catch any problems. It can be helpful to use one AI to write the PRD and another to review it for inconsistencies or ambiguities.\n\n## Architectural decision records (ADRs)\n\nAn architectural decision record[^2] explains *why* a technical choice was made. While the word “architectural” is in the name, ADRs don’t need to be tied specifically to architectural decisions. Any given project has the potential to yield a number of important technical decisions, such as:\n\n* Do we use a SQL or NoSQL database?  \n* Which cloud infrastructure provider do we use?  \n* Which framework do we use?  \n* What programming language do we use for APIs?\n\nThese decisions are made by humans and likely will remain the domain of humans for quite some time, as these choices often depend on business context to be made correctly.\n\nAn ADR typically has the following sections:\n\n* **Title** \\- A brief description of the decision.  \n* **Status** \\- Whether the decision is pending, approved, deprecated, or superseded.  
\n* **Context** \\- Any relevant background information about the decision, including references to other artifacts.  \n* **Decision** \\- The decision that was made.  \n* **Consequences** \\- Both the positive and negative consequences of the decision.  \n* **Alternatives Considered** \\- Any other options and the reasons for rejection.\n\nADRs are especially beneficial because they remain immutable. The reason why a decision was made doesn’t change. Even as PRDs and TDDs are corrected during implementation to clear up inconsistencies, the ADRs remain as they were. The only potential change is when an ADR status changes (from approved to deprecated or superseded).\n\nUse AI to:\n\n* Generate a first draft of an ADR based on a decision  \n* Propose alternatives during the decision-making process  \n* Identify potential consequences  \n* Find related ADRs\n\nNot all features require ADRs, especially when building on top of an existing framework or architecture. However, when a significant technical decision is made, documenting it as an ADR provides additional inputs for generating a good TDD and traceability when reviewing the TDD.\n\n## Technical design documents (TDDs)\n\nA technical design document[^3] describes *how* the functionality described in a PRD is implemented. In some companies, this is called an architectural design document or technical specification, but the underlying purpose is the same: to give a high-level overview of the software solution that fulfills the requirements of the PRD. In traditional software engineering, a tech lead or architect is responsible for reviewing a PRD and developing an appropriate technical design. The design is then reviewed by other developers to gather feedback and ensure the scope is correct.\n\nTypical sections in a TDD include:\n\n* **Overview** \\- What the design is meant to cover.  \n* **Goals** \\- Specific technical goals to achieve with the design. 
(As opposed to functional or business goals, which are described in the PRD.)  \n* **Background/Context** \\- Any relevant technical information related to the design, such as existing infrastructure or design patterns to be followed.  \n* **Requirements** \\- Specific functional (what the system should do) and non-functional (how the system should behave) technical requirements. Non-functional requirements include security, scalability, performance, and reliability.  \n* **High-Level Design/Architecture** \\- Describes the technical resources required and how they relate to one another.  \n* **Data Design** \\- Any new or existing data sources required. Traditionally, this would involve an entity-relationship diagram.  \n* **API Design** \\- The interaction points between existing systems or users and the new design.  \n* **Technology Stack** \\- The languages, frameworks, tools, infrastructure, etc., necessary to implement the design.  \n* **Security Considerations** \\- Guidelines for how security best practices are followed.  \n* **Scalability/Performance** \\- How the system will work under heavy load, including anticipated performance bottlenecks and mitigations.  \n* **Testing Strategy** \\- What types of tests (unit, integration, etc.) will be used.  \n* **Deployment Plan** \\- How the system will be made available to users.  \n* **Monitoring/Observability** \\- How you’ll know if the system is functioning correctly once deployed.  \n* **Out of Scope** \\- Anything that’s explicitly not included in the design and won’t be completed as part of this work. This ensures time isn’t wasted by someone (or an AI) inferring a requirement that wasn’t explicitly stated.  \n* **Alternatives Considered** \\- Any other approaches that were considered and discarded. 
This helps answer the question, “Did you think about X?” when it inevitably comes up later.\n\nThe overall purpose of a TDD is to ensure that all the complexities of a software design are taken into account up front, before implementation begins. Catching problems at this stage is significantly cheaper than discovering the same problems during implementation, in which case you often need to undo work.\n\nUse AI to:\n\n* Create the TDD from the PRD and any relevant ADRs.  \n* Review the TDD against the PRD to look for inconsistencies, omissions, and errors.  \n* Simultaneously update the PRD and TDD to resolve any issues found.\n\nIn AI-assisted programming, the TDD is the last stop before implementation planning, so it’s critical to catch any problems with the technical design here. AI should be able to implement the design using the TDD in conjunction with the PRD and the codebase (IDEs automatically pass the codebase as context to AI). However, it’s often best to generate a task list from the TDD instead of jumping right into implementation.\n\n## Task lists\n\nIn traditional software engineering, the next step after creating a TDD is for the tech lead, engineering manager, and potentially other engineers to break down the design into a series of tasks. The tasks are then scoped to determine the effort required for each task (and potentially a time estimate) and arranged in sequence to ensure that each task can proceed without being blocked by incomplete tasks. The tasks are then assigned to different engineers on the team for implementation.\n\nFor AI-assisted programming, it’s helpful to add even more context than a human would need to complete a task. For human programming, tasks frequently contain a small amount of information and then refer back to the TDD. For AI implementation, each task should contain:\n\n* **Overview/Description** \\- The purpose of this task.  \n* **Deliverable** \\- The expected output of the task.  
\n* **Dependencies** \\- How it relates to other tasks.  \n* **Acceptance Criteria** \\- How to verify that the task is complete and working as expected.  \n* **References** \\- Specific sections in the PRD and TDD that this task addresses.  \n* **Out of Scope** \\- Anything you specifically don’t want completed as part of this task.  \n* **Tips** \\- Anything additional that will help a human or AI implement the task (libraries to use, existing code to reuse, resources needed, etc.).\n\nUse AI to:\n\n* Convert a TDD into a list of tasks.  \n* Estimate the relative effort associated with each task.  \n* Create issues or tickets in your task management software for each task.  \n* Verify that the task list completely implements the TDD.\n\nIn AI-assisted programming, generating a task list from the TDD makes it easier to:\n\n1. Validate that the entire TDD is represented in the tasks.  \n2. Catch gaps or ambiguities in the TDD.  \n3. Use AI to implement one task at a time to reduce context size and verify task completion.  \n4. Pause and resume work as necessary.  \n5. Iterate on individual tasks when the output isn’t as expected.  \n6. Track implementation progress, especially across multiple days or sessions.\n\n## Example: “Save for later” feature\n\nA “save for later” feature was implemented by AI as part of an online shopping cart. Users are complaining that the items they save are disappearing soon after they’re added, often within one day. The team is confused: everyone agrees this isn’t the desired behavior, but no one knows why it’s happening.\n\nThe investigation starts by looking at the PRD, which states the following:\n\n* **Requirement**: Users should be able to save items for later and access them anytime they return to the site.  \n* **Success metric**: Increase conversion rate by letting users defer purchase decisions.  
\n* **User story**: As a shopper, I want to save items so I can think about purchases without losing track of products I'm interested in.\n\nThe PRD clearly states that saved items should be accessible “anytime they return to the site.” That means the data is meant to persist and the implementation is out of alignment with the PRD. The next step is to review the TDD.\n\nThe TDD references two ADRs:\n\n* The *Storage for “save for later” items* ADR indicates that PostgreSQL will be used to store the data. PostgreSQL is a persistent data storage mechanism, so we know the data isn’t missing, just unavailable.  \n* The *Caching for “save for later” items* ADR indicates that Redis will be used as an in-memory cache for “save for later” items to speed up UI rendering.\n\nFurther down in the TDD:\n\n* **Data design:** Saved items are stored in the PostgreSQL `saved_items` table and Redis with key pattern `saved:{user_id}`.  \n* **Implementation details:**  \n  * Default Redis TTL: 86400 seconds (24 hours) to prevent unbounded growth.  \n  * `POST /saved_items` endpoint writes to both PostgreSQL and Redis.  \n  * `GET /saved_items` endpoint reads from Redis for better performance.\n\nWhile each piece of the TDD is technically correct, there’s a key missing detail: the `GET /saved_items` endpoint needs to implement a read-through cache, so that if the data doesn’t exist in Redis, it’s fetched from PostgreSQL and the cache is repopulated. Users are seeing their saved items disappear not because they’re being deleted, but because their Redis entries expire after 24 hours and the endpoint never falls back to PostgreSQL.\n\nAI missed this crucial detail when generating the TDD. In all likelihood, human reviewers didn’t catch the omission because a read-through cache was implied by the surrounding details. 
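To make the omission concrete, here's a minimal sketch of the read-through behavior the TDD should have specified. The `saved:{user_id}` key pattern and the 24-hour TTL come from the TDD; the function names and the injected `cache`/`db` objects are hypothetical stand-ins (in production, `cache` would be a redis-py client, whose `setex` takes the key, TTL, and value in that order, and the items would need to be serialized, e.g. as JSON, since Redis stores strings):

```python
# Hypothetical sketch of the read-through cache missing from the TDD.
CACHE_TTL_SECONDS = 86_400  # 24 hours, per the TDD


def get_saved_items(user_id, cache, db):
    """Serve saved items from the cache, falling back to PostgreSQL."""
    key = f"saved:{user_id}"
    items = cache.get(key)
    if items is None:  # cache miss: the entry expired or was never written
        items = db.fetch_saved_items(user_id)       # durable source of truth
        cache.setex(key, CACHE_TTL_SECONDS, items)  # repopulate the cache
    return items


# In-memory stand-ins so the sketch runs without Redis or PostgreSQL.
class FakeCache:
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def setex(self, key, ttl, value):  # mirrors redis-py's argument order
        self.store[key] = value


class FakeDB:
    def fetch_saved_items(self, user_id):
        return ["wireless keyboard", "desk lamp"]
```

With this in place, an expired Redis entry is just a slower read rather than data loss: the first `GET` after expiry hits PostgreSQL and rewrites the `saved:{user_id}` key.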
When AI generated a task list from the TDD, a read-through cache was never explicitly planned for or implemented.\n\nIn this case, the TDD generation needed information in addition to the PRD and ADRs to generate the correct design. The team can update the prompt used for this portion of the development process to reduce the likelihood of a similar error of omission in the future. For example, a document describing how the team typically uses read-through caches for PostgreSQL and Redis can be added to the context of TDD generation.\n\nWithout the artifacts to review, it would be difficult to track down the source of the error and prevent similar errors in the future.\n\n## Conclusion\n\nAI-assisted programming offers tremendous potential to accelerate software development, but it requires a fundamental shift in how we approach the development process. The \"vibe coding\" approach that works for quick prototypes becomes a liability in professional settings where maintainability, traceability, and long-term success matter. Unlike human developers who remember why decisions were made, AI has no memory beyond its current context window, making comprehensive documentation essential rather than optional.\n\nBy creating and maintaining proper artifacts (such as PRDs, ADRs, TDDs, and task lists), you build a knowledge foundation that compensates for AI's lack of memory. These artifacts serve dual purposes: they guide AI toward better implementations during development and provide the investigative trail you need when problems occur. When your server goes down at 3 a.m., these artifacts let you quickly trace the issue back to its source, whether that's an ambiguous requirement, a flawed technical decision, or a gap in the implementation. 
They transform AI from an unpredictable code generator into a reliable development partner.\n\nThe investment in documentation may seem like it slows down initial development, but it pays dividends when you need to debug problems, onboard new team members, or understand decisions made months or years ago. In a world where AI is increasingly involved in writing code, the artifacts you create become even more valuable than the code itself. You'll likely find that the process not only improves your AI-generated code but also clarifies your own thinking about the problem you're solving. Start with your next feature and experience the difference proper artifacts make.\n\n[^1]: [Product requirements](https://www.atlassian.com/agile/product-management/requirements)\n[^2]: [Documenting Architecture Decisions](https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions)\n[^3]: [Technical Design Document Template](https://www.chatprd.ai/templates/technical-design-document-template)\n","content_html":"&lt;p&gt;Much of the hype around AI-assisted programming focuses on “vibe coding,” which means working prompt by prompt to build an application without worrying too much about the generated code. As long as something works, that’s all that matters. While this approach works well for personal and hobby projects, it’s impractical for professional software engineering&lt;/p&gt;\n&lt;p&gt;Professional software engineering involves a team of developers working on a shared codebase that’s maintained over time. This creates concerns that go beyond whether the code is working. Significant changes need to be documented to provide traceability when something goes wrong. &lt;em&gt;Traceability&lt;/em&gt; is the ability to look through changes and track them back to their root cause, whether that’s a bug fix, a requirements change, or a new feature. Understanding why a change was made, especially one that caused a problem, is essential for maintaining an application over time. 
That’s why professional software organizations prioritize creating artifacts as the application evolves.&lt;/p&gt;\n&lt;h2 id=&quot;what-are-artifacts&quot;&gt;What are artifacts?&lt;/h2&gt;\n&lt;p&gt;An &lt;em&gt;artifact&lt;/em&gt; is any output that’s created, used, or modified during the software development process. Issues and pull requests are artifacts, as are diagrams, documents, mockups, and the code itself. Most software teams generate a large number of artifacts, which represent an important part of the process. When something goes wrong, artifacts provide traceability to help determine the origin of the problem.&lt;/p&gt;\n&lt;p&gt;When I was learning software development in college, the focus was on front-loading document artifacts to ensure you knew what you were building. We had to create a functional requirements document (FRD, now often called a product requirements document or PRD) describing what the project or feature would do. Then we had to create a technical specification describing the overall design of the code changes. Only once these two documents were complete could we write any code. This is called the &lt;em&gt;waterfall approach&lt;/em&gt;, where each stage must be completed before the next can begin.&lt;/p&gt;\n&lt;p&gt;With agile methodologies becoming more popular, many teams saw this as license to forego the PRDs and tech specs, and instead “just start coding.” The focus on iterative development, where software is never truly “complete,” meant that these document artifacts took precious time away from the development process. Many teams convinced themselves that two-sentence issue summaries and face-to-face discussions with product managers were enough to implement good software. Documents like PRDs and tech specs became rare in technology companies (outside of high-level architecture and infrastructure discussions, which often had extensive specifications and diagrams). 
The argument was that if you had a question about why something was done a particular way, you could just ask the person who did it.&lt;/p&gt;\n&lt;h2 id=&quot;your-ai-teammates-forgetfulness&quot;&gt;Your AI teammate’s forgetfulness&lt;/h2&gt;\n&lt;p&gt;When AI-assisted programming began, developers quickly realized that the models didn’t always produce reliable results. Far from deterministic, the models may produce wildly different output when prompts aren’t specific enough. AI-powered IDEs do their best to provide additional context to the models to make output more reliable, yet developers continue to find that longer prompts produce better results.&lt;/p&gt;\n&lt;p&gt;Here’s the problem: Your AI pair programmer has a very short memory. It can’t tell you why it made a decision six months ago that brought down your server today. Human programmers are both implementers and stores of knowledge, while AI is only an implementer. The model doesn’t provide traceability beyond the scope of its current context window. If you “vibe coded” an implementation and your server goes down, there’s no way to tell why. Did the model generate insecure code that left the server vulnerable to attack? If so, why? Did it not know the code had to be secure? Was it something in the original prompt that led it to believe the code didn’t need to be secure? There’s no way to know what part of the process broke down, which means additional time spent before the problem can be addressed and a higher likelihood of a similar problem in the future.&lt;/p&gt;\n&lt;p&gt;With the proper artifacts available, however, the investigation proceeds quickly and systematically. Because some artifacts are used directly as prompts, it’s much easier to discover where an ambiguity might have led a model to make an incorrect decision. 
The type of artifacts you use and the level of detail in each are important for this process.&lt;/p&gt;\n&lt;h2 id=&quot;artifacts-for-ai-assisted-programming&quot;&gt;Artifacts for AI-assisted programming&lt;/h2&gt;\n&lt;p&gt;While there are many different artifacts that can be useful for AI-assisted programming, I’ve found the following to be among the most important:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;Product requirements documents (PRDs)&lt;/li&gt;\n&lt;li&gt;Architectural decision records (ADRs)&lt;/li&gt;\n&lt;li&gt;Technical design documents (TDDs)&lt;/li&gt;\n&lt;li&gt;Task lists&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;These artifacts allow you to step back through the development process and, if there’s a problem, better determine what went wrong and how to address it.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Recommendation:&lt;/strong&gt; Store these artifacts in your source control system for easy reference and to track changes as they’re made.&lt;/p&gt;\n&lt;h3 id=&quot;product-requirements-documents-prds&quot;&gt;Product requirements documents (PRDs)&lt;/h3&gt;\n&lt;p&gt;A product requirements document&lt;sup&gt;&lt;a href=&quot;#user-content-fn-1&quot; id=&quot;user-content-fnref-1&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; tells you &lt;em&gt;what&lt;/em&gt; you’re building and &lt;em&gt;why&lt;/em&gt; you’re building it without digging into &lt;em&gt;how&lt;/em&gt; it will be built. In traditional software engineering, a product manager conducts research to discover what use cases need to be addressed by a feature or product. This may involve user studies, interviews, market research, or other fact-finding processes. 
The product manager then synthesizes the data into a set of requirements to address the findings and presents that information to the software engineering manager or lead for review and implementation.&lt;/p&gt;\n&lt;p&gt;PRD formats vary from organization to organization, but they generally contain this type of information:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Background&lt;/strong&gt; - Contains necessary context to understand what the PRD is describing. This may be a problem statement or an explanation of a use case to be addressed.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Goals&lt;/strong&gt; - What the requirements are meant to address.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Target Audience/Personas&lt;/strong&gt; - The intended users of the functionality.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Functional Requirements&lt;/strong&gt; - Describes what the changes do.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;User Stories&lt;/strong&gt; - Describes what a given user or persona experiences when using this functionality. (For example, “As a user, I want to see my task list when I log in.”)&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Constraints&lt;/strong&gt; - Factors that limit the solution space or otherwise constrain what can be accomplished with this PRD.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Success Metrics&lt;/strong&gt; - The measurable data indicating that the goals have been achieved.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;You can review the PRD at any point to ensure that the technical design and implementation are proceeding correctly. 
This is especially helpful when you’re deep into implementation and there’s disagreement on whether the functionality works as intended.&lt;/p&gt;\n&lt;p&gt;Use AI to:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;Create the PRD from a feature or product description&lt;/li&gt;\n&lt;li&gt;Review the PRD for inconsistencies or ambiguities&lt;/li&gt;\n&lt;li&gt;Ensure alignment of goals and features&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;In AI-assisted programming, the PRD is a key input the AI uses to develop a technical design. If there are errors in the PRD, then the ensuing technical design will also contain errors, so it’s important to review a PRD thoroughly to catch any problems. It can be helpful to use one AI to write the PRD and another to review it for inconsistencies or ambiguities.&lt;/p&gt;\n&lt;h2 id=&quot;architectural-decision-records-adrs&quot;&gt;Architectural decision records (ADRs)&lt;/h2&gt;\n&lt;p&gt;An architectural decision record&lt;sup&gt;&lt;a href=&quot;#user-content-fn-2&quot; id=&quot;user-content-fnref-2&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; explains &lt;em&gt;why&lt;/em&gt; a technical choice was made. While the word “architectural” is in the name, ADRs don’t need to be tied specifically to architectural decisions. 
Any given project has the potential to yield a number of important technical decisions, such as:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;Do we use a SQL or NoSQL database?&lt;/li&gt;\n&lt;li&gt;Which cloud infrastructure provider do we use?&lt;/li&gt;\n&lt;li&gt;Which framework do we use?&lt;/li&gt;\n&lt;li&gt;What programming language do we use for APIs?&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;These decisions are made by humans and likely will remain the domain of humans for quite some time, as these choices often depend on business context to be made correctly.&lt;/p&gt;\n&lt;p&gt;An ADR typically has the following sections:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Title&lt;/strong&gt; - A brief description of the decision.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Status&lt;/strong&gt; - Whether the decision is pending, approved, deprecated, or superseded.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Context&lt;/strong&gt; - Any relevant background information about the decision, including references to other artifacts.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Decision&lt;/strong&gt; - The decision that was made.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Consequences&lt;/strong&gt; - Both the positive and negative consequences of the decision.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Alternatives Considered&lt;/strong&gt; - Any other options and the reasons for rejection.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;ADRs are especially beneficial because they remain immutable. The reason why a decision was made doesn’t change. Even as PRDs and TDDs are corrected during implementation to clear up inconsistencies, the ADRs remain as they were. 
The only potential change is when an ADR status changes (from accepted to deprecated or superseded).&lt;/p&gt;\n&lt;p&gt;Use AI to:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;Generate a first draft of an ADR based on a decision&lt;/li&gt;\n&lt;li&gt;Propose alternatives during the decision-making process&lt;/li&gt;\n&lt;li&gt;Identify potential consequences&lt;/li&gt;\n&lt;li&gt;Find related ADRs&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;Not all features require ADRs, especially when building on top of an existing framework or architecture. However, when a significant technical decision is made, documenting it as an ADR provides additional inputs for generating a good TDD and traceability when reviewing the TDD.&lt;/p&gt;\n&lt;h2 id=&quot;technical-design-documents-tdds&quot;&gt;Technical design documents (TDDs)&lt;/h2&gt;\n&lt;p&gt;A technical design document&lt;sup&gt;&lt;a href=&quot;#user-content-fn-3&quot; id=&quot;user-content-fnref-3&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;3&lt;/a&gt;&lt;/sup&gt; describes &lt;em&gt;how&lt;/em&gt; the functionality described in a PRD is implemented. In some companies, this is called an architectural design document or technical specification, but the underlying purpose is the same: to give a high-level overview of the software solution that fulfills the requirements of the PRD. In traditional software engineering, a tech lead or architect is responsible for reviewing a PRD and developing an appropriate technical design. The design is then reviewed by other developers to gather feedback and ensure the scope is correct.&lt;/p&gt;\n&lt;p&gt;Typical sections in a TDD include:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Overview&lt;/strong&gt; - What the design is meant to cover.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Goals&lt;/strong&gt; - Specific technical goals to achieve with the design. 
(As opposed to functional or business goals, which are described in the PRD.)&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Background/Context&lt;/strong&gt; - Any relevant technical information related to the design, such as existing infrastructure or design patterns to be followed.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Requirements&lt;/strong&gt; - Specific functional (what the system should do) and non-functional (how the system should behave) technical requirements. Non-functional requirements include security, scalability, performance, and reliability.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;High-Level Design/Architecture&lt;/strong&gt; - Describes the technical resources required and how they relate to one another.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Data Design&lt;/strong&gt; - Any new or existing data sources required. Traditionally, this would involve an entity-relationship diagram.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;API Design&lt;/strong&gt; - The interaction points between existing systems or users and the new design.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Technology Stack&lt;/strong&gt; - The languages, frameworks, tools, infrastructure, etc., necessary to implement the design.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Security Considerations -&lt;/strong&gt; Guidelines for how security best practices are followed.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Scalability/Performance&lt;/strong&gt; - How the system will work under heavy load, including anticipated performance bottlenecks and mitigations.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Testing Strategy&lt;/strong&gt; - What types of tests (unit, integration, etc.) 
will be used.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Deployment Plan&lt;/strong&gt; - How the system will be made available to users.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Monitoring/Observability&lt;/strong&gt; - How you’ll know if the system is functioning correctly once deployed.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Out of Scope&lt;/strong&gt; - Anything that’s explicitly not included in the design and won’t be completed as part of this work. This ensures time isn’t wasted by someone (or an AI) inferring a requirement that wasn’t explicitly stated.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Alternatives Considered&lt;/strong&gt; - Any other approaches that were considered and discarded. This helps answer the question, “Did you think about X?” when it inevitably comes up later.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;The overall purpose of a TDD is to ensure that all the complexities of a software design are taken into account up front, before implementation begins. Catching problems at this stage is significantly cheaper than discovering the same problems during implementation, in which case you often need to undo work.&lt;/p&gt;\n&lt;p&gt;Use AI to:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;Create the TDD from the PRD and any relevant ADRs.&lt;/li&gt;\n&lt;li&gt;Review the TDD against the PRD to look for inconsistencies, omissions, and errors.&lt;/li&gt;\n&lt;li&gt;Simultaneously update the PRD and TDD to resolve any issues found.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;In AI-assisted programming, the TDD is the last stop before implementation planning, so it’s critical to catch any problems with the technical design here. AI should be able to implement the design using the TDD in conjunction with the PRD and the codebase (IDEs automatically pass the codebase as context to AI). 
However, it’s often best to generate a task list from the TDD instead of jumping right into implementation.&lt;/p&gt;\n&lt;h2 id=&quot;task-lists&quot;&gt;Task lists&lt;/h2&gt;\n&lt;p&gt;In traditional software engineering, the next step after creating a TDD is for the tech lead, engineering manager, and potentially other engineers to break down the design into a series of tasks. The tasks are then scoped to determine the effort required for each task (and potentially a time estimate) and arranged in sequence to ensure that each task can proceed without being blocked by incomplete tasks. The tasks are then assigned to different engineers on the team for implementation.&lt;/p&gt;\n&lt;p&gt;For AI-assisted programming, it’s helpful to add even more context than a human would need to complete a task. For human programming, tasks frequently contain a small amount of information and then refer back to the TDD. For AI implementation, each task should contain:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Overview/Description&lt;/strong&gt; - The purpose of this task.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Deliverable&lt;/strong&gt; - The expected output of the task.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Dependencies&lt;/strong&gt; - How it relates to other tasks.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Acceptance Criteria&lt;/strong&gt; - How to verify that the task is complete and working as expected.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;References&lt;/strong&gt; - Specific sections in the PRD and TDD that this task addresses.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Out of Scope&lt;/strong&gt; - Anything you specifically don’t want completed as part of this task.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Tips&lt;/strong&gt; - Anything additional that will help a human or AI implement the task (libraries to use, existing code to reuse, resources needed, etc.).&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;Use AI to:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;Convert a TDD into a list of 
tasks.&lt;/li&gt;\n&lt;li&gt;Estimate the relative effort associated with each task.&lt;/li&gt;\n&lt;li&gt;Create issues or tickets in your task management software for each task.&lt;/li&gt;\n&lt;li&gt;Verify that the task list completely implements the TDD.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;In AI-assisted programming, generating a task list from the TDD makes it easier to:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Validate that the entire TDD is represented in the tasks.&lt;/li&gt;\n&lt;li&gt;Catch gaps or ambiguities in the TDD.&lt;/li&gt;\n&lt;li&gt;Use AI to implement one task at a time to reduce context size and verify task completion.&lt;/li&gt;\n&lt;li&gt;Pause and resume work as necessary.&lt;/li&gt;\n&lt;li&gt;Iterate on individual tasks when the output isn’t as expected.&lt;/li&gt;\n&lt;li&gt;Track implementation progress, especially across multiple days or sessions.&lt;/li&gt;\n&lt;/ol&gt;\n&lt;h2 id=&quot;example-save-for-later-feature&quot;&gt;Example: “Save for later” feature&lt;/h2&gt;\n&lt;p&gt;A “save for later” feature was implemented by AI as part of an online shopping cart. Users are complaining that the items they save are disappearing soon after they’re added, often within one day. 
The behavior has the team confused, because everyone agrees it isn’t what was intended.&lt;/p&gt;\n&lt;p&gt;The investigation starts by looking at the PRD, which states the following:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Requirement&lt;/strong&gt;: Users should be able to save items for later and access them anytime they return to the site.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Success metric&lt;/strong&gt;: Increase conversion rate by letting users defer purchase decisions.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;User story&lt;/strong&gt;: As a shopper, I want to save items so I can think about purchases without losing track of products I’m interested in.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;The PRD clearly states that saved items should be accessible “anytime they return to the site.” That means the data is meant to persist, and the implementation is out of alignment with the PRD. The next step is to review the TDD.&lt;/p&gt;\n&lt;p&gt;The TDD references two ADRs:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;The &lt;em&gt;Storage for “save for later” items&lt;/em&gt; ADR indicates that PostgreSQL will be used to store the data. 
PostgreSQL is a persistent data storage mechanism, so we know the data isn’t missing, just unavailable.&lt;/li&gt;\n&lt;li&gt;The &lt;em&gt;Caching for “save for later” items&lt;/em&gt; ADR indicates that Redis will be used as an in-memory cache for “save for later” items to speed up UI rendering.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;Further down in the TDD:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Data design:&lt;/strong&gt; Saved items are stored in the PostgreSQL &lt;code&gt;saved_items&lt;/code&gt; table and in Redis with the key pattern &lt;code&gt;saved:{user_id}&lt;/code&gt;.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Implementation details:&lt;/strong&gt;\n&lt;ul&gt;\n&lt;li&gt;Default Redis TTL: 86400 seconds (24 hours) to prevent unbounded growth.&lt;/li&gt;\n&lt;li&gt;&lt;code&gt;POST /saved_items&lt;/code&gt; endpoint writes to both PostgreSQL and Redis.&lt;/li&gt;\n&lt;li&gt;&lt;code&gt;GET /saved_items&lt;/code&gt; endpoint reads from Redis for better performance.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;While each piece of the TDD is technically correct, there’s a key missing detail: the &lt;code&gt;GET /saved_items&lt;/code&gt; endpoint needs to implement a read-through cache, so that if the data doesn’t exist in Redis, it is fetched from PostgreSQL and the cache is repopulated. Users are seeing their saved items disappear not because they’re being deleted, but because the Redis cache entry has expired and the endpoint never falls back to PostgreSQL.&lt;/p&gt;\n&lt;p&gt;AI missed this crucial detail when generating the TDD. In all likelihood, human reviewers didn’t catch the omission because a read-through cache was implied by the surrounding details. When AI generated a task list from the TDD, a read-through cache was never explicitly planned for or implemented.&lt;/p&gt;\n&lt;p&gt;In this case, the TDD generation needed information in addition to the PRD and ADRs to generate the correct design. 
The team can update the prompt used for this portion of the development process to reduce the likelihood of a similar error of omission in the future. For example, a document describing how the team typically uses read-through caches for PostgreSQL and Redis can be added to the context of TDD generation.&lt;/p&gt;\n&lt;p&gt;Without the artifacts to review, it would be difficult to track down the source of the error and prevent similar errors in the future.&lt;/p&gt;\n&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;\n&lt;p&gt;AI-assisted programming offers tremendous potential to accelerate software development, but it requires a fundamental shift in how we approach the development process. The “vibe coding” approach that works for quick prototypes becomes a liability in professional settings where maintainability, traceability, and long-term success matter. Unlike human developers who remember why decisions were made, AI has no memory beyond its current context window, making comprehensive documentation essential rather than optional.&lt;/p&gt;\n&lt;p&gt;By creating and maintaining proper artifacts (such as PRDs, ADRs, TDDs, and task lists), you build a knowledge foundation that compensates for AI’s lack of memory. These artifacts serve dual purposes: they guide AI toward better implementations during development and provide the investigative trail you need when problems occur. When your server goes down at 3 a.m., these artifacts let you quickly trace the issue back to its source, whether that’s an ambiguous requirement, a flawed technical decision, or a gap in the implementation. They transform AI from an unpredictable code generator into a reliable development partner.&lt;/p&gt;\n&lt;p&gt;The investment in documentation may seem like it slows down initial development, but it pays dividends when you need to debug problems, onboard new team members, or understand decisions made months or years ago. 
In a world where AI is increasingly involved in writing code, the artifacts you create become even more valuable than the code itself. You’ll likely find that the process not only improves your AI-generated code but also clarifies your own thinking about the problem you’re solving. Start with your next feature and experience the difference proper artifacts make.&lt;/p&gt;\n&lt;section data-footnotes=&quot;&quot; class=&quot;footnotes&quot;&gt;&lt;h2 class=&quot;sr-only&quot; id=&quot;footnote-label&quot;&gt;Footnotes&lt;/h2&gt;\n&lt;ol&gt;\n&lt;li id=&quot;user-content-fn-1&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://www.atlassian.com/agile/product-management/requirements&quot;&gt;Product requirements&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-1&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-2&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions&quot;&gt;Documenting Architecture Decisions&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-2&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-3&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://www.chatprd.ai/templates/technical-design-document-template&quot;&gt;Technical Design Document Template&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-3&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to 
content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;/ol&gt;\n&lt;/section&gt;","tags":["AI","Specifications","PRDs","ADRs","Artifacts"],"date_published":"2026-02-03T00:00:00.000Z","date_updated":"2026-02-03T00:00:00.000Z"},{"id":"https://humanwhocodes.com/blog/2026/01/coder-orchestrator-future-software-engineering/","url":"https://humanwhocodes.com/blog/2026/01/coder-orchestrator-future-software-engineering/","title":"From Coder to Orchestrator: The future of software engineering with AI","author":{"name":"Nicholas C. Zakas"},"summary":"The software engineering job of the future won't involve writing code; it will involve orchestrating AI agents to write code for you.","content_text":"\nThe software engineering industry is undergoing a major AI-driven transition in how we work. The days when humans needed to write every line of code are already behind us as LLMs become more capable and reliable. The improvement in code output during 2025 alone has been astounding. I've personally watched LLMs struggle with certain problems, then a few months later, solve them completely and efficiently. Progress will likely accelerate even further in 2026\\.\n\nAs Addy Osmani put it, we’re moving from coder to conductor to orchestrator[^1].\n\n## From autocomplete to agent-based development\n\nAt the start of 2024, AI-assisted programming resembled a significantly improved autocomplete. You could maximize AI's value by having it write tedious, common patterns learned from its training data. Coders had a form of \"cruise control\"—models filled in the blanks as you worked, but you remained the driver of the development process. The time savings came primarily from mundane tasks like writing documentation comments, utility functions, data models, and tests.\n\nIn 2025, we got our first taste of partially autonomous models that could take a single prompt and figure out how to implement the request. 
The coder now acted more like a *conductor*, giving an LLM instructions to complete a task, then sitting back to see what it produced, providing feedback, and repeating the cycle. Models became capable of creating complex, multi-file solutions, and in some cases, complete applications, from a single prompt. While we still initiated the code creation process, we acted like an orchestra conductor guiding the violin section through a carefully measured motif. We remained ultimately responsible for the output but spent more time reviewing than coding. We'd been upgraded from cruise control to partially self-driving.\n\nTowards the end of 2025, fully autonomous background agents gave rise to a new way of working. Instead of limiting ourselves to one synchronous AI session at a time, we can now act as an *orchestrator*, arranging work across multiple agents and determining how to combine them later. Through offerings like Cursor cloud agents[^2] and GitHub Copilot coding agent[^3], you can give instructions to be completed without active monitoring. These cloud-based agents continue executing until the task is complete, then notify you when it's time to review their work. The isolated cloud environments remove much of the security risk of running agents on personal computers, allowing them to try, fail, and retry approaches without risking collateral damage.\n\nThis human-as-orchestrator trend appears to be the future of software development, making it worth exploring how the human development experience will change in the coming years.\n\n## From text-focused IDEs to task-focused IDEs\n\nWith humans no longer responsible for writing code, the question arises: what will the integrated development environment (IDE) interface look like? Undoubtedly, humans will still initiate code generation, so what kind of interface will that require?\n\nIf humans are primarily orchestrators, then IDEs will evolve to focus on managing coding agents instead of editing code. 
We can see this evolution already happening with GitHub Mission Control[^4] on the web, and with the Visual Studio Code agents panel[^5] and Google Antigravity[^6] on the desktop.\n\nThese interfaces put agent management front and center, pushing code into the background. The user experience is optimized for quickly scanning agent sessions to see which are pending, complete, or stuck and need intervention. You can also easily switch between different sessions, each represented by a chat interface that allows you to interact and iterate with them all in one place.\n\nIn the future, I expect these agent-focused interfaces to evolve to provide ways to combine outputs from different agent sessions into a finished project. I can imagine an IDE that manages creation of the front end, back end, and data layer of an application in separate sessions, then easily incorporates all three into a single, functional codebase.\n\nI think we could see this evolution move quickly, with most IDEs in 2028 being primarily agent-focused.\n\n## From hands-on coding to no-hands-on coding\n\nIf agents are primarily responsible for coding, it's not a stretch to think that at some point, humans may not be allowed to code by hand at work. After all, once AI can code better than humans, why would you want a human to touch the code? Doing so would introduce unnecessary risk and increase the potential for bugs.\n\nThink of this as similar to self-driving cars. Once self-driving cars become established and ubiquitous, human driving will be considered dangerous and unpredictable. Insurance rates for human drivers will likely skyrocket due to the risk compared to the alternative. While driving competitions probably won't go away, public streets will be dominated by self-driving cars, nearly guaranteeing safe, traffic-free passage anywhere you want to go. 
Certain roads may even ban human drivers due to the inefficiencies and associated danger.\n\nCoding will face a similar shift, especially in high-risk domains such as self-driving vehicles, medical devices, and financial systems. It will be deemed \"too important\" to leave in human hands. Just as no Fortune 500 company would consider doing its bookkeeping by hand, the idea of leaving such important functionality to fallible humans will seem laughable at best and irresponsible at worst.\n\nInsurance companies will likely force this transition through higher business insurance rates for companies that allow humans to code. Policies already require certain business practices, such as daily backups, to get coverage. These policies are constantly updated to include technical best practices, and at some point, it will be a best practice to ensure no human makes direct edits to source code. All edits must be done through an AI that constantly checks for errors and security problems.\n\nThis transition is likely to take more time, but I’d bet that we start seeing the first glimpses of this approach as we near 2030\\.\n\n## From code reviewer to task verifier\n\nAs I'm writing this, AI is already capable of writing both source code and unit tests for that code. When integration tests already exist, AI is also capable of adding and modifying them. Right now, I haven't experienced AI being able to set up a full integration test environment on its own, though I suspect this will change relatively quickly. The important human value added to the process is reviewing the code to ensure it meets requirements and follows associated best practices. At this stage, the source code serves primarily as a human auditing tool.\n\nOnce AI is better than humans at writing code, it will also be better at reviewing code. Reviewing code is a bit more complex than writing it because you (or a model) need the correct context to understand the code. 
Nevertheless, AI will eventually meet this bar, and at that point, humans will no longer need to review the generated code. Through a combination of non-deterministic and deterministic analyses, AI will identify any problems in the code and fix them automatically.\n\nIn all likelihood, this will become part of the development process where one agent generates the code and another agent reviews it and provides feedback. The first agent then makes changes to address the feedback, triggering another review. The process continues until the code is deemed acceptable. Google Antigravity already implements something along these lines by having the agent generate and execute a verification plan[^7] after code generation is complete.\n\nWhen the agents agree on the state of the code, the human's job is reduced to verifying that the requested task is completed as specified. Even this step will rely more on artifacts generated by the agents than on reviewing code. For instance, instead of conducting a code review to determine that a button to create a new task was added, an agent will provide screenshots of the relevant new UI and a video showing the new interactions. If there are any problems, the human provides feedback and the coding agent makes the appropriate changes. Google Antigravity has already started down this path by taking screenshots and videos[^8] of UI changes.\n\nThe change will be automatically deployed after the human reviewer verifies the task is complete.\n\nGiven that this is the way Google Antigravity currently works, I expect this approach to be common in 2028 and the norm by 2030\\.\n\n## Future programming languages\n\n“Programs must be written for people to read, and only incidentally for machines to execute.”  \n― Harold Abelson, Structure and Interpretation of Computer Programs \n\nWhile AI currently uses existing programming languages, I don't think that will necessarily remain the case. 
The current generation of higher-level languages was created to make programming easier for humans compared to manually executing processor instructions. In a world where humans don't need to write or review code, the utility of these languages is significantly lessened, especially considering the extra tokens necessary to create visually appealing patterns for humans to recognize.\n\nMore token-efficient programming languages can certainly be created, allowing AI to generate the same code using fewer resources. We already have Token-Oriented Object Notation[^9] (TOON) as a more token-efficient alternative to JSON for use with LLMs. I don't think new imperative languages are far off. Given the amount of AI-generated code we already have, and how much that will increase in the future, creating languages with more compact syntax could both increase the speed of code generation and reduce the cost.\n\nA common pushback to this point is that AI is only good at coding because it was trained on a massive amount of code spanning decades. That code represents both best practices and common patterns to be mimicked in generated code. While that's true of today's LLMs, I don't think we should assume that all future models will need the same amount of training or sample data to produce functionally equivalent code in a new programming language. LLMs may not end up being the code generation model of choice as AI continues to evolve and new model types are created. It's possible we'll end up with a model that can write excellent code simply by reviewing the language specification and discovering best practices along the way.\n\nIn the end, code will be written primarily for computers to execute and only incidentally for humans to read. A full transition to this approach may take until 2035 to have compilers and interpreters for new languages replace existing ones. 
However, new languages could be developed in the meantime to act as intermediate languages to be transpiled into their final form by deterministic tools.\n\n## From engineering teams to minimum viable engineering teams (MVET)\n\nIn the new orchestrator role, a single engineer will guide multiple agents producing code equivalent to multiple engineers. The result will necessarily be smaller engineering teams with the bare minimum number of engineers to get the desired output. I call this a minimum viable engineering team (MVET). How small the MVET will be is based primarily on non-technical factors, such as:\n\n* **Will a single engineer working in isolation be happy and dedicated?** While many engineers claim they wish they could just do all the work themselves without dealing with the messiness of other humans, that might not be the reality. Humans are social creatures, and part of the fun is collaborating with others to solve interesting problems. There may be some minimum team size that's optimal for productive engineers.  \n* **What value do companies place on eliminating single points of failure (SPOFs)?** It may be tempting to have a single engineer, especially at a small company or startup, but what if that engineer goes on vacation, gets sick, or leaves? Or, knowing their worth, leverages their position for more pay or other concessions? It may be worth the extra payroll to have redundancy on the engineering team to eliminate this SPOF.  \n* **How important are subject matter experts?** Will companies want to hire specialists or generalists? In a complex system, expecting one person to be an expert in all aspects of an application is unrealistic. Companies may still want a front-end expert, a back-end expert, and a database expert to orchestrate the agents associated with those areas.  \n* **What kind of a talent pipeline does the company want?** Companies will want some continuity plan for when engineers leave. 
Having at least one senior and one junior engineer on a task both eliminates a SPOF and plans for a future in which the senior engineer leaves.  \n* **Does AI orchestration productivity increase with more engineers?** We may discover that engineers are more effective orchestrators when they have other engineers to collaborate with. Traditional pair programming techniques such as driver-navigator[^10] may reduce errors and produce higher-quality output. Even without following any strict collaboration setup, just having someone else to bounce ideas off of and provide a different perspective may yield better results. In that case, it would benefit the company to have at least two engineers per orchestration.\n\nThere will undoubtedly be companies that try to pare down their engineering teams to the bare minimum. My hypothesis is that these companies will suffer as a result and eventually realize that an MVET is never just one person.\n\nEngineering teams will be smaller in the future, but how much smaller will vary by company. I've worked at many companies that overhired engineers because they were focused on the desired lines of code or parallelization necessary for a given project. With so much code needed, it was easy for subpar coders to find and keep well-paying jobs even as they underperformed. These individuals will not have software engineering jobs in the future. AI can already produce better-quality code at a much lower cost.\n\nWe’ve already seen the transition to smaller teams in 2025, and I think we’ll see a permanent rethinking of engineering team size by 2028\\.\n\n## From delivering code to delivering value\n\nSo who will have software engineering jobs? In the near term, the engineers who keep their jobs will show an aptitude not just for creating code but also for organizing work. These are typically skills people learn as they progress from junior engineer to senior engineer to tech lead and beyond. 
At that point, the job becomes less about delivering code and more about delivering value.\n\nMany tech leads are initially frustrated that they need to spend less time coding to take on other tasks like reviewing technical specifications and pull requests, mentoring, and assigning tasks. They're frequently forced to start delegating work that they would otherwise prefer to do themselves. Eventually, though, tech leads relax into this role and understand that they can deliver value in many ways, not just through writing code. In fact, they can act as a multiplier on the team and make everyone else more productive.\n\nFuture software engineers will need to behave more like tech leads right from the start of their careers. These are the skills I think will be important going forward:\n\n* **Organization.** Software development using agents means much more work to organize tasks across agents. Engineers will need to develop \"flight plans\" that map out how work will be split, parallelized, and merged to achieve the desired result. This includes logging and observability work to ensure traceability for agent workstreams.  \n* **Communication.** With fewer engineers per team, ironically, communication among humans becomes more important. There will be much more cross-functional communication, and miscommunication is more likely to result in wasted work.  \n* **Systems thinking.** Engineers will no longer be able to focus on a particular component in a system. Instead, they'll need to think about how pieces fit together into a greater whole.  \n* **Model selection.** Just as engineers today need to understand which languages to use for specific types of problems, we'll also need to know which model is best suited for which types of tasks. Even within a particular model family, it's important to know which generation and variant to use. For example, when is it better to use Claude 3.5 Sonnet vs. Claude 4.5 Haiku?  
\n* **Prompt engineering.** With all the agent interaction, writing prompts is a key skill. Prompts need to be specific enough to ensure agents aren't wasting time misinterpreting commands. Practices like providing examples and counterexamples, using steps and lists, and breaking complex tasks into individual prompts are all important tools for this role.  \n* **Output validation.** Much of an engineer's job will be validating AI output. To do so, we will need to create deterministic workflows that can evaluate it. Engineers must have a good sense of when something is incomplete or error-prone before it deploys to production.  \n* **Debugging workflows.** When agents don't produce the desired result or the output isn't working correctly, engineers will need to debug workflows to determine the source of the problem.  \n* **Agent-focused information management.** Documentation is critical when agents are doing all the work. Information needs to be structured for AI consumption, and retrieval systems need to be available to agents so they can fetch the up-to-date data they need on demand.  \n* **Security.** Making sure agents don't access sensitive systems or fall victim to prompt injection attacks or jailbreaking will be a constant concern. Even cloud agents may be able to escape their sandbox, so this must be taken into account.  \n* **Budget management.** Engineers may be given an AI budget on a weekly or monthly basis to keep costs in check. Whether that budget comes in the form of dollars, tokens, or credits, understanding the value you'll get from a specific model's output versus the cost to generate it will likely become an important skill.\n\nUp to this point, the ability to create functional code could land you an entry-level job as a software engineer. With code generation now commoditized through AI, this is no longer the case. The skills companies look for are already changing, making it more difficult for bootcamp-taught individuals to attain jobs. 
The necessary skills are those that are more difficult to teach and often hard-earned through on-the-job experience. As such, software engineering jobs will likely attract fewer people, as the \"fun\" part of the job is now being handled by AI and the more manager-like part of the job becomes more important.\n\nThis transition is already taking place and I expect it to be mostly complete by 2028\\.\n\n## Conclusion\n\nThe transition from coder to orchestrator represents a fundamental shift in how software is created. While the timeline for each phase varies, the direction is clear: humans will spend less time writing code and more time directing AI agents to do so. This isn't a distant future scenario. The tools and practices described here are already emerging, with full adoption of orchestrator-based development likely within the next five years.\n\nThis shift will be disruptive, particularly for those whose primary value proposition is writing code. Entry-level positions will become scarcer, and the barrier to entry will rise as companies seek engineers with organizational and strategic skills typically acquired through years of experience. However, for those who adapt, the role of software engineer will become more creative and strategic, focused on solving problems at a higher level of abstraction.\n\nThe future of software engineering isn't about humans versus AI. It's about humans and AI working together, with humans providing the vision, judgment, and orchestration while AI handles the implementation details. 
Those who embrace this partnership and develop the skills to thrive in an orchestrator role will find themselves not replaced by AI, but empowered by it.\n\n[^1]: [Conductors to Orchestrators: The Future of Agentic Coding](https://addyo.substack.com/p/conductors-to-orchestrators-the-future)\n[^2]: [Cloud Agents](https://cursor.com/docs/cloud-agent)\n[^3]: [About GitHub Copilot coding agent](https://docs.github.com/en/copilot/concepts/agents/coding-agent/about-coding-agent)\n[^4]: [GitHub Mission Control](https://github.com/copilot/agents)\n[^5]: [Using agents in Visual Studio Code](https://code.visualstudio.com/docs/copilot/agents/overview#_manage-agent-sessions)\n[^6]: [Google Antigravity](https://antigravity.google/)\n[^7]: [Verification Plans in Google Antigravity](https://youtu.be/htV29JrMXmA)\n[^8]: [Frontend - Google Antigravity](https://antigravity.google/use-cases/frontend)\n[^9]: [TOON - Token-Oriented Object Notation](https://toonformat.dev/)\n[^10]: [On Pair Programming](https://martinfowler.com/articles/on-pair-programming.html#Styles)\n","content_html":"&lt;p&gt;The software engineering industry is undergoing a major AI-driven transition in how we work. The days when humans needed to write every line of code are already behind us as LLMs become more capable and reliable. The improvement in code output during 2025 alone has been astounding. I’ve personally watched LLMs struggle with certain problems, then a few months later, solve them completely and efficiently. 
Progress will likely accelerate even further in 2026.&lt;/p&gt;\n&lt;p&gt;As Addy Osmani put it, we’re moving from coder to conductor to orchestrator&lt;sup&gt;&lt;a href=&quot;#user-content-fn-1&quot; id=&quot;user-content-fnref-1&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;\n&lt;h2 id=&quot;from-autocomplete-to-agent-based-development&quot;&gt;From autocomplete to agent-based development&lt;/h2&gt;\n&lt;p&gt;At the start of 2024, AI-assisted programming resembled a significantly improved autocomplete. You could maximize AI’s value by having it write tedious, common patterns learned from its training data. Coders had a form of “cruise control”—models filled in the blanks as you worked, but you remained the driver of the development process. The time savings came primarily from mundane tasks like writing documentation comments, utility functions, data models, and tests.&lt;/p&gt;\n&lt;p&gt;In 2025, we got our first taste of partially autonomous models that could take a single prompt and figure out how to implement the request. The coder now acted more like a &lt;em&gt;conductor&lt;/em&gt;, giving an LLM instructions to complete a task, then sitting back to see what it produced, providing feedback, and repeating the cycle. Models became capable of creating complex, multi-file solutions, and in some cases, complete applications, from a single prompt. While we still initiated the code creation process, we acted like an orchestra conductor guiding the violin section through a carefully measured motif. We remained ultimately responsible for the output but spent more time reviewing than coding. We’d been upgraded from cruise control to partially self-driving.&lt;/p&gt;\n&lt;p&gt;Towards the end of 2025, fully autonomous background agents gave rise to a new way of working. 
Instead of limiting ourselves to one synchronous AI session at a time, we can now act as an &lt;em&gt;orchestrator&lt;/em&gt;, arranging work across multiple agents and determining how to combine them later. Through offerings like Cursor cloud agents&lt;sup&gt;&lt;a href=&quot;#user-content-fn-2&quot; id=&quot;user-content-fnref-2&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;2&lt;/a&gt;&lt;/sup&gt; and GitHub Copilot coding agent&lt;sup&gt;&lt;a href=&quot;#user-content-fn-3&quot; id=&quot;user-content-fnref-3&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;3&lt;/a&gt;&lt;/sup&gt;, you can give instructions to be completed without active monitoring. These cloud-based agents continue executing until the task is complete, then notify you when it’s time to review their work. The isolated cloud environments remove much of the security risk of running agents on personal computers, allowing them to try, fail, and retry approaches without risking collateral damage.&lt;/p&gt;\n&lt;p&gt;This human-as-orchestrator trend appears to be the future of software development, making it worth exploring how the human development experience will change in the coming years.&lt;/p&gt;\n&lt;h2 id=&quot;from-text-focused-ides-to-task-focused-ides&quot;&gt;From text-focused IDEs to task-focused IDEs&lt;/h2&gt;\n&lt;p&gt;With humans no longer responsible for writing code, the question arises: what will the integrated development environment (IDE) interface look like? Undoubtedly, humans will still initiate code generation, so what kind of interface will that require?&lt;/p&gt;\n&lt;p&gt;If humans are primarily orchestrators, then IDEs will evolve to focus on managing coding agents instead of editing code. 
We can see this evolution already happening with GitHub Mission Control&lt;sup&gt;&lt;a href=&quot;#user-content-fn-4&quot; id=&quot;user-content-fnref-4&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;4&lt;/a&gt;&lt;/sup&gt; on the web and the Visual Studio Code agents panel&lt;sup&gt;&lt;a href=&quot;#user-content-fn-5&quot; id=&quot;user-content-fnref-5&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;5&lt;/a&gt;&lt;/sup&gt; and Google Antigravity&lt;sup&gt;&lt;a href=&quot;#user-content-fn-6&quot; id=&quot;user-content-fnref-6&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;6&lt;/a&gt;&lt;/sup&gt; on the desktop.&lt;/p&gt;\n&lt;p&gt;All three interfaces put agent management front and center, pushing code into the background. The user experience is optimized for quickly scanning agent sessions to see which are pending, complete, or stuck and need intervention. You can also easily switch between different sessions, each represented by a chat interface that allows you to interact and iterate with them all in one place.&lt;/p&gt;\n&lt;p&gt;In the future, I expect these agent-focused interfaces to evolve to provide ways to combine outputs from different agent sessions into a finished project. I can imagine an IDE that manages creation of the front end, back end, and data layer of an application in separate sessions, then easily incorporates all three into a single, functional codebase.&lt;/p&gt;\n&lt;p&gt;I think we could see this evolution move quickly, with most IDEs in 2028 being primarily agent-focused.&lt;/p&gt;\n&lt;h2 id=&quot;from-hands-on-coding-to-no-hands-on-coding&quot;&gt;From hands-on coding to no-hands-on coding&lt;/h2&gt;\n&lt;p&gt;If agents are primarily responsible for coding, it’s not a stretch to think that at some point, humans may not be allowed to code by hand at work. 
After all, once AI can code better than humans, why would you want a human to touch the code? Doing so would introduce unnecessary risk and increase the potential for bugs.&lt;/p&gt;\n&lt;p&gt;Think of this as similar to self-driving cars. Once self-driving cars become established and ubiquitous, human driving will be considered dangerous and unpredictable. Insurance rates for human drivers will likely skyrocket due to the risk compared to the alternative. While driving competitions probably won’t go away, public streets will be dominated by self-driving cars, nearly guaranteeing safe, traffic-free passage anywhere you want to go. Certain roads may even ban human drivers due to the inefficiencies and associated danger.&lt;/p&gt;\n&lt;p&gt;Coding will face a similar shift, especially in high-risk systems such as self-driving, medical, and financial. It will be deemed “too important” to leave in human hands. Just as no Fortune 500 company would consider doing its bookkeeping by hand, the idea of leaving such important functionality to fallible humans will seem laughable at best and irresponsible at worst.&lt;/p&gt;\n&lt;p&gt;Insurance companies will likely force this transition through higher business insurance rates for companies that allow humans to code. Policies already require certain business practices, such as daily backups, to get coverage. These policies are constantly updated to include technical best practices, and at some point, it will be a best practice to ensure no human makes direct edits to source code. 
All edits must be done through an AI that continuously checks for errors and security problems.&lt;/p&gt;\n&lt;p&gt;This transition is likely to take more time, but I’d bet that we start seeing the first glimpses of it as we approach 2030.&lt;/p&gt;\n&lt;h2 id=&quot;from-code-reviewer-to-task-verifier&quot;&gt;From code reviewer to task verifier&lt;/h2&gt;\n&lt;p&gt;As I’m writing this, AI is already capable of writing both source code and unit tests for that code. When integration tests already exist, AI is also capable of adding and modifying them. Right now, I haven’t experienced AI being able to set up a full integration test environment on its own, though I suspect this will change relatively quickly. The important human value added to the process is reviewing the code to ensure it meets requirements and follows associated best practices. At this stage, the source code serves primarily as a human auditing tool.&lt;/p&gt;\n&lt;p&gt;Once AI is better than humans at writing code, it will also be better at reviewing code. Reviewing code is a bit more complex than writing it because you (or a model) need the correct context to understand the code. Nevertheless, AI will eventually meet this bar, and at that point, humans will no longer need to review the generated code. Through a series of non-deterministic and deterministic analyses, AI will identify any problems in the code and fix them automatically.&lt;/p&gt;\n&lt;p&gt;In all likelihood, this will become part of the development process where one agent generates the code and another agent reviews it and provides feedback. The first agent then makes changes to address the feedback, triggering another review. The process continues until the code is deemed acceptable. 
Google Antigravity already implements something along these lines by having the agent generate and execute a verification plan&lt;sup&gt;&lt;a href=&quot;#user-content-fn-7&quot; id=&quot;user-content-fnref-7&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;7&lt;/a&gt;&lt;/sup&gt; after code generation is complete.&lt;/p&gt;\n&lt;p&gt;When the agents agree on the state of the code, the human’s job is reduced to verifying that the requested task is completed as specified. Even this step will rely more on artifacts generated by the agents rather than on reviewing code. For instance, instead of conducting a code review to determine that a button to create a new task was added, an agent will provide screenshots of the relevant new UI and a video showing the new interactions. If there are any problems, the human provides feedback and the coding agent makes the appropriate changes. Google Antigravity has already started down this path by taking screenshots and videos&lt;sup&gt;&lt;a href=&quot;#user-content-fn-8&quot; id=&quot;user-content-fnref-8&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;8&lt;/a&gt;&lt;/sup&gt; of UI changes.&lt;/p&gt;\n&lt;p&gt;The change will be automatically deployed after the human reviewer verifies the task is complete.&lt;/p&gt;\n&lt;p&gt;Given that this is the way Google Antigravity currently works, I expect this approach to be common in 2028 and the norm by 2030.&lt;/p&gt;\n&lt;h2 id=&quot;future-programming-languages&quot;&gt;Future programming languages&lt;/h2&gt;\n&lt;p&gt;“Programs must be written for people to read, and only incidentally for machines to execute.”&lt;br&gt;\n― Harold Abelson, Structure and Interpretation of Computer Programs&lt;/p&gt;\n&lt;p&gt;While AI currently uses existing programming languages, I don’t think that will necessarily remain the case. 
The current generation of higher-level languages was created to make programming easier for humans compared to manually executing processor instructions. In a world where humans don’t need to write or review code, the utility of these languages is significantly lessened, especially considering the extra tokens necessary to create visually appealing patterns for humans to recognize.&lt;/p&gt;\n&lt;p&gt;More token-efficient programming languages can certainly be created, allowing AI to generate the same code using fewer resources. We already have Token-Oriented Object Notation&lt;sup&gt;&lt;a href=&quot;#user-content-fn-9&quot; id=&quot;user-content-fnref-9&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;9&lt;/a&gt;&lt;/sup&gt; (TOON) as a more token-efficient alternative to JSON for use with LLMs. I don’t think new imperative languages are far off. Given the amount of AI-generated code we already have, and how much that will increase in the future, creating languages with more compact syntax could both increase the speed of code generation and reduce the cost.&lt;/p&gt;\n&lt;p&gt;A common pushback to this point is that AI is only good at coding because it was trained on a massive amount of code spanning decades. That code represents both best practices and common patterns to be mimicked in generated code. While that’s true of today’s LLMs, I don’t think we should assume that all future models will need the same amount of training or sample data to produce functionally equivalent code in a new programming language. LLMs may not end up being the code generation model of choice as AI continues to evolve and new model types are created. It’s possible we’ll end up with a model that can write excellent code simply by reviewing the language specification and discovering best practices along the way.&lt;/p&gt;\n&lt;p&gt;In the end, code will be written primarily for computers to execute and only incidentally for humans to read. 
A full transition, in which compilers and interpreters for new languages replace existing ones, may take until 2035. However, new languages could be developed in the meantime to act as intermediate languages to be transpiled into their final form by deterministic tools.&lt;/p&gt;\n&lt;h2 id=&quot;from-engineering-teams-to-minimum-viable-engineering-teams-mvet&quot;&gt;From engineering teams to minimum viable engineering teams (MVET)&lt;/h2&gt;\n&lt;p&gt;In the new orchestrator role, a single engineer will guide multiple agents producing code equivalent to that of multiple engineers. The result will necessarily be smaller engineering teams with the bare minimum number of engineers to get the desired output. I call this a minimum viable engineering team (MVET). How small the MVET will be is based primarily on non-technical factors, such as:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Will a single engineer working in isolation be happy and dedicated?&lt;/strong&gt; While many engineers claim they wish they could just do all the work themselves without dealing with the messiness of other humans, that might not be the reality. Humans are social creatures, and part of the fun is collaborating with others to solve interesting problems. There may be some minimum team size that’s optimal for productive engineers.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;What value do companies place on eliminating single points of failure (SPOFs)?&lt;/strong&gt; It may be tempting to have a single engineer, especially at a small company or startup, but what if that engineer goes on vacation, gets sick, or leaves? Or, knowing their worth, leverages their position for more pay or other concessions? It may be worth the extra payroll to have redundancy on the engineering team to eliminate this SPOF.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;How important are subject matter experts?&lt;/strong&gt; Will companies want to hire specialists or generalists? 
In a complex system, expecting one person to be an expert in all aspects of an application is unrealistic. Companies may still want a front-end expert, a back-end expert, and a database expert to orchestrate the agents associated with those areas.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;What kind of talent pipeline does the company want?&lt;/strong&gt; Companies will want some continuity plan for when engineers leave. Having at least one senior and one junior engineer on a task eliminates a SPOF while also preparing for a future in which the senior engineer leaves.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Does AI orchestration productivity increase with more engineers?&lt;/strong&gt; We may discover that engineers are more effective orchestrators when they have other engineers to collaborate with. Traditional pair programming techniques such as driver-navigator&lt;sup&gt;&lt;a href=&quot;#user-content-fn-10&quot; id=&quot;user-content-fnref-10&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;10&lt;/a&gt;&lt;/sup&gt; may reduce errors and produce higher-quality output. Even without following any strict collaboration setup, just having someone else to bounce ideas off of and provide a different perspective may yield better results. In that case, it would benefit the company to have at least two engineers per orchestration.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;There will undoubtedly be companies that try to pare down their engineering teams to the bare minimum. My hypothesis is that these companies will suffer as a result and eventually realize that an MVET is never just one person.&lt;/p&gt;\n&lt;p&gt;Engineering teams will be smaller in the future, but how much smaller will vary by company. I’ve worked at many companies that overhired engineers because they were focused on the desired lines of code or parallelization necessary for a given project. 
With so much code needed, it was easy for subpar coders to find and keep well-paying jobs even as they underperformed. These individuals will not have software engineering jobs in the future. AI can already produce better quality code at a much lower cost.&lt;/p&gt;\n&lt;p&gt;We’ve already seen the transition to smaller teams in 2025, and I think we’ll see a permanent rethinking of engineering team size by 2028.&lt;/p&gt;\n&lt;h2 id=&quot;from-delivering-code-to-delivering-value&quot;&gt;From delivering code to delivering value&lt;/h2&gt;\n&lt;p&gt;So who will have software engineering jobs? In the near term, the engineers who keep their jobs will show an aptitude not just for creating code but also for organizing work. These are typically skills people learn as they progress from junior engineer to senior engineer to tech lead and beyond. At that point, the job becomes less about delivering code and more about delivering value.&lt;/p&gt;\n&lt;p&gt;Many tech leads are initially frustrated that they need to spend less time coding to take on other tasks like reviewing technical specifications and pull requests, mentoring, and assigning tasks. They’re frequently forced to start delegating work that they would otherwise prefer to do themselves. Eventually, though, tech leads relax into this role and understand that they can deliver value in many ways, not just through writing code. In fact, they can act as a multiplier on the team and make everyone else more productive.&lt;/p&gt;\n&lt;p&gt;Future software engineers will need to behave more like tech leads right from the start of their careers. These are the skills I think will be important going forward:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;&lt;strong&gt;Organization.&lt;/strong&gt; Software development using agents means much more work to organize tasks across agents. Engineers will need to develop “flight plans” that map out how work will be split, parallelized, and merged to achieve the desired result. 
This includes logging and observability work to ensure traceability for agent workstreams.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Communication.&lt;/strong&gt; With fewer engineers per team, ironically, communication among humans becomes more important. There will be much more cross-functional communication, and miscommunication is more likely to result in wasted work.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Systems thinking.&lt;/strong&gt; Engineers will no longer be able to focus on a particular component in a system. Instead, they’ll need to think about how pieces fit together into a greater whole.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Model selection.&lt;/strong&gt; Just as engineers today need to understand which languages to use for specific types of problems, we’ll also need to know which model is best suited for which types of tasks. Even within a particular model family, it’s important to know which generation and variant to use. For example, when is it better to use Claude 3.5 Sonnet vs. Claude 4.5 Haiku?&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Prompt engineering.&lt;/strong&gt; With all the agent interaction, writing prompts is a key skill. Prompts need to be specific enough to ensure agents aren’t wasting time misinterpreting commands. Practices like providing examples and counterexamples, using steps and lists, and breaking complex tasks into individual prompts are all important tools for this role.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Output validation.&lt;/strong&gt; Much of an engineer’s job will be validating AI output. To do so, we will need to create deterministic workflows that can evaluate it. 
Engineers must have a good sense of when something is incomplete or error-prone before it deploys to production.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Debugging workflows.&lt;/strong&gt; When agents don’t produce the desired result or the output isn’t working correctly, engineers will need to debug workflows to determine the source of the problem.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Agent-focused information management.&lt;/strong&gt; Documentation is critical when agents are doing all the work. Information needs to be structured for AI consumption, and retrieval systems need to be available to agents so they can fetch the up-to-date data they need on demand.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Security.&lt;/strong&gt; Making sure agents don’t access sensitive systems or fall victim to prompt injection attacks or jailbreaking will be a constant concern. Even cloud agents may be able to escape their sandbox, so this must be taken into account.&lt;/li&gt;\n&lt;li&gt;&lt;strong&gt;Budget management.&lt;/strong&gt; Engineers may be given an AI budget on a weekly or monthly basis to keep costs in check. Whether that budget comes in the form of dollars, tokens, or credits, understanding the value you’ll get from a specific model’s output versus the cost to generate it will likely become an important skill.&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;Up to this point, the ability to create functional code could land you an entry-level job as a software engineer. With code generation now commoditized through AI, this is no longer the case. The skills companies look for are already changing, making it more difficult for bootcamp-taught individuals to attain jobs. The necessary skills are those that are more difficult to teach and often hard-earned through on-the-job experience. 
As such, software engineering jobs will likely attract fewer people, as the “fun” part of the job is now being handled by AI and the more manager-like part of the job becomes more important.&lt;/p&gt;\n&lt;p&gt;This transition is already taking place and I expect it to be mostly complete by 2028.&lt;/p&gt;\n&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;\n&lt;p&gt;The transition from coder to orchestrator represents a fundamental shift in how software is created. While the timeline for each phase varies, the direction is clear: humans will spend less time writing code and more time directing AI agents to do so. This isn’t a distant future scenario. The tools and practices described here are already emerging, with full adoption of orchestrator-based development likely within the next five years.&lt;/p&gt;\n&lt;p&gt;This shift will be disruptive, particularly for those whose primary value proposition is writing code. Entry-level positions will become scarcer, and the barrier to entry will rise as companies seek engineers with organizational and strategic skills typically acquired through years of experience. However, for those who adapt, the role of software engineer will become more creative and strategic, focused on solving problems at a higher level of abstraction.&lt;/p&gt;\n&lt;p&gt;The future of software engineering isn’t about humans versus AI. It’s about humans and AI working together, with humans providing the vision, judgment, and orchestration while AI handles the implementation details. 
Those who embrace this partnership and develop the skills to thrive in an orchestrator role will find themselves not replaced by AI, but empowered by it.&lt;/p&gt;\n&lt;section data-footnotes=&quot;&quot; class=&quot;footnotes&quot;&gt;&lt;h2 class=&quot;sr-only&quot; id=&quot;footnote-label&quot;&gt;Footnotes&lt;/h2&gt;\n&lt;ol&gt;\n&lt;li id=&quot;user-content-fn-1&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://addyo.substack.com/p/conductors-to-orchestrators-the-future&quot;&gt;Conductors to Orchestrators: The Future of Agentic Coding&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-1&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-2&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://cursor.com/docs/cloud-agent&quot;&gt;Cloud Agents&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-2&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-3&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://docs.github.com/en/copilot/concepts/agents/coding-agent/about-coding-agent&quot;&gt;About GitHub Copilot coding agent&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-3&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-4&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://github.com/copilot/agents&quot;&gt;GitHub Mission Control&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-4&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-5&quot;&gt;\n&lt;p&gt;&lt;a 
href=&quot;https://code.visualstudio.com/docs/copilot/agents/overview#_manage-agent-sessions&quot;&gt;Using agents in Visual Studio Code&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-5&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-6&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://antigravity.google/&quot;&gt;Google Antigravity&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-6&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-7&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://youtu.be/htV29JrMXmA&quot;&gt;Verification Plans in Google Antigravity&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-7&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-8&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://antigravity.google/use-cases/frontend&quot;&gt;Frontend - Google Antigravity&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-8&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-9&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://toonformat.dev/&quot;&gt;TOON - Token-Oriented Object Notation&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-9&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-10&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://martinfowler.com/articles/on-pair-programming.html#Styles&quot;&gt;On Pair Programming&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-10&quot; 
data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;/ol&gt;\n&lt;/section&gt;","tags":["AI","Coder","Orchestrator","Automation"],"date_published":"2026-01-20T00:00:00.000Z","date_updated":"2026-01-20T00:00:00.000Z"},{"id":"https://humanwhocodes.com/blog/2026/01/how-github-could-secure-npm/","url":"https://humanwhocodes.com/blog/2026/01/how-github-could-secure-npm/","title":"How GitHub could secure npm","author":{"name":"Nicholas C. Zakas"},"summary":"Why doesn't npm detect compromised packages the way credit card companies detect fraud?","content_text":"\nIn 2025, npm experienced an unprecedented number of compromised packages in a series of coordinated attacks on the JavaScript open source supply chain. These packages ranged from crypto-stealing malware[^1] to credential-stealing exploits[^2]. While GitHub announced changes[^3] to address these attacks, many maintainers (myself included) found the response insufficient.\n\n## The impact of compromised packages\n\nThe scale of these attacks is staggering. In September 2025 alone, over 500 packages were compromised across two major attack waves. The first wave on September 8 compromised 20 widely-used packages with over 2 billion weekly downloads[^4]. Despite being live for only 2 hours, the compromised versions were downloaded 2.5 million times. The second wave, known as Shai-Hulud, was even more insidious: a self-replicating worm that automatically propagated across 500+ packages.\n\nWhile the total financial damage appears limited (approximately $500 in stolen cryptocurrency), the potential for significant harm is clear. If this was merely a test to gauge the feasibility of self-replicating attacks, we should prepare for more damaging attempts in the future.\n\n## The anatomy of an attack\n\nTo understand why npm's latest updates may fall short, it's important to understand how these attacks proceed.\n\n1. 
**Steal credentials.** Attackers steal the credentials of an existing npm maintainer, preferably someone with access to a high-traffic package or one frequently used by the intended target. Credentials are stolen either by compromising the maintainer's npm account (as in the case of Qix) or by stealing npm tokens (as in the Nx compromise[^5]).  \n2. **Add a `preinstall` or `postinstall` script.** The attacker creates a malicious script that executes during the `preinstall` or `postinstall` phase of the npm package. This script runs automatically whenever `npm install` is executed, whether for the malicious package itself or any project using it.  \n3. **Publish the compromised package.** The compromised package is published to the npm registry as a semver-patch update, increasing the likelihood it will be installed before the compromise is discovered.\n\n## How compromised packages spread\n\nCompromised packages are often installed quickly after publishing due to npm's default behavior. When using `npm install`, packages are added to `package.json` using a semver range beginning with a caret (`^`), such as `^1.2.3`. This tells npm to install any version starting from the given version (in this case, `1.2.3`) up to, but excluding, the next major version (`2.0.0`). So if `1.2.4` is the latest version, it will be installed instead of `1.2.3`. This behavior assumes packages follow semantic versioning, meaning non-major version bumps are always backwards compatible.\n\nAttackers exploit this behavior by publishing compromised versions as semver-patch or semver-minor increments. This ensures that anyone doing a fresh install of a project using the package will download the compromised version instead of a safe one, provided their version range includes the new version.\n\nWhile individuals may not do fresh installs frequently, continuous integration (CI) systems typically do. 
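This caret-range behavior can be sketched in a few lines of JavaScript. This is a simplified illustration only: npm itself uses the full `semver` package, and prerelease tags and `^0.x` ranges follow additional rules that are ignored here.

```javascript
// Simplified model of npm's caret (^) range semantics:
// ^1.2.3 matches any version >= 1.2.3 and < 2.0.0.
function satisfiesCaret(version, base) {
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  const [bMaj, bMin, bPat] = base.split('.').map(Number);
  if (vMaj !== bMaj) return false;       // the next major is excluded
  if (vMin !== bMin) return vMin > bMin; // any later minor matches
  return vPat >= bPat;                   // same minor: patch must not go backwards
}

console.log(satisfiesCaret('1.2.4', '1.2.3')); // true: a compromised patch release is picked up
console.log(satisfiesCaret('1.3.0', '1.2.3')); // true: so is a compromised minor release
console.log(satisfiesCaret('2.0.0', '1.2.3')); // false: majors are excluded
```

This is why publishing a compromised version as a patch or minor bump is all it takes for fresh installs to start picking it up.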
If not properly configured, CI systems can install the compromised package, potentially giving attackers access to cloud credentials that enable further attacks.\n\nUltimately, package consumers must take extra steps to avoid installing compromised packages, such as using lock files and immutable installs. However, npm's defaults still make it too easy to use version ranges that result in automatic installation of compromised packages.\n\n## GitHub’s response to the attacks\n\nIn September, GitHub announced its response[^6] to the attacks. The changes included:\n\n* Limiting publishing to local 2FA, granular access tokens[^7], and trusted publishing[^8].  \n* Deprecating legacy classic tokens and time-based one-time password (TOTP) 2FA.  \n* Enforcing shorter expiration windows for granular tokens.  \n* Disabling access tokens by default for new packages.\n\nThese steps targeted the Shai-Hulud attack[^9], which used a compromised package to scan for additional tokens and secrets. Those tokens and secrets were then used to publish more compromised packages, making the attack self-replicating.\n\nGitHub's response specifically focused on preventing future self-replicating attacks of this nature. Deprecating older, less secure legacy tokens helps limit the scope of an attack if a malicious actor obtains someone's credentials.\n\nHowever, the response has some limitations:\n\n* Reducing the usable lifetime of tokens only ensures that older, possibly forgotten tokens can’t be used in attacks. Infiltration of machines with up-to-date tokens yields the same results.  \n* While promoting trusted publishing as an alternative to tokens makes sense for open source projects hosted on GitHub or GitLab, it leaves others without a viable option. npm currently only supports GitHub and GitLab as OpenID Connect (OIDC) providers, so maintainers not using these systems cannot use this feature.  
\n* The first publish of a new package can't use trusted publishing—it must be done with a token or locally using 2FA.  \n* Trusted publishing is not yet complete; most notably, it is still missing 2FA. This caused the OpenJS Foundation to recommend not using trusted publishing[^10] for critical packages.  \n* Maintainers of many packages now need to rotate tokens at least every 90 days, creating significant additional maintenance burden[^11].  \n* Maintainers of many packages must manually update every package through the npm web app, completing multiple 2FA verifications for each package.  \n* Removing TOTP means maintainers always need a web browser available in the same environment as the publish operation.  \n* The rapid rollout, along with shifting dates and lack of UI to accommodate common use cases, created confusion and frustration[^12] among maintainers.\n\nIn short, GitHub's response placed more responsibility on maintainers whose credentials were stolen and packages compromised. This created additional work for maintainers, especially those managing many packages. While these changes may reduce a certain type of attack, they don't address npm's systemic problems.\n\nProblems like this require a different approach. To understand what that might look like, it helps to examine another industry facing similar challenges.\n\n## How npm is like the credit card industry\n\nThe credit card industry faces challenges similar to npm's, except instead of compromised packages, it deals with fraudulent transactions. The attack vector is similar: both begin with stealing credentials. In this case, the credentials are credit card information rather than an npm login or token. I'm old enough to remember when stores would take imprints of credit cards and process all transactions in a batch at the end of the day. 
It was easy to commit fraud and never be caught using that system, so the credit card industry adapted.\n\nToday, credit cards have several ways to prevent credential theft:\n\n* The cards themselves have chips that are difficult to duplicate (as opposed to the old magnetic stripes), making it easier to authenticate a physical card.  \n* In some countries, you must enter a PIN along with presenting the chipped card to make a transaction, adding second-factor authentication to the process.  \n* When using a credit card online, you need to enter not just the number, but also the expiration date, CVC number, cardholder name, and sometimes the postal code. All of this helps ensure that someone possesses the physical card and not just the card number.\n\nEven so, credit card companies know that cards will still be stolen and used for fraudulent purchases, so they don't stop at these measures. They also monitor their networks for suspicious activity.\n\nIf you're a frequent credit card user, you've likely received a text or phone call asking if you made a particular transaction. That's the credit card company's algorithms flagging a transaction as outside your normal spending pattern. Maybe you typically make small purchases and suddenly buy a new kitchen appliance. It's not fraudulent, but it's unusual, so it gets flagged for verification. Maybe you travel to another state or country and use your credit card there. Again, it's not fraudulent, but it doesn't follow your typical usage pattern, so it's best to verify before allowing the transaction. This is called *anomaly detection*, a standard practice for identifying unwanted or unexpected data in data streams.\n\n## What npm got wrong\n\nGitHub's response to the ongoing supply chain attacks focused solely on credential theft, which is why it falls short. 
We already know how packages become compromised, and while securing credentials is important, we also know that credentials will inevitably be stolen.\n\nCredit card companies understand that fraudulent transactions will still occur regardless of how many additional factors they add to validation. That's why they invest in anomaly detection in addition to securing credentials. Once credentials are compromised, they still want to protect consumers and merchants from fraud.\n\nGitHub, on the other hand, has not invested in protecting the ecosystem from compromised packages as they are published. The latest changes place most of the responsibility on package maintainers. Long-time maintainer Qix fell victim to a convincing phishing attack—if even experienced maintainers can be compromised, less-seasoned maintainers face even greater risk.\n\nMeanwhile, GitHub continues taking down malicious packages after they've already caused damage. However, there are proactive measures GitHub could implement, such as investing in the same kind of anomaly detection that helps credit card companies flag fraudulent transactions.\n\n## What GitHub could do with npm\n\nInstead of continuing to focus solely on credential security, GitHub could analyze packages as they are published. (They already do this once they have identified Indicators of Compromise, effectively blocking new packages containing the same IoCs.) Given what we know about malicious packages, there are several ways the npm registry could be made more secure. Each of the following suggestions assumes the maintainer's npm account has been compromised and therefore we cannot rely on the npm web app for verification.\n\n### Location tracking of publishes\n\nSimilar to how credit card companies track purchase locations and flag unexpected transactions, the npm registry could flag package publishes that occur from an unexpected location. 
The npm registry likely already tracks the IP address of operations, which can be used to infer the location of the person or system publishing the package. If an `npm publish` operation occurs from a location significantly different from the previous publish, npm could require verification via email to at least one maintainer.\n\nBecause we are assuming the package owner's npm account has been compromised, npm 2FA offers little validation of the package owner's identity. Instead, npm could require the maintainer to retrieve a code sent to their email to publish a package from an unusual location. This would require the attacker to have access to both the npm account and the email account, significantly raising the bar for publishing a compromised package.\n\nWhat would count as an unusual location? Here are some examples:\n\n* The publish typically happens from a GitHub Actions datacenter but this one happens from outside the datacenter.  \n* The publish typically happens from a location in Florida but this one happens in California.  \n* The publish typically happens from a location in the United States but this one happens in China.\n\nThese heuristics can be tuned according to the actual patterns observed in the npm registry. Popular web apps like Gmail and Facebook use similar location tracking to proactively intervene when an account appears compromised.\n\n### Require semver-major version bumps when adding `preinstall` or `postinstall` scripts\n\nBecause these attacks frequently use `preinstall` or `postinstall` scripts on packages that didn't have one previously, detecting when a package is published with a `preinstall` or `postinstall` script for the first time is key. This could be done with a single bit indicating whether a major release line has a `preinstall` script and a single bit indicating whether it has a `postinstall` script. 
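As a rough sketch, the flag scheme might look like the following. The data model and function names here are hypothetical; npm's registry internals are not public:

```javascript
// Hypothetical sketch of per-release-line lifecycle-script flags; not npm's
// actual registry internals. Maps a major line ("1.x") to the flags recorded
// when that line was first published.
const releaseLines = new Map();

function checkPublish(version, scripts = {}) {
  const line = `${version.split(".")[0]}.x`;
  const flags = releaseLines.get(line);
  if (!flags) {
    // First publish in this major line: record which lifecycle scripts exist.
    releaseLines.set(line, {
      preinstall: "preinstall" in scripts,
      postinstall: "postinstall" in scripts,
    });
    return { ok: true };
  }
  // Later publish in the same line: introducing a script the line never had
  // is a violation, and the publish must fail.
  for (const name of ["preinstall", "postinstall"]) {
    if (name in scripts && !flags[name]) {
      return { ok: false, reason: `${name} added without a semver-major bump` };
    }
  }
  return { ok: true };
}

console.log(checkPublish("1.0.0", {}).ok);                          // true: establishes the 1.x flags
console.log(checkPublish("1.0.1", { postinstall: "evil.js" }).ok);  // false: rejected
console.log(checkPublish("2.0.0", { postinstall: "build.js" }).ok); // true: a new major line may add one
```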
For instance, when `1.0.0` is published, the `1.x` release line bits are set to indicate whether it has either a `preinstall` or `postinstall` script.\n\nWhen the next version of the package is published in the same major release line (for example, `1.1.0` or `1.0.1`), check the bits of the `1.x` release line to see whether a `preinstall` script already exists. If the bit is set, there's no need to further investigate `preinstall` for this new version (`preinstall` is already allowed). If the bit is not set, check `package.json` to see whether a `preinstall` script exists. If it does, this is a violation and the package publish must fail. If desired, the package may be published as the next major version (in the previous example, `2.0.0`). Repeat the process with the `postinstall` bit.\n\nThis type of anomaly detection effectively removes one of the attacker's main weapons: the speed with which a compromised package is installed. Because forcing a semver-major version bump removes it from the default range for npm dependencies, it will not automatically be installed in most projects. Some projects with customized dependency ranges (such as `> 1.0.2`) will still be affected, but the majority will be safe. This delay will hopefully both dissuade some attackers and make it easier to detect problems before they affect too many systems.\n\n### Require email-based 2FA when adding `preinstall` or `postinstall` scripts\n\nIn addition to requiring a semver-major version bump when adding `preinstall` or `postinstall` scripts, npm could also enforce verification via email to publish a new version with a `preinstall` or `postinstall` script where one didn't previously exist. This could use the same email-based 2FA system as location anomaly detection.\n\n### Require double verification for invited maintainers\n\nThe current system for inviting maintainers to a package leaves a gap that could allow attackers to circumvent email-based 2FA. 
Because the invitation process is single opt-in on the part of the invitee, an attacker could compromise an npm account and then invite a separate npm account as a maintainer to receive any email-based 2FA requests. To prevent this, the invite system should be updated so that all current maintainers receive an email asking them to confirm they intended to invite the new maintainer. As long as one of the current maintainers approves, the invite will be sent to the new maintainer.\n\n## A plea to GitHub\n\nWe know you want to be responsible stewards of the JavaScript ecosystem. We know the npm registry requires significant effort to maintain and is costly to run. However, npm's infrastructure needs more attention and resources. The response to these attacks was reactive and implemented without gathering feedback from the community most affected. Now is the time to invest in proactive security measures that can protect the registry against what is certain to be an increasing number and intensity of attacks.\n\n## Conclusion\n\nGitHub has an opportunity to take a more proactive approach to securing the npm registry. Rather than placing the burden solely on maintainers to protect their credentials, GitHub could implement anomaly detection systems similar to those used by the credit card industry. The suggestions outlined here (location tracking, restrictions on lifecycle scripts, and improved verification processes) would create multiple layers of defense that work even after credentials are compromised. These measures wouldn't eliminate all supply chain attacks, but they would significantly reduce the window of opportunity for attackers and limit the damage compromised packages can cause. Most importantly, they would demonstrate a commitment to protecting the entire JavaScript ecosystem, not just responding to attacks after they've already succeeded. The technology and patterns for these protections already exist in other industries. 
It's time for GitHub to apply them to npm.\n\n[^1]: [npm Author Qix Compromised via Phishing Email in Major Supply Chain Attack](https://socket.dev/blog/npm-author-qix-compromised-in-major-supply-chain-attack)\n[^2]: [Popular Tinycolor npm Package Compromised in Supply Chain Attack Affecting 40+ Packages](https://socket.dev/blog/tinycolor-supply-chain-attack-affects-40-packages)\n[^3]: [Our plan for a more secure npm supply chain](https://github.blog/security/supply-chain-security/our-plan-for-a-more-secure-npm-supply-chain/)\n[^4]: [New compromised packages identified in largest npm attack in history](https://jfrog.com/blog/new-compromised-packages-in-largest-npm-attack-in-history/)\n[^5]: [Nx Investigation Reveals GitHub Actions Workflow Exploit Led to npm Token Theft, Prompting Switch to Trusted Publishing](https://socket.dev/blog/nx-supply-chain-attack-investigation-github-actions-workflow-exploit)\n[^6]: [Our plan for a more secure npm supply chain](https://github.blog/security/supply-chain-security/our-plan-for-a-more-secure-npm-supply-chain/)\n[^7]: [About granular access tokens](https://docs.npmjs.com/about-access-tokens#about-granular-access-tokens)\n[^8]: [Trusted Publishers for All Package Repositories](https://repos.openssf.org/trusted-publishers-for-all-package-repositories)\n[^9]: [Updated and Ongoing Supply Chain Attack Targets CrowdStrike npm Packages](https://socket.dev/blog/ongoing-supply-chain-attack-targets-crowdstrike-npm-packages)\n[^10]: [Publishing More Securely on npm: Guidance from the OpenJS Security Collaboration Space](https://openjsf.org/blog/publishing-securely-on-npm)\n[^11]: [Comment: Classic token removal moves to December 9, bundled with new CLI improvements](https://github.com/orgs/community/discussions/179562#discussioncomment-15221604)\n[^12]: [Update: Classic token removal moves to December 9, bundled with new CLI improvements](https://github.com/orgs/community/discussions/179562)\n"
id=&quot;user-content-fn-5&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://socket.dev/blog/nx-supply-chain-attack-investigation-github-actions-workflow-exploit&quot;&gt;Nx Investigation Reveals GitHub Actions Workflow Exploit Led to npm Token Theft, Prompting Switch to Trusted Publishing&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-5&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-6&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://github.blog/security/supply-chain-security/our-plan-for-a-more-secure-npm-supply-chain/&quot;&gt;Our plan for a more secure npm supply chain&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-6&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-7&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://docs.npmjs.com/about-access-tokens#about-granular-access-tokens&quot;&gt;About granular access tokens&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-7&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-8&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://repos.openssf.org/trusted-publishers-for-all-package-repositories&quot;&gt;Trusted Publishers for All Package Repositories&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-8&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-9&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://socket.dev/blog/ongoing-supply-chain-attack-targets-crowdstrike-npm-packages&quot;&gt;Updated and Ongoing Supply Chain Attack Targets CrowdStrike npm Packages&lt;/a&gt; &lt;a 
href=&quot;#user-content-fnref-9&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-10&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://openjsf.org/blog/publishing-securely-on-npm&quot;&gt;Publishing More Securely on npm: Guidance from the OpenJS Security Collaboration Space&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-10&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-11&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://github.com/orgs/community/discussions/179562#discussioncomment-15221604&quot;&gt;Comment: Classic token removal moves to December 9, bundled with new CLI improvements&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-11&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;li id=&quot;user-content-fn-12&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://github.com/orgs/community/discussions/179562&quot;&gt;Update: Classic token removal moves to December 9, bundled with new CLI improvements&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-12&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;/ol&gt;\n&lt;/section&gt;","tags":["GitHub","npm","Security"],"date_published":"2026-01-06T00:00:00.000Z","date_updated":"2026-01-06T00:00:00.000Z"},{"id":"https://humanwhocodes.com/snippets/2025/08/setup-local-supabase-oauth-logins/","url":"https://humanwhocodes.com/snippets/2025/08/setup-local-supabase-oauth-logins/","title":"Set up local Supabase OAuth logins","author":{"name":"Nicholas C. 
Zakas"},"summary":"While it's easy to set up Supabase OAuth logins in the cloud product, setting it up for a local development environment is a bit tricky.","content_text":"\nOne of the benefits of [Supabase](https://supabase.com) is its integrated login system that supports many OAuth providers, including Google and GitHub. While there is plenty of documentation explaining how to set up OAuth providers in hosted Supabase, the [instructions for local Supabase](https://supabase.com/docs/guides/local-development/overview#use-auth-locally) are fairly terse and are missing several steps. \n\nFirst, create a callback endpoint in your application, for example, `/auth/callback`. This is the callback that will receive the OAuth information from Supabase. Specifically, you should receive either a `code` query string parameter that will allow the user to log in or an `error` parameter indicating there was an error logging in. Note that for server-side authentication you must use [`@supabase/ssr`](https://npmjs.com/package/@supabase/ssr). 
Here's an example written using [Astro](https://astro.build).\n\n```ts\n// Astro example\nimport type { APIRoute } from \"astro\";\nimport { createServerClient, parseCookieHeader } from \"@supabase/ssr\";\n\nconst supabaseUrl = import.meta.env.SUPABASE_URL;\nconst supabaseAnonKey = import.meta.env.SUPABASE_ANON_KEY;\n\nexport const GET: APIRoute = async ({ url, request, cookies, redirect }) => {\n\n    // if there's no code then redirect to login\n    const code = url.searchParams.get(\"code\");\n    if (!code) {\n        return redirect(\"/login?error=no-code\");\n    }\n    \n    // there is a code, try to log in\n    const supabase = createServerClient(supabaseUrl, supabaseAnonKey, {\n        cookies: {\n            getAll() {\n                return parseCookieHeader(request.headers.get(\"cookie\") || \"\");\n            },\n            setAll(cookiesToSet) {\n                cookiesToSet.forEach(({ name, value, options }) =>\n                    cookies.set(name, value, options)\n                );\n            },\n        },\n    });\n    \n    const { data, error } = await supabase.auth.exchangeCodeForSession(code);\n    if (error || !data?.session) {\n        return redirect(\"/login?error=unauthorized\");\n    }\n\n    return redirect(\"/\");\n}\n```\n\nWith the callback set up, you now need to let Supabase know how to call it. In general, there are three steps to the OAuth login process for Supabase:\n\n1. Your application links off to the OAuth provider's authentication service.\n2. The OAuth provider redirects to the Supabase auth service.\n3. Supabase redirects to your application callback.\n\nTo ensure that happens locally, you need to edit the `config.toml` file. First, locate the `[auth]` section at the top and make sure the value for `site_url` is your local application URL and `additional_redirect_urls` contains the full URL for the callback endpoint. Here's an example:\n\n```toml\n[auth]\nenabled = true\n# The base URL of your website. 
Used as an allow-list for redirects and for constructing URLs used\n# in emails.\nsite_url = \"http://localhost:4321\"\n# A list of *exact* URLs that auth providers are permitted to redirect to post authentication.\nadditional_redirect_urls = [\"http://localhost:4321/auth/callback\"]\n```\n\n(Without specifying `additional_redirect_urls`, you'll always have to redirect back to the application homepage.)\n\nNext, create an entry for your OAuth provider including the client ID, client secret, and redirect URI. For GitHub, it would look like this:\n\n```toml\n[auth.external.github]\nenabled = true\nclient_id = \"env(GITHUB_CLIENT_ID)\"\nsecret = \"env(GITHUB_CLIENT_SECRET)\"\nredirect_uri = \"http://localhost:54321/auth/v1/callback\"\n```\n\nThe `redirect_uri` here needs to be the local Supabase URL for OAuth callbacks. The default port is 54321, so double-check the port.\n\nAfter editing `config.toml`, you need to restart Supabase to reload the configuration:\n\n```shell\nnpx supabase stop\nnpx supabase start\n```\n\nNext, you need to generate the OAuth URL to use in your application. 
Here's an example generating a URL to log in with GitHub:\n\n```ts\nimport type { APIRoute } from \"astro\";\nimport { createServerClient, parseCookieHeader } from \"@supabase/ssr\";\n\nconst supabaseUrl = import.meta.env.SUPABASE_URL;\nconst supabaseAnonKey = import.meta.env.SUPABASE_ANON_KEY;\n\nexport const GET: APIRoute = async ({ request, cookies, redirect }) => {\n\n    const supabase = createServerClient(supabaseUrl, supabaseAnonKey, {\n        cookies: {\n            getAll() {\n                return parseCookieHeader(request.headers.get(\"cookie\") || \"\");\n            },\n            setAll(cookiesToSet) {\n                cookiesToSet.forEach(({ name, value, options }) =>\n                    cookies.set(name, value, options)\n                );\n            },\n        },\n    });\n\n    const { data, error } = await supabase.auth.signInWithOAuth({\n        provider: \"github\",\n        options: {\n            // must be listed in additional_redirect_urls\n            redirectTo: `http://localhost:4321/auth/callback`\n        }\n    });\n\n    if (error || !data?.url) {\n        return redirect(\"/login?error=unauthorized\");\n    }\n\n    return redirect(data.url);\n}\n```\n\nEverything is now wired up for the correct end-to-end flow.\n","content_html":"&lt;p&gt;One of the benefits of &lt;a href=&quot;https://supabase.com&quot;&gt;Supabase&lt;/a&gt; is its integrated login system that supports many OAuth providers, including Google and GitHub. 
While there is plenty of documentation explaining how to set up OAuth providers in hosted Supabase, the &lt;a href=&quot;https://supabase.com/docs/guides/local-development/overview#use-auth-locally&quot;&gt;instructions for local Supabase&lt;/a&gt; are fairly terse and are missing several steps.&lt;/p&gt;\n&lt;p&gt;First, create a callback endpoint in your application, for example, &lt;code&gt;/auth/callback&lt;/code&gt;. This is the callback that will receive the OAuth information from Supabase. Specifically, you should receive either a &lt;code&gt;code&lt;/code&gt; query string parameter that will allow the user to login or an &lt;code&gt;error&lt;/code&gt; parameter indicating there was an error logging in. Note that for server-side authentication you must use &lt;a href=&quot;https://npmjs.com/package/@supabase/ssr&quot;&gt;&lt;code&gt;@supabase/ssr&lt;/code&gt;&lt;/a&gt;. Here’s an example written using &lt;a href=&quot;https://astro.build&quot;&gt;Astro&lt;/a&gt;.&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// Astro example&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; { createServerClient, parseCookieHeader } &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;from&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;@supabase/ssr&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span 
style=&quot;color: #79B8FF&quot;&gt;supabaseUrl&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;meta&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.env.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;SUPABASE_URL&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;supabaseAnonKey&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;meta&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.env.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;SUPABASE_ANON_KEY&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;export&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;GET&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;:&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: 
#B392F0&quot;&gt;APIRoute&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;async&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; ({ &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;url&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;request&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;cookies&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;redirect&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; }) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&gt;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// if there&apos;s no code then redirect to login&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;code&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; url.searchParams.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;get&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;code&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span 
class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;if&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; (&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;!&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;code) {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;redirect&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;/login?error=no-code&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// there is a code, try to log in&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;supabase&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;createServerClient&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(supabaseUrl, supabaseAnonKey, {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        cookies: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: 
#E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;getAll&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;() {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;return&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;parseCookieHeader&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(request.headers.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;get&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;cookie&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;||&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;setAll&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;cookiesToSet&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;) {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                cookiesToSet.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;forEach&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(({ &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;name&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: 
#FFAB70&quot;&gt;value&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;options&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; }) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&gt;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                    cookies.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;set&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(name, value, options)&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                );&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    });&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; { &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;data&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;error&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; } &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; supabase.auth.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;exchangeCodeForSession&lt;/span&gt;&lt;span style=&quot;color: 
#E1E4E8&quot;&gt;(code);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;if&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; (error &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;||&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;!&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;data?.session) {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;return&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;redirect&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;/login?error=unauthorized&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;return&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;redirect&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;/&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;With the callback set up, you now need to let Supabase know how to call it. 
In general, there are three steps to the OAuth login process for Supabase:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Your application links off to the OAuth provider’s authentication service.&lt;/li&gt;\n&lt;li&gt;The OAuth provider redirects to the Supabase auth service.&lt;/li&gt;\n&lt;li&gt;Supabase redirects to your application callback.&lt;/li&gt;\n&lt;/ol&gt;\n&lt;p&gt;To ensure that happens locally, you need to edit the &lt;code&gt;config.toml&lt;/code&gt; file. First, locate the &lt;code&gt;[auth]&lt;/code&gt; section at the top and make sure the value for &lt;code&gt;site_url&lt;/code&gt; is your local application URL and &lt;code&gt;additional_redirect_urls&lt;/code&gt; contains the full URL for the callback endpoint. Here’s an example:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;[&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;auth&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;enabled = &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;true&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;# The base URL of your website. 
Used as an allow-list for redirects and for constructing URLs used&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;# in emails.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;site_url = &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;http://localhost:4321&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;# A list of *exact* URLs that auth providers are permitted to redirect to post authentication.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;additional_redirect_urls = [&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;http://localhost:4321/auth/callback&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;(Without specifying &lt;code&gt;additional_redirect_urls&lt;/code&gt;, you’ll always have to redirect back to the application homepage.)&lt;/p&gt;\n&lt;p&gt;Next, create an entry for your OAuth provider including the client ID, client secret, and redirect URI. 
For GitHub, it would look like this:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;[&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;auth&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;external&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;github&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;enabled = &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;true&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;client_id = &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;env(GITHUB_CLIENT_ID)&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;secret = &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;env(GITHUB_CLIENT_SECRET)&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;redirect_uri = &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;http://localhost:54321/auth/v1/callback&quot;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;The &lt;code&gt;redirect_uri&lt;/code&gt; here needs to be the local Supabase URL for OAuth callbacks. 
The default port is 54321, so double-check the port.&lt;/p&gt;\n&lt;p&gt;After editing &lt;code&gt;config.toml&lt;/code&gt;, you need to restart Supabase to reload the configuration:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;supabase&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;stop&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;supabase&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;start&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Next, you need to generate the OAuth URL to use in your application. 
Here’s an example generating a URL to log in with GitHub:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; { createServerClient, parseCookieHeader } &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;from&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;@supabase/ssr&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;supabaseUrl&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;meta&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.env.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;SUPABASE_URL&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;supabaseAnonKey&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: 
#F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;meta&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.env.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;SUPABASE_ANON_KEY&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;export&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;GET&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;:&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;APIRoute&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;async&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; ({ &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;url&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;request&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;cookies&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;redirect&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; }) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&gt;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// if there&apos;s no code then redirect to login&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span 
style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;code&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; url.searchParams.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;get&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;code&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;if&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; (&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;!&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;code) {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;return&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;redirect&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;/login?error=no-code&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// there is a code, try to log in&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: 
#79B8FF&quot;&gt;supabase&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;createServerClient&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(supabaseUrl, supabaseAnonKey, {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        cookies: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;getAll&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;() {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;return&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;parseCookieHeader&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(request.headers.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;get&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;cookie&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;||&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;setAll&lt;/span&gt;&lt;span style=&quot;color: 
#E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;cookiesToSet&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;) {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                cookiesToSet.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;forEach&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(({ &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;name&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;value&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;options&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; }) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&gt;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                    cookies.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;set&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(name, value, options)&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;                );&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    });&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        &lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; { &lt;/span&gt;&lt;span style=&quot;color: 
#79B8FF&quot;&gt;data&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;error&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; } &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;await&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; supabase.auth.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;signInWithOAuth&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;({&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        provider: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;github&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        options: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// must be listed as additional_redirect_urls&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            redirectTo: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;`http://localhost:4321/auth/callback`&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    });&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;if&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; (error &lt;/span&gt;&lt;span style=&quot;color: 
#F97583&quot;&gt;||&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;!&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;data?.url) {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// handle error&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;return&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;redirect&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(data.url);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Everything is now wired up for the correct end-to-end flow.&lt;/p&gt;","tags":["JavaScript","Supabase","OAuth","Login"],"date_published":"2025-08-27T00:00:00.000Z","date_updated":"2025-08-27T00:00:00.000Z"},{"id":"https://humanwhocodes.com/snippets/2025/08/run-multiple-cloudflare-workers-locally/","url":"https://humanwhocodes.com/snippets/2025/08/run-multiple-cloudflare-workers-locally/","title":"Run multiple Cloudflare workers locally","author":{"name":"Nicholas C. Zakas"},"summary":"Wrangler is made primarily to run one worker at a time. You can also use it to run all of your workers at the same time.","content_text":"\nWhen you're testing an application locally, you ideally want an environment that\nmimics production as much as possible. Especially when you're using multiple\n[Cloudflare workers](https://workers.cloudflare.com/), you'll want to make sure\nyou can run them together for end-to-end testing during development. 
Without\nthis ability, you either need to run everything in the cloud (problematic when\nyour internet connection is down or slow) or test each worker individually.\n\n[Wrangler](https://developers.cloudflare.com/workers/wrangler/) is capable of\nrunning multiple workers at the same time, like this:\n\n```shell\nnpx wrangler dev -c worker1/wrangler.jsonc -c worker2/wrangler.jsonc\n```\n\nThis is important because workers running together in one dev session share\nresources, including queues, and can communicate with one another. That's key\nfor creating a production-like environment locally.\n\nHowever, Wrangler only starts _one server_ for all workers and there's no way to\nroute between them. After some digging, I found this note in the\n[docs](https://developers.cloudflare.com/workers/wrangler/commands/#dev):\n\n> You can provide multiple configuration files to run multiple Workers in one\n> dev session like this:\n> `wrangler dev -c ./wrangler.toml -c ../other-worker/wrangler.toml`. The first\n> config will be treated as the _primary Worker_, which will be exposed over\n> HTTP. The remaining config files will only be accessible via a service binding\n> from the primary Worker.\n\nThat means the first worker listed must both declare the other workers as service\nbindings and route requests to the correct worker.\n\n## The `http` worker\n\nYou can create a simple worker, which I call `http`, to handle this for you. 
The\nfirst step is to list your other services in the `wrangler.jsonc` file:\n\n```jsonc\n{\n    \"$schema\": \"node_modules/wrangler/config-schema.json\",\n    \"name\": \"http\",\n    \"main\": \"src/index.ts\",\n    \"compatibility_date\": \"2025-08-22\",\n    \"services\": [\n        {\n            \"binding\": \"worker1\",\n            \"service\": \"worker1\"\n        },\n        {\n            \"binding\": \"worker2\",\n            \"service\": \"worker2\"\n        }\n    ]\n}\n```\n\nNext, you'll use [Hono](https://hono.dev) to route requests based on the request\npath. To make things easy, the path will be the worker binding name, which means\nyou'll only need to update the `wrangler.jsonc` file when you want to add or\nremove workers. Here's the code:\n\n```typescript\nimport { Hono } from \"hono\";\n\ninterface Bindings {\n    [binding: string]: Fetcher;\n}\n\nconst app = new Hono<{ Bindings: Bindings }>();\n\napp.all(\"/:worker/*\", (c) => {\n    const worker = c.req.param(\"worker\");\n    const binding = c.env[worker];\n\n    if (!binding || typeof binding.fetch !== \"function\") {\n        return c.text(`Worker binding '${worker}' not found`, 404);\n    }\n\n    // Rewrite the URL to remove the worker prefix\n    const url = URL.parse(c.req.url) as URL;\n    url.pathname = url.pathname.slice(`/${worker}`.length) || \"/\";\n\n    // create a new request object to avoid issues with reused requests\n    const request = new Request(url, c.req.raw.clone());\n\n    return binding.fetch(request);\n});\n\nexport default app;\n```\n\nNote that you need to pass `c.req.raw` to the worker rather than `c.req`, which\nis a Hono-specific object. In this way, all of the request information is passed\ndirectly to the worker. 
(You can, optionally, modify it as necessary.)\n\nNow, make sure that the `http` worker is the first one passed to Wrangler:\n\n```shell\nnpx wrangler dev -c http/wrangler.jsonc -c worker1/wrangler.jsonc -c worker2/wrangler.jsonc\n```\n\nThen you can test out your workers locally:\n\n```shell\n# call worker1\ncurl -i -X POST http://localhost:8787/worker1 \\\n    -H \"Content-Type: application/json\" \\\n    -d '{\"message\":\"Hello worker1!\"}'\n\n# call worker2\ncurl -i -X POST http://localhost:8787/worker2 \\\n    -H \"Content-Type: application/json\" \\\n    -d '{\"message\":\"Hello worker2!\"}'\n```\n\nEnjoy your local multi-worker development environment!\n\n**Updated (2025-08-25):** Cleaned up TypeScript code to match best practices for\nHono.\n\n**Updated (2025-09-08):** Enhanced the Hono app so that it handles URL rewriting\nand all HTTP verbs.\n","content_html":"&lt;p&gt;When you’re testing an application locally, you ideally want an environment that\nmimics production as much as possible. Especially when you’re using multiple\n&lt;a href=&quot;https://workers.cloudflare.com/&quot;&gt;Cloudflare workers&lt;/a&gt;, you’ll want to make sure\nyou can run them together for end-to-end testing during development. 
Without\nthis ability, you either need to run everything in the cloud (problematic when\nyour internet connection is down or slow) or test each worker individually.&lt;/p&gt;\n&lt;p&gt;&lt;a href=&quot;https://developers.cloudflare.com/workers/wrangler/&quot;&gt;Wrangler&lt;/a&gt; is capable of\nrunning multiple workers at the same time, like this:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;wrangler&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;dev&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-c&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;worker1/wrangler.jsonc&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-c&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;worker2/wrangler.jsonc&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;This is important because workers running together in one dev session share\nresources, including queues, and can communicate with one another. That’s key\nfor creating a production-like environment locally.&lt;/p&gt;\n&lt;p&gt;However, Wrangler only starts &lt;em&gt;one server&lt;/em&gt; for all workers and there’s no way to\nroute between them. 
After some digging, I found this note in the\n&lt;a href=&quot;https://developers.cloudflare.com/workers/wrangler/commands/#dev&quot;&gt;docs&lt;/a&gt;:&lt;/p&gt;\n&lt;blockquote&gt;\n&lt;p&gt;You can provide multiple configuration files to run multiple Workers in one\ndev session like this:\n&lt;code&gt;wrangler dev -c ./wrangler.toml -c ../other-worker/wrangler.toml&lt;/code&gt;. The first\nconfig will be treated as the &lt;em&gt;primary Worker&lt;/em&gt;, which will be exposed over\nHTTP. The remaining config files will only be accessible via a service binding\nfrom the primary Worker.&lt;/p&gt;\n&lt;/blockquote&gt;\n&lt;p&gt;That means the first worker listed must both declare the other workers as service\nbindings and route requests to the correct worker.&lt;/p&gt;\n&lt;h2 id=&quot;the-http-worker&quot;&gt;The &lt;code&gt;http&lt;/code&gt; worker&lt;/h2&gt;\n&lt;p&gt;You can create a simple worker, which I call &lt;code&gt;http&lt;/code&gt;, to handle this for you. The\nfirst step is to list your other services in the &lt;code&gt;wrangler.jsonc&lt;/code&gt; file:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;{&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;$schema&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;node_modules/wrangler/config-schema.json&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;name&quot;&lt;/span&gt;&lt;span style=&quot;color: 
#E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;http&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;main&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;src/index.ts&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;compatibility_date&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;2025-08-22&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;services&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;binding&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;worker1&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;service&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: 
#9ECBFF&quot;&gt;&quot;worker1&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        },&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;binding&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;worker2&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;            &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;service&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;worker2&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    ]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Next, you’ll use &lt;a href=&quot;https://hono.dev&quot;&gt;Hono&lt;/a&gt; to route requests based on the request\npath. To make things easy, the path will be the worker binding name, which means\nyou’ll only need to update the &lt;code&gt;wrangler.jsonc&lt;/code&gt; file when you want to add or\nremove workers. 
Here’s the code:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;import&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; { Hono } &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;from&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;hono&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;interface&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;Bindings&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    [&lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;binding&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;:&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;string&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;]&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;:&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;Fetcher&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: 
#79B8FF&quot;&gt;app&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;new&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;Hono&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;&amp;#x3C;{ &lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;Bindings&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;:&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;Bindings&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; }&gt;();&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;app.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;all&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;/:worker/*&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, (&lt;/span&gt;&lt;span style=&quot;color: #FFAB70&quot;&gt;c&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&gt;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;worker&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; c.req.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;param&lt;/span&gt;&lt;span 
style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;worker&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;binding&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; c.env[worker];&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;if&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; (&lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;!&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;binding &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;||&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;typeof&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; binding.fetch &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;!==&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;function&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;) {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;return&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; c.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;text&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: 
#9ECBFF&quot;&gt;`Worker binding &apos;${&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;worker&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;}&apos; not found`&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;404&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// Rewrite the URL to remove the worker prefix&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;url&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;URL&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;parse&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(c.req.url) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;as&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;URL&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    url.pathname &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; url.pathname.&lt;/span&gt;&lt;span 
style=&quot;color: #B392F0&quot;&gt;slice&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;`/${&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;worker&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;}`&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;length&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;) &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;||&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;/&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;// create a new request object to avoid issues with reused requests&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;const&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;request&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;=&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;new&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;Request&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(url, c.req.raw.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;clone&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;());&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span 
style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;return&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; binding.&lt;/span&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;fetch&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;(request);&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;});&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #F97583&quot;&gt;export&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #F97583&quot;&gt;default&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; app;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Note that you need to pass &lt;code&gt;c.req.raw&lt;/code&gt; to the worker rather than &lt;code&gt;c.req&lt;/code&gt;, which\nis a Hono-specific object. In this way, all of the request information is passed\ndirectly to the worker. 
(You can, optionally, modify it as necessary.)&lt;/p&gt;\n&lt;p&gt;Now, make sure that the &lt;code&gt;http&lt;/code&gt; worker is the first one passed to Wrangler:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;wrangler&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;dev&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-c&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;http/wrangler.jsonc&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-c&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;worker1/wrangler.jsonc&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-c&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;worker2/wrangler.jsonc&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Then you can test out your workers locally:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;# call worker1&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;curl&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; 
&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-i&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-X&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;POST&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;http://localhost:8787/worker1&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;\\&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-H&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;Content-Type: application/json&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;\\&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-d&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&apos;{&quot;message&quot;:&quot;Hello worker1!&quot;}&apos;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;# call worker2&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;curl&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-i&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-X&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: 
#9ECBFF&quot;&gt;POST&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;http://localhost:8787/worker2&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;\\&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-H&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;Content-Type: application/json&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;\\&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-d&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&apos;{&quot;message&quot;:&quot;Hello worker2!&quot;}&apos;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Enjoy your local multi-worker development environment!&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Updated (2025-08-25):&lt;/strong&gt; Cleaned up TypeScript code to match best practices for\nHono.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Updated (2025-09-08):&lt;/strong&gt; Enhanced the Hono app so that it handles URL rewriting\nand all HTTP verbs.&lt;/p&gt;","tags":["JavaScript","Cloudflare","Edge Workers"],"date_published":"2025-08-22T00:00:00.000Z","date_updated":"2025-08-22T00:00:00.000Z"},{"id":"https://humanwhocodes.com/blog/2025/06/persona-based-approach-ai-assisted-programming/","url":"https://humanwhocodes.com/blog/2025/06/persona-based-approach-ai-assisted-programming/","title":"A persona-based approach to AI-assisted software development","author":{"name":"Nicholas C. 
Zakas"},"summary":"Discover how breaking AI assistance into specialized personas can help you tackle complex software development tasks more efficiently and with less frustration.","content_text":"\nI’ve spent most of 2025 experimenting with AI-assisted programming, with mixed results. I tried different models, prompt styles, and editors to understand where AI adds value and where it becomes a distraction. Eventually, I developed a process for using AI to make non-trivial changes.\n\nNon-trivial changes require project knowledge and multiple steps to complete. I wasted too much time trying to rein in models that veered off track and needed a more productive approach. Eventually, I started treating AI as a team of personas rather than a single helper. This let me break the work into chunks and assign tasks to different models based on their strengths.\n\n## The development process\n\nI thought through how I typically implement a feature. The process usually looks like this:\n\n* Gather the requirements\n* Design the solution\n* Implement the solution\n* Test and debug the solution\n\nI assigned a persona to each task:\n\n* The product manager\n* The software architect\n* The implementer\n* The problem solver\n\nAt different steps I felt like I needed some quality control, so I added a couple other personas:\n\n* The tech spec reviewer\n* The implementation reviewer\n\nEach persona plays a role in implementing a feature, and knowing when to use each one has made me more productive. Here’s how I think about them.\n\n## The product manager\n\nThe product manager persona gathers requirements and creates a product requirements document (PRD) focused on functional rather than technical needs. Asking the model to generate user stories helps keep it away from implementation details.\n\n**Prompt:**\n\n> You are a product manager for this application. Your task is to turn user requirements into product requirements documents (PRDs) that include user stories for new features. 
Add acceptance criteria. If you don’t have enough information, ask me questions about the feature. Insert the design into a Markdown file in the `docs` directory of the repository. The file name should be in kebab-case and end with the `-prd.md` suffix, for example `docs/saves-data-prd.md`. The file should be formatted in Markdown and include headings and bullet points.\n\nThe application needs an authentication flow to support signed-in functionality. Users should be able to sign up from a link on the login page if they don’t have an account. When signed out, the UI should show a login link; when signed in, it should show the user’s profile picture. The “Recent Saves” section on the homepage and the “Saves” page should be accessible only to logged-in users.\n\n**My choice:** GPT-4.1 \n\n**Rationale:** GPT-4.1 is focused and less likely to go off track. It lacks the deep technical knowledge of Claude or Gemini but does not need it for this role. It’s usually available in standard AI editors, which helps keep costs reasonable. GPT-4.1 isn't considered a premium model so you pay less to use it in most IDEs.\n\n## The architect\n\nThe architect persona designs the technical implementation of the feature. Using the PRD, it creates step-by-step instructions for the change. This persona requires deep technical knowledge and a strong understanding of how systems are built from smaller parts. It does not write code but describes the design to be implemented. You must guide this persona on technical requirements, such as when to use client-side versus server-side logic.\n\n**Prompt:**\n\n> You are a software architect for this application. Your product manager has provided the attached PRD outlining the functional requirements for a new feature. Your task is to design the implementation and ensure all acceptance criteria are met. Scan the current codebase to find integration points. Create a step-by-step guide detailing how to implement your design. 
Include all details an LLM needs to implement this feature without reading the PRD. DO NOT INCLUDE SOURCE CODE. If anything is unclear, ask me questions about the PRD or implementation. If you need to make assumptions, state them clearly. Insert the design into a Markdown file in the `docs` directory of the repository. The file should be named the same as the PRD without \"prd\" in the name and with \"techspec\" instead. For example, if the PRD is `docs/saves-data-prd.md`, the file should be `docs/saves-data-techspec.md`. The file should be formatted in Markdown and include headings and bullet points.\n\n**My choice:** Gemini 3 Pro, GPT 5.2 (fallback)\n\n**Rationale:** I previously used Gemini 2.5 Pro as the architect but I've switched to GPT-5. I've found GPT-5 to be just as technically knowledgeable, and it outputs technical specifications with much more depth and specificity. The result is a concrete plan that any LLM can easily follow. GPT-5 is a premium model in VS Code, and it's well worth the extra cost. If you're using an editor where GPT-5 is not available, then Gemini 2.5 Pro is a good second choice.\n\n## The implementer\n\nThe implementer persona carries out the design based on the architect’s technical specification. It needs to follow instructions and not make too many design decisions on its own.\n\n**Prompt:**\n\n> You are a software engineer tasked with implementing the feature described in the attached file. If anything is unclear, ask me questions before starting. You must complete all steps in the document. After finishing, verify that all steps are complete; if not, return and implement the missing steps. Repeat this process until all steps are done.\n\n**My choice:** Claude Haiku 4.5, Gemini 3 Flash (fallback)\n\n**Rationale:** Haiku is a great little model that is fast and effective. It stays on track better than earlier Sonnet models and doesn't get lost when all details aren't provided. 
On VS Code, it has a 0.33x multiplier, so you can use it without worrying about running out of premium requests. The only downside is it frequently skips reading `AGENTS.md`, but you can make up for that by just asking it to do so explicitly first. Gemini 3 Flash, also a 0.33x multiplier, is my next favorite as it's also fast and effective. It seems like its context window gets filled up faster than Haiku, though, which can be a problem for longer implementation plans.\n\n## The problem solver\n\nIdeally, the feature is now implemented. More often, though, something still doesn’t work as expected. That’s when you need the problem solver persona. This persona investigates issues and finds how to fix them. Not all LLMs excel at this, so choosing the right one matters.\n\n**Prompt:**\n\n> The homepage isn’t being updated when I log in. It should show the profile photo in the header and a log out button. Fix it.\n\n**My choice:** Gemini 3 Pro\n\n**Rationale:** I've found Gemini 3 Pro to work the best for problems where I have no idea what is going on. It tends to find targeted solutions rather quickly when compared to models like Claude Sonnet 4.5 and GPT 5.2, both of which tend to spend more time and end up with larger edits to address problems. Gemini 3 Pro always just seems to know what to do and what not to do, and overall saves me a lot of time.\n\n## The tech spec reviewer\n\nThe tech spec reviewer persona is quality assurance for the technical specification. While I don't use this persona all the time, I use it whenever the specification is more complex than usual. In that case, I want another review to make sure nothing is missing and there aren't any non-obvious technical issues such as race conditions.\n\n**Prompt:**\n\n> You are a software architect. Critique this specification, paying particular attention to scalability and performance issues. Identify edge cases that are not adequately addressed and possible race conditions. 
Compare the specification against the PRD to ensure that acceptance criteria are met. \n\n**My choice:** Gemini 3 Pro\n\n**Rationale:** Gemini really shines in this type of role. Its deep technical knowledge is coupled with an ability to describe exact situations where a technical design will fail or introduce problems. Gemini debates approaches using scenarios and explains its thinking well, so you can identify any logic errors or misalignment. Iterating on a specification with Gemini has been a lot of fun for me, personally, as it almost always seems to be correct (provided I gave it the correct requirements). Plus, it is the least sycophantic model I've used, standing its ground on points it knows to be correct.\n\n## The implementation reviewer\n\nThe implementation reviewer persona is quality assurance for the code generated from the technical specification. This is another persona I don't use all the time but lean on for more complex implementations. \n\n**Prompt:**\n\n> You are a software architect. Review the attached specification and then scan the codebase to validate that the specification has been implemented correctly. If there are any problems, list them out in descending order of severity with a proposed fix. Do not implement the fixes, just describe them.\n\n**My choice:** Gemini 3 Pro\n\n**Rationale:** Once again, Gemini is perfect for this role. It's capable of deeply understanding the technical specification and matching it against the existing code. In a recent complex feature, I was surprised at how many problems Gemini found in the implementation even after I had reviewed it. \n\n## Conclusion\n\nOver the past months, I’ve found that treating AI not as a single assistant but as a team of specialized personas dramatically improves my productivity when handling complex changes. 
By clearly defining roles such as product manager, architect, implementer, problem solver, tech spec reviewer, and implementation reviewer, and choosing the right model for each, I can guide the process efficiently from requirements to debugging. This approach minimizes distractions and leverages each model’s strengths, making AI-assisted programming a practical and powerful part of my workflow. If you’re experimenting with AI in development, I encourage you to try this persona-driven process and see how it transforms your projects.\n\n**Update (2025-08-14):** Switched architect and problem solver personas to use GPT-5. Added the tech spec reviewer and implementation reviewer personas.\n\n**Update (2026-01-28):** Switched architect role, problem solver, tech spec reviewer, and implementation reviewer to Gemini 3 Pro. Switched implementer role to Claude 4.5 Haiku.\n","content_html":"&lt;p&gt;I’ve spent most of 2025 experimenting with AI-assisted programming, with mixed results. I tried different models, prompt styles, and editors to understand where AI adds value and where it becomes a distraction. Eventually, I developed a process for using AI to make non-trivial changes.&lt;/p&gt;\n&lt;p&gt;Non-trivial changes require project knowledge and multiple steps to complete. I wasted too much time trying to rein in models that veered off track and needed a more productive approach. Eventually, I started treating AI as a team of personas rather than a single helper. This let me break the work into chunks and assign tasks to different models based on their strengths.&lt;/p&gt;\n&lt;h2 id=&quot;the-development-process&quot;&gt;The development process&lt;/h2&gt;\n&lt;p&gt;I thought through how I typically implement a feature. 
The process usually looks like this:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;Gather the requirements&lt;/li&gt;\n&lt;li&gt;Design the solution&lt;/li&gt;\n&lt;li&gt;Implement the solution&lt;/li&gt;\n&lt;li&gt;Test and debug the solution&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;I assigned a persona to each task:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;The product manager&lt;/li&gt;\n&lt;li&gt;The software architect&lt;/li&gt;\n&lt;li&gt;The implementer&lt;/li&gt;\n&lt;li&gt;The problem solver&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;At different steps I felt like I needed some quality control, so I added a couple other personas:&lt;/p&gt;\n&lt;ul&gt;\n&lt;li&gt;The tech spec reviewer&lt;/li&gt;\n&lt;li&gt;The implementation reviewer&lt;/li&gt;\n&lt;/ul&gt;\n&lt;p&gt;Each persona plays a role in implementing a feature, and knowing when to use each one has made me more productive. Here’s how I think about them.&lt;/p&gt;\n&lt;h2 id=&quot;the-product-manager&quot;&gt;The product manager&lt;/h2&gt;\n&lt;p&gt;The product manager persona gathers requirements and creates a product requirements document (PRD) focused on functional rather than technical needs. Asking the model to generate user stories helps keep it away from implementation details.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;\n&lt;blockquote&gt;\n&lt;p&gt;You are a product manager for this application. Your task is to turn user requirements into product requirements documents (PRDs) that include user stories for new features. Add acceptance criteria. If you don’t have enough information, ask me questions about the feature. Insert the design into a Markdown file in the &lt;code&gt;docs&lt;/code&gt; directory of the repository. The file name should be in kebab-case and end with the &lt;code&gt;-prd.md&lt;/code&gt; suffix, for example &lt;code&gt;docs/saves-data-prd.md&lt;/code&gt;. 
The file should be formatted in Markdown and include headings and bullet points.&lt;/p&gt;\n&lt;/blockquote&gt;\n&lt;p&gt;The application needs an authentication flow to support signed-in functionality. Users should be able to sign up from a link on the login page if they don’t have an account. When signed out, the UI should show a login link; when signed in, it should show the user’s profile picture. The “Recent Saves” section on the homepage and the “Saves” page should be accessible only to logged-in users.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;My choice:&lt;/strong&gt; GPT-4.1&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Rationale:&lt;/strong&gt; GPT-4.1 is focused and less likely to go off track. It lacks the deep technical knowledge of Claude or Gemini but does not need it for this role. It’s usually available in standard AI editors, which helps keep costs reasonable. GPT 4.1 isn’t considered a premium model so you pay less to use it in most IDEs.&lt;/p&gt;\n&lt;h2 id=&quot;the-architect&quot;&gt;The architect&lt;/h2&gt;\n&lt;p&gt;The architect persona designs the technical implementation of the feature. Using the PRD, it creates step-by-step instructions for the change. This persona requires deep technical knowledge and a strong understanding of how systems are built from smaller parts. It does not write code but describes the design to be implemented. You must guide this persona on technical requirements, such as when to use client-side versus server-side logic.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;\n&lt;blockquote&gt;\n&lt;p&gt;You are a software architect for this application. Your product manager has provided the attached PRD outlining the functional requirements for a new feature. Your task is to design the implementation and ensure all acceptance criteria are met. Scan the current codebase to find integration points. Create a step-by-step guide detailing how to implement your design. 
Include all details an LLM needs to implement this feature without reading the PRD. DO NOT INCLUDE SOURCE CODE. If anything is unclear, ask me questions about the PRD or implementation. If you need to make assumptions, state them clearly. Insert the design into a Markdown file in the &lt;code&gt;docs&lt;/code&gt; directory of the repository. The file should be named the same as the PRD without “prd” in the name and with “techspec” instead. For example, if the PRD is &lt;code&gt;docs/saves-data-prd.md&lt;/code&gt;, the file should be &lt;code&gt;docs/saves-data-techspec.md&lt;/code&gt;. The file should be formatted in Markdown and include headings and bullet points.&lt;/p&gt;\n&lt;/blockquote&gt;\n&lt;p&gt;&lt;strong&gt;My choice:&lt;/strong&gt; Gemini 3 Pro, GPT 5.2 (fallback)&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Rationale:&lt;/strong&gt; I previously used Gemini 2.5 Pro as the architect but I’ve switched to GPT-5. I’ve found GPT-5 to be just as technically knowledgeable, and it outputs technical specifications with much more depth and specificity. The result is a concrete plan that any LLM can easily follow. GPT-5 is a premium model in VS Code, and it’s well worth the extra cost. If you’re using an editor where GPT-5 is not available, then Gemini 2.5 Pro is a good second choice.&lt;/p&gt;\n&lt;h2 id=&quot;the-implementer&quot;&gt;The implementer&lt;/h2&gt;\n&lt;p&gt;The implementer persona carries out the design based on the architect’s technical specification. It needs to follow instructions and not make too many design decisions on its own.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;\n&lt;blockquote&gt;\n&lt;p&gt;You are a software engineer tasked with implementing the feature described in the attached file. If anything is unclear, ask me questions before starting. You must complete all steps in the document. After finishing, verify that all steps are complete; if not, return and implement the missing steps. 
Repeat this process until all steps are done.&lt;/p&gt;\n&lt;/blockquote&gt;\n&lt;p&gt;&lt;strong&gt;My choice:&lt;/strong&gt; Claude Haiku 4.5, Gemini 3 Flash (fallback)&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Rationale:&lt;/strong&gt; Haiku is a great little model that is fast and effective. It stays on track better than earlier Sonnet models and doesn’t get lost when all details aren’t provided. On VS Code, it has a 0.33x multiplier, so you can use it without worrying about running out of premium requests. The only downside is it frequently skips reading &lt;code&gt;AGENTS.md&lt;/code&gt;, but you can make up for that by just asking it to do so explicitly first. Gemini 3 Flash, also a 0.33x multiplier, is my next favorite as it’s also fast and effective. It seems like its context window gets filled up faster than Haiku, though, which can be a problem for longer implementation plans.&lt;/p&gt;\n&lt;h2 id=&quot;the-problem-solver&quot;&gt;The problem solver&lt;/h2&gt;\n&lt;p&gt;Ideally, the feature is now implemented. More often, though, something still doesn’t work as expected. That’s when you need the problem solver persona. This persona investigates issues and finds how to fix them. Not all LLMs excel at this, so choosing the right one matters.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;\n&lt;blockquote&gt;\n&lt;p&gt;The homepage isn’t being updated when I log in. It should show the profile photo in the header and a log out button. Fix it.&lt;/p&gt;\n&lt;/blockquote&gt;\n&lt;p&gt;&lt;strong&gt;My choice:&lt;/strong&gt; Gemini 3 Pro&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Rationale:&lt;/strong&gt; I’ve found Gemini 3 Pro to work the best for problems where I have no idea what is going on. It tends to find targeted solutions rather quickly when compared to models like Claude Sonnet 4.5 and GPT 5.2, both of which tend to spend more time and end up with larger edits to address problems. 
Gemini 3 Pro always just seems to know what to do and what not to do, and overall saves me a lot of time.&lt;/p&gt;\n&lt;h2 id=&quot;the-tech-spec-reviewer&quot;&gt;The tech spec reviewer&lt;/h2&gt;\n&lt;p&gt;The tech spec reviewer persona is quality assurance for the technical specification. While I don’t use this persona all the time, I use it whenever the specification is more complex than usual. In that case, I want another review to make sure nothing is missing and there aren’t any non-obvious technical issues such as race conditions.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;\n&lt;blockquote&gt;\n&lt;p&gt;You are a software architect. Critique this specification, paying particular attention to scalability and performance issues. Identify edge cases that are not adequately addressed and possible race conditions. Compare the specification against the PRD to ensure that acceptance criteria are met.&lt;/p&gt;\n&lt;/blockquote&gt;\n&lt;p&gt;&lt;strong&gt;My choice:&lt;/strong&gt; Gemini 3 Pro&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Rationale:&lt;/strong&gt; Gemini really shines in this type of role. Its deep technical knowledge is coupled with an ability to describe exact situations where a technical design will fail or introduce problems. Gemini debates approaches using scenarios and explains its thinking well, so you can identify any logic errors or misalignment. Iterating on a specification with Gemini has been a lot of fun for me, personally, as it almost always seems to be correct (provided I gave it the correct requirements). Plus, it is the least sycophantic model I’ve used, standing its ground on points it knows to be correct.&lt;/p&gt;\n&lt;h2 id=&quot;the-implementation-reviewer&quot;&gt;The implementation reviewer&lt;/h2&gt;\n&lt;p&gt;The implementation reviewer persona is quality assurance for the code generated from the technical specification. 
This is another persona I don’t use all the time but lean on for more complex implementations.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Prompt:&lt;/strong&gt;&lt;/p&gt;\n&lt;blockquote&gt;\n&lt;p&gt;You are a software architect. Review the attached specification and then scan the codebase to validate that the specification has been implemented correctly. If there are any problems, list them out in descending order of severity with a proposed fix. Do not implement the fixes, just describe them.&lt;/p&gt;\n&lt;/blockquote&gt;\n&lt;p&gt;&lt;strong&gt;My choice:&lt;/strong&gt; Gemini 3 Pro&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Rationale:&lt;/strong&gt; Once again, Gemini is perfect for this role. It’s capable of deeply understanding the technical specification and matching it against the existing code. In a recent complex feature, I was surprised at how many problems Gemini found in the implementation even after I had reviewed it.&lt;/p&gt;\n&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;\n&lt;p&gt;Over the past months, I’ve found that treating AI not as a single assistant but as a team of specialized personas dramatically improves my productivity when handling complex changes. By clearly defining roles such as product manager, architect, implementer, problem solver, tech spec reviewer, and implementation reviewer, and choosing the right model for each, I can guide the process efficiently from requirements to debugging. This approach minimizes distractions and leverages each model’s strengths, making AI-assisted programming a practical and powerful part of my workflow. If you’re experimenting with AI in development, I encourage you to try this persona-driven process and see how it transforms your projects.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Update (2025-08-14):&lt;/strong&gt; Switched architect and problem solver personas to use GPT-5. 
Added the tech spec reviewer and implementation reviewer personas.&lt;/p&gt;\n&lt;p&gt;&lt;strong&gt;Update (2026-01-28):&lt;/strong&gt; Switched architect role, problem solver, tech spec reviewer, and implementation reviewer to Gemini 3 Pro. Switched implementer role to Claude Haiku 4.5.&lt;/p&gt;","tags":["AI","Claude","GPT"],"date_published":"2025-06-11T00:00:00.000Z","date_updated":"2026-01-28T00:00:00.000Z"},{"id":"https://humanwhocodes.com/blog/2025/04/post-social-media-claude-crosspost/","url":"https://humanwhocodes.com/blog/2025/04/post-social-media-claude-crosspost/","title":"Post to social media using Claude Desktop and Crosspost","author":{"name":"Nicholas C. Zakas"},"summary":"Crosspost is a small utility I wrote to post across social media networks. It now includes an MCP server for use with AI agents.","content_text":"\nWe all spend way too much time on social media these days. If you're like me, you also spend time switching between various social media platforms and, as a result, end up copy-pasting the same content in multiple browser tabs. In order to address this problem, I created Crosspost[^1], an npm package that allows you to post across multiple social media platforms using a command line interface. 
Here's an overview of how it works:\n\n```\nUsage: crosspost [options] [\"Message to post.\"]\n--twitter, -t   Post to Twitter.\n--mastodon, -m  Post to Mastodon.\n--bluesky, -b   Post to Bluesky.\n--linkedin, -l  Post to LinkedIn.\n--discord, -d   Post to Discord via bot.\n--discord-webhook  Post to Discord via webhook.\n--devto         Post to dev.to.\n--mcp           Start MCP server.\n--file          The file to read the message from.\n--image         The image file to upload with the message.\n--image-alt     Alt text for the image (defaults: filename).\n--help, -h      Show this message.\n```\n\nYou can use the CLI via `npx`. Here are some examples:\n\n```shell\n# Post a message to multiple services\nnpx @humanwhocodes/crosspost -t -m -b \"Check out this beach!\"\n\n# Post a message with an image to multiple services\nnpx @humanwhocodes/crosspost -t -m -b --image ./photo.jpg --image-alt \"A beautiful sunset\" \"Check out this beach!\"\n```\n\nThese examples post the message `\"Check out this beach!\"` to Twitter, Mastodon, and Bluesky with an attached image. You can choose to post to any combination by specifying the appropriate command line options.\n\nEach social media platform is represented by a strategy inside of Crosspost, and each strategy requires specific environment variables to work correctly. (Please refer to the Crosspost README[^1] for details on the environment variables.)\n\n## Using with Claude Desktop\n\nInitially, Crosspost was designed for use in continuous integration systems to help announce releases in my open source projects. Eventually, though, I started thinking about how to make Crosspost easier for me to use to quickly post something on social media manually. I wanted something where the environment variables were already baked in and I didn't have to worry about setting them up each time. 
At first I thought I'd bundle a web application in the package but instead I grew fascinated with the buzz around MCP servers and set out to create one for Crosspost.\n\nYou can start the Crosspost MCP server by using the `--mcp` command. Most of the command line arguments work in the MCP server, aside from `--file`, `--image`, and `--image-alt`. \n\nTo set up Claude Desktop to use Crosspost:\n\n1. Click on File -> Settings.\n1. Select \"Developer\".\n1. Click \"Edit Config\".\n\nThis will create a `claude_desktop_config.json` file. Open it and add the following:\n\n```json\n{\n  \"mcpServers\": {\n    \"crosspost\": {\n      \"command\": \"npx\",\n      \"args\": [\"@humanwhocodes/crosspost\", \"-m\", \"-l\", \"--mcp\"],\n      \"env\": {\n        \"CROSSPOST_DOTENV\": \"/path/to/.env\"\n      }\n    }\n  }\n}\n```\n\nClaude Desktop only has access to the strategies you've enabled in Crosspost. In this example, the Mastodon and LinkedIn strategies are enabled so those are the only ones Claude can post to on your behalf (you can pick the strategies you want to use). This example also uses a `.env` file to read in the required environment variables.\n\nOnce the `claude_desktop_config.json` file is updated, you need to restart Claude Desktop. Note that just closing the window isn't enough because the Claude Desktop process stays active. You need to click File -> Exit before starting Claude Desktop again.\n\nAt this point, you should see a hammer icon with a number next to it, indicating how many tools are available through installed MCP servers. 
\n\n![The Claude Desktop toolbar under message entry showing a hammer icon with the number 5 next to it](/images/posts/2025/claude-tools-button.png)\n\nIf you click on the hammer, you'll see a list of all available tools.\n\n![The Claude Desktop dialog listing all of the available Crosspost tools for posting to social media](/images/posts/2025/claude-available-tools.png)\n\nOnce you've confirmed the Crosspost tools are available, you can ask Claude to post a message for you, such as:\n\n> Crosspost this message: \"If you are reading this, it means I successfully got Claude to post to my social media.\"\n\n(You can also ask Claude to post to just one social media platform, such as \"Post this to Bluesky,\" if you don't always want to post to every platform.)\n\nWhen Claude decides it will use one of the Crosspost tools, it will ask for your permission to do so. You can either allow once or for the lifetime of the chat.\n\n![The Claude Desktop dialog asking you to approve the use of the Crosspost tool either once or for the lifetime of the chat.](/images/posts/2025/claude-allow-tool.png)\n\nOnce you allow use of Crosspost, Claude will post on your behalf and let you know when it's complete.\n\n![The Claude Desktop chat window showing confirmation that a message has been posted across multiple social media platforms.](/images/posts/2025/claude-crosspost-success.png)\n\nI've found this so convenient that Claude Desktop is now the primary way I post to social media. It's fast and I don't get distracted by reading other content in my feed.\n\n## Conclusion\n\nCrosspost started as a simple utility to help automate social media posts for my open source projects, but it has evolved into something much more useful. By integrating with Claude Desktop through an MCP server, it's become a streamlined way to manage social media posts without getting caught up in the endless scroll of content. 
Whether you're managing multiple social media accounts or just want a more efficient way to post updates, Crosspost combined with Claude Desktop offers a fun solution.\n\n[^1]: [Crosspost](https://github.com/humanwhocodes/crosspost)\n[^2]: [Claude Desktop](https://claude.ai/download)\n","content_html":"&lt;p&gt;We all spend way too much time on social media these days. If you’re like me, you also spend time switching between various social media platforms and, as a result, end up copy-pasting the same content in multiple browser tabs. In order to address this problem, I created Crosspost&lt;sup&gt;&lt;a href=&quot;#user-content-fn-1&quot; id=&quot;user-content-fnref-1&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;, an npm package that allows you to post across multiple social media platforms using a command line interface.&lt;/p&gt;\n&lt;h2 id=&quot;introducing-crosspost&quot;&gt;Introducing Crosspost&lt;/h2&gt;\n&lt;p&gt;The Crosspost command line interface lets you submit a message to any number of different social media networks. The message can either be passed directly on the command line or you can pass in a file using the &lt;code&gt;--file&lt;/code&gt; option. 
Here’s an overview of how it works:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;Usage: crosspost [options] [&quot;Message to post.&quot;]&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--twitter, -t   Post to Twitter.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--mastodon, -m  Post to Mastodon.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--bluesky, -b   Post to Bluesky.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--linkedin, -l  Post to LinkedIn.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--discord, -d   Post to Discord via bot.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--discord-webhook  Post to Discord via webhook.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--devto         Post to dev.to.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--mcp           Start MCP server.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--file          The file to read the message from.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--image         The image file to upload with the message.&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--image-alt     Alt text for the image (defaults: 
filename).&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #e1e4e8&quot;&gt;--help, -h      Show this message.&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;You can use the CLI via &lt;code&gt;npx&lt;/code&gt;. Here are some examples:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;# Post a message to multiple services&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;@humanwhocodes/crosspost&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-t&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-m&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-b&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;Check out this beach!&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #6A737D&quot;&gt;# Post a message with an image to multiple services&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #B392F0&quot;&gt;npx&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;@humanwhocodes/crosspost&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-t&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; 
&lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-m&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;-b&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;--image&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;./photo.jpg&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;--image-alt&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;A beautiful sunset&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;Check out this beach!&quot;&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;These examples post the message &lt;code&gt;&quot;Check out this beach!&quot;&lt;/code&gt; to Twitter, Mastodon, and Bluesky with an attached image. You can choose to post to any combination by specifying the appropriate command line options.&lt;/p&gt;\n&lt;p&gt;Each social media platform is represented by a strategy inside of Crosspost, and each strategy requires specific environment variables to work correctly. (Please refer to the Crosspost README&lt;sup&gt;&lt;a href=&quot;#user-content-fn-1&quot; id=&quot;user-content-fnref-1-2&quot; data-footnote-ref=&quot;&quot; aria-describedby=&quot;footnote-label&quot;&gt;1&lt;/a&gt;&lt;/sup&gt; for details on the environment variables.)&lt;/p&gt;\n&lt;h2 id=&quot;using-with-claude-desktop&quot;&gt;Using with Claude Desktop&lt;/h2&gt;\n&lt;p&gt;Initially, Crosspost was designed for use in continuous integration systems to help announce releases in my open source projects. Eventually, though, I started thinking about how to make Crosspost easier for me to use to quickly post something on social media manually. 
I wanted something where the environment variables were already baked in and I didn’t have to worry about setting them up each time. At first I thought I’d bundle a web application in the package but instead I grew fascinated with the buzz around MCP servers and set out to create one for Crosspost.&lt;/p&gt;\n&lt;p&gt;You can start the Crosspost MCP server by using the &lt;code&gt;--mcp&lt;/code&gt; command. Most of the command line arguments work in the MCP server, aside from &lt;code&gt;--file&lt;/code&gt;, &lt;code&gt;--image&lt;/code&gt;, and &lt;code&gt;--image-alt&lt;/code&gt;.&lt;/p&gt;\n&lt;p&gt;To set up Claude Desktop to use Crosspost:&lt;/p&gt;\n&lt;ol&gt;\n&lt;li&gt;Click on File -&gt; Settings.&lt;/li&gt;\n&lt;li&gt;Select “Developer”.&lt;/li&gt;\n&lt;li&gt;Click “Edit Config”.&lt;/li&gt;\n&lt;/ol&gt;\n&lt;p&gt;This will create a &lt;code&gt;claude_desktop_config.json&lt;/code&gt; file. Open it and add the following:&lt;/p&gt;\n&lt;pre is:raw=&quot;&quot; class=&quot;astro-code github-dark&quot; style=&quot;background-color: #24292e; overflow-x: auto;&quot; tabindex=&quot;0&quot;&gt;&lt;code&gt;&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;{&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;mcpServers&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;crosspost&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;      &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;command&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span 
style=&quot;color: #9ECBFF&quot;&gt;&quot;npx&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;,&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;      &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;args&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: [&lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;@humanwhocodes/crosspost&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;-m&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;-l&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;--mcp&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;],&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;      &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;env&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: {&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;        &lt;/span&gt;&lt;span style=&quot;color: #79B8FF&quot;&gt;&quot;CROSSPOST_DOTENV&quot;&lt;/span&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;color: #9ECBFF&quot;&gt;&quot;/path/to/.env&quot;&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;      }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;    }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: #E1E4E8&quot;&gt;  }&lt;/span&gt;&lt;/span&gt;\n&lt;span class=&quot;line&quot;&gt;&lt;span style=&quot;color: 
#E1E4E8&quot;&gt;}&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;\n&lt;p&gt;Claude Desktop only has access to the strategies you’ve enabled in Crosspost. In this example, the Mastodon and LinkedIn strategies are enabled so those are the only ones Claude can post to on your behalf (you can pick the strategies you want to use). This example also uses a &lt;code&gt;.env&lt;/code&gt; file to read in the required environment variables.&lt;/p&gt;\n&lt;p&gt;Once the &lt;code&gt;claude_desktop_config.json&lt;/code&gt; file is updated, you need to restart Claude Desktop. Note that just closing the window isn’t enough because the Claude Desktop process stays active. You need to click File -&gt; Exit before starting Claude Desktop again.&lt;/p&gt;\n&lt;p&gt;At this point, you should see a hammer icon with a number next to it, indicating how many tools are available through installed MCP servers.&lt;/p&gt;\n&lt;p&gt;&lt;img src=&quot;/images/posts/2025/claude-tools-button.png&quot; alt=&quot;The Claude Desktop toolbar under message entry showing a hammer icon with the number 5 next to it&quot;&gt;&lt;/p&gt;\n&lt;p&gt;If you click on the hammer, you’ll see a list of all available tools.&lt;/p&gt;\n&lt;p&gt;&lt;img src=&quot;/images/posts/2025/claude-available-tools.png&quot; alt=&quot;The Claude Desktop dialog listing all of the available Crosspost tools for posting to social media&quot;&gt;&lt;/p&gt;\n&lt;p&gt;Once you’ve confirmed the Crosspost tools are available, you can ask Claude to post a message for you, such as:&lt;/p&gt;\n&lt;blockquote&gt;\n&lt;p&gt;Crosspost this message: “If you are reading this, it means I successfully got Claude to post to my social media.”&lt;/p&gt;\n&lt;/blockquote&gt;\n&lt;p&gt;(You can also ask Claude to post to just one social media platform, such as “Post this to Bluesky,” if you don’t always want to post to every platform.)&lt;/p&gt;\n&lt;p&gt;When Claude decides it will use one of the Crosspost tools, it will ask for your permission 
to do so. You can either allow once or for the lifetime of the chat.&lt;/p&gt;\n&lt;p&gt;&lt;img src=&quot;/images/posts/2025/claude-allow-tool.png&quot; alt=&quot;The Claude Desktop dialog asking you to approve the use of the Crosspost tool either once or for the lifetime of the chat.&quot;&gt;&lt;/p&gt;\n&lt;p&gt;Once you allow use of Crosspost, Claude will post on your behalf and let you know when it’s complete.&lt;/p&gt;\n&lt;p&gt;&lt;img src=&quot;/images/posts/2025/claude-crosspost-success.png&quot; alt=&quot;The Claude Desktop chat window showing confirmation that a message has been posted across multiple social media platforms.&quot;&gt;&lt;/p&gt;\n&lt;p&gt;I’ve found this so convenient that Claude Desktop is now the primary way I post to social media. It’s fast and I don’t get distracted by reading other content in my feed.&lt;/p&gt;\n&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;\n&lt;p&gt;Crosspost started as a simple utility to help automate social media posts for my open source projects, but it has evolved into something much more useful. By integrating with Claude Desktop through an MCP server, it’s become a streamlined way to manage social media posts without getting caught up in the endless scroll of content. 
Whether you’re managing multiple social media accounts or just want a more efficient way to post updates, Crosspost combined with Claude Desktop offers a fun solution.&lt;/p&gt;\n&lt;section data-footnotes=&quot;&quot; class=&quot;footnotes&quot;&gt;&lt;h2 class=&quot;sr-only&quot; id=&quot;footnote-label&quot;&gt;Footnotes&lt;/h2&gt;\n&lt;ol&gt;\n&lt;li id=&quot;user-content-fn-1&quot;&gt;\n&lt;p&gt;&lt;a href=&quot;https://github.com/humanwhocodes/crosspost&quot;&gt;Crosspost&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-1&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;/a&gt; &lt;a href=&quot;#user-content-fnref-1-2&quot; data-footnote-backref=&quot;&quot; class=&quot;data-footnote-backref&quot; aria-label=&quot;Back to content&quot;&gt;↩&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;\n&lt;/li&gt;\n&lt;/ol&gt;\n&lt;/section&gt;","tags":["Open Source","AI","Claude","MCP"],"date_published":"2025-04-08T00:00:00.000Z","date_updated":"2025-04-08T00:00:00.000Z"}]}