Summary
Allow islands within a prerendered page to be server-rendered at runtime.
Background & Motivation
Often a page is mostly static, with only parts that need to be rendered on-demand. For example, a product page might need to show stock levels or personalised recommendations. Currently the only options are to render the whole page on demand, or to render the dynamic parts on the client. This proposal introduces the concept of deferred islands, which are not prerendered, but instead server-rendered on demand at runtime.
Next.js is working on a solution called partial pre-rendering, which allows most of the page to be prerendered, with individual postponed parts rendered using SSR on-demand. The implementation is quite different from what I propose for Astro, but the concept is similar.
Goals
Goals
- Allow the shell to be fully rendered as valid, cacheable, static HTML, with optional placeholders for postponed islands.
- Replace the deferred islands at runtime, using server-rendered HTML
- Require no JavaScript except a very small script to load the data
- Load the deferred islands individually and asynchronously, allowing each to be rendered as soon as it is available
- Allow access to on-demand rendering features in deferred components, such as cookies and the response object
- Allow components to know if they are currently being rendered in deferred mode
Possible goals
- Allow deferred components to be cacheable (see GET vs POST)
Non-goals
- Automatic pre-rendering of deferred components. Only fallback content would be pre-rendered
- Batching of requests for deferred content. At this stage there would be one request per component, with batching as a possible later optimization
- Allowing components to default to deferred rendering. Deferring would be specified at the page level.
Example
A component would be deferred by setting the server:defer directive.
The "fallback" slot can be used to specify a placeholder that is pre-rendered and displayed while the component is loading.
```astro
<Avatar server:defer>
  <div slot="fallback">Guest</div>
</Avatar>
```
The page can pass props to the component like normal, and these are available when rendering the component:
```astro
---
import Like from "../components/Like";
export const prerender = true;
const post = await getPost(Astro.params.slug);
---
<Like server:defer post={post.id} />
```
The component itself does not need to do anything to support deferred rendering, so this should work with any existing component. However, deferred components can optionally use special powers, and can detect deferred rendering by checking the Astro.deferred property, which is true when the component was deferred at build time and is now being rendered on-demand.
The special powers are available because during deferred rendering, a component is rendered like a mini page. This means it can use features such as Astro.cookies, and can set headers on Astro.response. Astro.url and Astro.request.url reflect the original page, and are passed in the request along with the props.
```astro
---
// Like.astro
const { post } = Astro.props;
let user = { name: "Guest" };
// If this is a deferred render then we may have the user's session cookie
if (Astro.deferred) {
  user = await getUser(Astro.cookies.get("session")?.value);
}
---
<div>
  <span class="name">{user?.name}</span>
</div>
```
Implementation
When rendering the static page, deferred components would not be rendered; instead, the build would emit an <astro-island> element containing any placeholder content. The <astro-island> would embed the serialized props, as well as the URL of the deferred endpoint.
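As a rough sketch of what the build might emit for each island, the helper below generates the placeholder markup. The attribute names (`defer`, `props`, `component-endpoint`) follow the runtime snippet further down; the endpoint URL scheme and the function itself are hypothetical, not part of the proposal.

```javascript
// Sketch: generate the static placeholder for one deferred island.
// Attribute names match the inline runtime script; everything else is assumed.
function renderIslandPlaceholder({ endpoint, props, fallbackHtml = "" }) {
  // Serialize the props and escape them for use in an HTML attribute
  const serialized = JSON.stringify(props)
    .replace(/&/g, "&amp;")
    .replace(/"/g, "&quot;");
  return (
    `<astro-island defer ` +
    `component-endpoint="${endpoint}" ` +
    `props="${serialized}">` +
    fallbackHtml +            // pre-rendered fallback shown while loading
    `</astro-island>`
  );
}

// e.g. the <Like server:defer post={post.id} /> island from the example above
const html = renderIslandPlaceholder({
  endpoint: "/_island/Like",
  props: { post: 42 },
  fallbackHtml: "<div>Guest</div>",
});
```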
When the page has loaded, a request would be made to each deferred endpoint (see "GET vs POST" below for considerations). This request would pass all of the props and other serialized context.
On the server, the component would effectively be rendered inside a thin wrapper page that decodes and forwards the props and rewrites the Astro global values.
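A minimal sketch of that wrapper, assuming the request body shape sent by the inline runtime script. Here `render` stands in for the component's actual render function; the shape of the rebuilt Astro global is illustrative only.

```javascript
// Hypothetical sketch of the thin server-side wrapper for a deferred component.
// `render` stands in for Astro's real component renderer.
async function handleDeferredRequest(request, render) {
  // Decode the payload posted by the inline runtime script
  const { props, url } = await request.json();
  // Rebuild the Astro global: the URL reflects the original page, while
  // cookies and response headers belong to this on-demand request.
  const astro = {
    props,
    url: new URL(url),  // URL of the page that embedded the island
    request,            // the on-demand request (cookies live here)
    deferred: true,     // lets the component detect deferred mode
  };
  return render(astro);
}
```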
When the browser has loaded the response from the endpoint, it would use it to replace the content of the island.
The runtime for replacing the deferred islands would be in an inline script tag. A simplified version without error handling could look like this:
```js
document.addEventListener("DOMContentLoaded", () => {
  document.querySelectorAll("astro-island[defer]").forEach(async (element) => {
    // Read the serialized props and endpoint URL embedded at build time
    const props = JSON.parse(element.getAttribute("props"));
    const endpoint = element.getAttribute("component-endpoint");
    const response = await fetch(endpoint, {
      method: "POST",
      body: JSON.stringify({ props, url: document.location.href })
    }).then((res) => res.text());
    // Replace the fallback content with the server-rendered HTML
    const range = document.createRange();
    range.selectNodeContents(element);
    const fragment = range.createContextualFragment(response);
    range.deleteContents();
    range.insertNode(fragment);
  });
});
```
GET vs POST
One of the unanswered questions is whether to use GET or POST requests for the deferred component endpoints. The benefit of POST is that it can carry arbitrarily large request bodies. The benefits of GET are that responses are cacheable and the requests can be preloaded in the page head. Some options are:
- always use POST. This is the simplest option, both to use and implement, but does not allow caching or preloading
- use GET, unless the props are too long, in which case fall back to POST. This avoids users needing to make choices, but could lead to surprises where a component silently becomes uncacheable once the props get too long
- allow users to opt in to cacheable content. This would be more flexible and could allow default cache headers to be used. However it could be hard to teach, and would require the build to fail if the props are too long.
- use POST, but implement local caching. This would be a lot more complex to implement, but would be easier for users. It wouldn't benefit from CDN caching but would help for repeat visits. Might be more useful when pages are personalized.
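The second option above could be sketched on the client like this. The 2000-character limit and the query parameter names are assumptions for illustration, not part of the proposal.

```javascript
// Sketch of "GET unless too long": encode the props in the query string so the
// response is cacheable, falling back to POST when the URL would be too long.
// MAX_URL_LENGTH is an assumed safe limit, not a value from the proposal.
const MAX_URL_LENGTH = 2000;

function buildDeferredRequest(endpoint, props, pageUrl) {
  const params = new URLSearchParams({
    props: JSON.stringify(props),
    url: pageUrl,
  });
  const getUrl = `${endpoint}?${params}`;
  if (getUrl.length <= MAX_URL_LENGTH) {
    // Short enough: a cacheable, preloadable GET
    return { method: "GET", url: getUrl };
  }
  // Too long for a GET: send the same payload as a POST body instead
  return {
    method: "POST",
    url: endpoint,
    body: JSON.stringify({ props, url: pageUrl }),
  };
}
```

This illustrates the trade-off mentioned above: the same component can flip from cacheable to uncacheable purely because its props grew.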
Why not streaming?
An alternative approach would be to send the postponed components in the same response as the initial shell. This is how Next.js PPR is currently implemented. This has some benefits, but I think they are outweighed by drawbacks in most cases. Astro has always been static-first, and I think that approach is best here too.
Primarily, a prerendered static page is easily cacheable, both in the browser and in a CDN. This is not the case when the deferred data is streamed in the same response. Keeping the requests separate means the static part can be cached at the edge, near to users, with a very fast response time, while the deferred content can be rendered and served near to the site's data without blocking rendering of the rest of the page. If you want to stream the deferred content in the same response, you either have to render everything at the origin and take the hit on distance from users, or render it all at the edge and take the hit on distance from your data. In some cases rendering everything at the edge is fine (e.g. if there's no central data source or API access), and Astro already supports that.
You can work around this with logic at the edge that combines a locally cached shell with a stream of updates from the origin, and edge middleware to do this could be a helpful option. It still prevents use of the browser cache though, because the browser can't make conditional requests for the prerendered page: the whole thing needs to be sent on every request in case the deferred data has changed.