<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Don McCurdy]]></title><description><![CDATA[Developer working on climate data, graphics, data visualization, and web technologies.]]></description><link>https://www.donmccurdy.com/</link><generator>metalsmith-feed</generator><lastBuildDate>Fri, 31 Oct 2025 20:16:14 GMT</lastBuildDate><atom:link href="https://www.donmccurdy.com/rss.xml" rel="self" type="application/rss+xml"/><item><title><![CDATA[Bandcamp wrapped 2024]]></title><description><![CDATA[<p>As a personal experiment, I cancelled Spotify last year and set a target of buying one album per month.<sup>[<a href="#footnote-1">1</a>]</sup> That&#39;s been fun! I find myself looking forward to the 1st of each month and picking out the next album. Bandcamp allows streaming each album a few times before <a href="https://get.bandcamp.help/hc/en-us/articles/23020694060183-What-are-streaming-limits-on-Bandcamp">prompting to purchase</a>, so there&#39;s still a lot of flexibility to try new music within that monthly budget.</p>
<p>With thanks to Tom MacWright for both the <a href="https://macwright.com/2024/12/06/bandcamp-wrapped">inspiration</a> and the <a href="https://www.val.town/v/tmcw/bandcampWrapped">means</a>, my “Bandcamp wrapped” for 2024:</p>
<div style="display: grid; grid-template-columns: 1fr 1fr 1fr; gap: 10px;">
   <div><a href="https://boniver.bandcamp.com/album/sable"><img src="/assets/images/2024/0039-bandcamp-wrapped/a3831494910_2.jpg" alt=""></a><a href="https://boniver.bandcamp.com/album/sable"><strong>SABLE,</strong><span> by Bon Iver</span></a></div>
   <div><a href="https://penelopescott.bandcamp.com/album/public-void-2"><img src="/assets/images/2024/0039-bandcamp-wrapped/a2456227092_2.jpg" alt=""></a><a href="https://penelopescott.bandcamp.com/album/public-void-2"><strong>Public Void</strong><span> by Penelope Scott</span></a></div>
   <div><a href="https://yevagabonds.bandcamp.com/album/nine-waves"><img src="/assets/images/2024/0039-bandcamp-wrapped/a2348583136_2.jpg" alt=""></a><a href="https://yevagabonds.bandcamp.com/album/nine-waves"><strong>Nine Waves</strong><span> by Ye Vagabonds</span></a></div>
   <div><a href="https://kishibashi.bandcamp.com/album/kantos"><img src="/assets/images/2024/0039-bandcamp-wrapped/a0509169091_2.jpg" alt=""></a><a href="https://kishibashi.bandcamp.com/album/kantos"><strong>Kantos</strong><span> by Kishi Bashi</span></a></div>
   <div><a href="https://bigredmachine.bandcamp.com/album/how-long-do-you-think-its-gonna-last"><img src="/assets/images/2024/0039-bandcamp-wrapped/a2513287490_16.jpg" alt=""></a><a href="https://bigredmachine.bandcamp.com/album/how-long-do-you-think-its-gonna-last"><strong>How Long Do You Think It's Gonna Last?</strong><span> by Big Red Machine</span></a></div>
   <div><a href="https://paraic.bandcamp.com/album/not-before-time-39-years-in-the-making"><img src="/assets/images/2024/0039-bandcamp-wrapped/a0704884890_16.jpg" alt=""></a><a href="https://paraic.bandcamp.com/album/not-before-time-39-years-in-the-making"><strong>Not Before Time...39 Years in the Making</strong><span> by Páraic Mac Donnchadha</span></a></div>
   <div><a href="https://normanblake.bandcamp.com/album/blake-rice"><img src="/assets/images/2024/0039-bandcamp-wrapped/a3024354057_2.jpg" alt=""></a><a href="https://normanblake.bandcamp.com/album/blake-rice"><strong>Blake &amp; Rice</strong><span> by Norman Blake</span></a></div>
   <div><a href="https://alasdairfraser.bandcamp.com/album/return-to-kintail-2"><img src="/assets/images/2024/0039-bandcamp-wrapped/a1203560317_2.jpg" alt=""></a><a href="https://alasdairfraser.bandcamp.com/album/return-to-kintail-2"><strong>Return to Kintail</strong><span> by Alasdair Fraser</span></a></div>
   <div><a href="https://patsyreid.bandcamp.com/album/a-glint-o-scottish-fiddle"><img src="/assets/images/2024/0039-bandcamp-wrapped/a1871508192_2.jpg" alt=""></a><a href="https://patsyreid.bandcamp.com/album/a-glint-o-scottish-fiddle"><strong>A Glint o' Scottish Fiddle</strong><span> by Patsy Reid</span></a></div>
   <div><a href="https://marlafibish.bandcamp.com/album/the-bright-hollow-fog"><img src="/assets/images/2024/0039-bandcamp-wrapped/a1685191317_2.jpg" alt=""></a><a href="https://marlafibish.bandcamp.com/album/the-bright-hollow-fog"><strong>The Bright Hollow Fog</strong><span> by Marla Fibish</span></a></div>
   <div><a href="https://yevagabonds.bandcamp.com/track/im-a-rover"><img src="/assets/images/2024/0039-bandcamp-wrapped/a1259022316_2.jpg" alt=""></a><a href="https://yevagabonds.bandcamp.com/track/im-a-rover"><strong>I'm a Rover</strong><span> by Ye Vagabonds</span></a></div>
   <div><a href="https://michellezauner.bandcamp.com/album/psychopomp-2"><img src="/assets/images/2024/0039-bandcamp-wrapped/a3458975711_2.jpg" alt=""></a><a href="https://michellezauner.bandcamp.com/album/psychopomp-2"><strong>Psychopomp</strong><span> by Japanese Breakfast</span></a></div>
   <div><a href="https://adriannelenker.bandcamp.com/album/bright-future"><img src="/assets/images/2024/0039-bandcamp-wrapped/a2431419636_2.jpg" alt=""></a><a href="https://adriannelenker.bandcamp.com/album/bright-future"><strong>Bright Future</strong><span> by Adrianne Lenker</span></a></div>
   <div><a href="https://andrewbird.bandcamp.com/album/inside-problems"><img src="/assets/images/2024/0039-bandcamp-wrapped/a2526610686_2.jpg" alt=""></a><a href="https://andrewbird.bandcamp.com/album/inside-problems"><strong>Inside Problems</strong><span> by Andrew Bird</span></a></div>
   <div><a href="https://angie-mcmahon.bandcamp.com/album/salt"><img src="/assets/images/2024/0039-bandcamp-wrapped/a2498720225_2.jpg" alt=""></a><a href="https://angie-mcmahon.bandcamp.com/album/salt"><strong>Salt</strong><span> by Angie McMahon</span></a></div>
   <div><a href="https://aroojaftab.bandcamp.com/album/vulture-prince"><img src="/assets/images/2024/0039-bandcamp-wrapped/a0315775086_2.jpg" alt=""></a><a href="https://aroojaftab.bandcamp.com/album/vulture-prince"><strong>Vulture Prince</strong><span> by Arooj Aftab</span></a></div>
   <div><a href="https://kishibashi.bandcamp.com/album/omoiyari"><img src="/assets/images/2024/0039-bandcamp-wrapped/a0906381525_2.jpg" alt=""></a><a href="https://kishibashi.bandcamp.com/album/omoiyari"><strong>Omoiyari</strong><span> by Kishi Bashi</span></a></div>
   <div><a href="https://andrewbird.bandcamp.com/album/outside-problems"><img src="/assets/images/2024/0039-bandcamp-wrapped/a2755949637_2.jpg" alt=""></a><a href="https://andrewbird.bandcamp.com/album/outside-problems"><strong>Outside Problems</strong><span> by Andrew Bird</span></a></div>
</div>

<hr>
<p><small><a class="footnote" name="footnote-1"><sup>1</sup></a> I overshot this goal somewhat. :)</small></p>
]]></description><link>https://www.donmccurdy.com/2024/12/10/bandcamp-wrapped/</link><guid isPermaLink="true">https://www.donmccurdy.com/2024/12/10/bandcamp-wrapped/</guid><pubDate>Tue, 10 Dec 2024 00:00:00 GMT</pubDate></item><item><title><![CDATA[Emission and bloom]]></title><description><![CDATA[<style>
figure > canvas {
    width: 100% !important;
    height: min(400px, 80vh) !important;
}
@media (max-width: 650px) {
    figure > canvas {
        height: min(225px, 80vh) !important;
    }
}
figcaption {
    max-width: 600px;
    text-align: left;
    text-wrap: pretty;
}
.swatch {
    display: inline-block;
    width: 1em;
    height: 1em;
    border-radius: 4px;
    vertical-align: middle;
}
table {
    font-size: small;
}
</style>

<h2 id="emission">Emission</h2>
<p><strong>Emission models the contribution of light from a surface,</strong> in addition to any light already reflected from that surface. In a physically-based path tracer like Blender&#39;s <em>Cycles</em>, or a realtime renderer with support for <a href="https://en.wikipedia.org/wiki/Global_illumination">global illumination</a> (GI), emission works as you&#39;d expect: the surface emits light, and illuminates nearby objects like any other light source. But these types of renderers tend to be very advanced, or slow, or both.</p>
<p>Most realtime renderers use <strong>local illumination</strong>, and emission <em>brightens only the emissive surface itself</em>.</p>
<details>
    <summary><i>Why doesn't local illumination allow emissive surfaces to light the entire scene?</i></summary>
    <p>
        With local illumination, lighting of a surface is evaluated in isolation from other surfaces, and any light sources must be defined as simple combinations of shader uniforms and textures. The surface of another mesh cannot act as a light source in a fragment shader. Instead, emission is evaluated in the fragment shader only when rendering that specific emissive surface to the drawing buffer, affecting only itself.
    </p>
</details>

<figure class="width-large">
    <canvas id="emission-only"></canvas>
    <figcaption>
        <b>Figure 1:</b> Emission only, rendered with local illumination. <nobr>Underwhelming, isn't it?</nobr>
    </figcaption>
</figure>

<p>The emissive boxes in the illustration above each have emissive color <span class="swatch" style="background: #59BCF3"></span> <code>#59BCF3</code>, intensities increasing from 1 to 256 nits from left to right. <a href="https://en.wikipedia.org/wiki/Tone_mapping">Tone mapping</a> causes the brighter boxes to desaturate toward white.<sup>[<a href="#footnote-1">1</a>]</sup> The emissive surfaces are visible in the dimly-lit scene, but they aren&#39;t illuminating other objects.</p>
<p>Even limited to local illumination, we can create a much better illusion of brightness than this. We&#39;ll first light the scene independently, then create a glow called a <em>bloom effect</em> around the emissive surfaces.</p>
<h2 id="bloom">Bloom</h2>
<p><strong>Bloom emulates an imaging artifact present in physical cameras</strong>, in which fringes surround the brightest areas of the image, “contributing to the illusion of an extremely bright light overwhelming the camera or eye.” (<a href="https://en.wikipedia.org/wiki/Bloom_(shader_effect)">Wikipedia</a>)</p>
<p>Many 3D applications already have the bloom effect available as a post-processing option. But how should emissive surfaces, the surrounding scene, and the bloom effect be configured for best results?</p>
<details>
    <summary><i>How is a bloom effect implemented?</i></summary>
    <p>
        Bloom is usually implemented as a post-processing effect. Render the scene to an open domain <nobr>[0, ∞]</nobr> render target in a linear color space. Apply a render pass over the entire screen — not over individual objects — using a Gaussian blur to spread pixels brighter than some defined threshold. The result may be drawn back into a render target for further processing, or tone mapped and converted to the output color space, forming the final image.
    </p>
    <p>Full implementation details are outside the scope of this post, but see
    <a href="https://learnopengl.com/Advanced-Lighting/Bloom" target="_blank">Learn OpenGL: Bloom</a> for a more detailed explanation.
    </p>
</details>
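<p>As a sketch of one ingredient of that pass, the blur weights might be computed as a normalized 1D Gaussian kernel (an illustrative assumption; production bloom effects often use multi-resolution blurs instead):</p>

```javascript
// Weights for a separable 1D Gaussian blur pass, normalized to sum to 1.
// The sigma default is an arbitrary choice for this sketch.
function gaussianKernel(radius, sigma = radius / 2) {
  const weights = [];
  for (let i = -radius; i <= radius; i++) {
    weights.push(Math.exp(-(i * i) / (2 * sigma * sigma)));
  }
  const sum = weights.reduce((a, b) => a + b, 0);
  return weights.map((w) => w / sum);
}
```

<p>A shader would apply these weights along one axis and then the other, so the cost scales with the kernel's width rather than its square.</p>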

<h2 id="configuring-a-scene-for-emission-and-bloom">Configuring a scene for emission and bloom</h2>
<p>First, create a scene with the lighting, composition, and exposure you prefer. Enable tone mapping from the beginning, so that you&#39;re making initial lighting choices based on a fully-formed image. Point lights may be placed at the locations of the emissive objects, or lighting can be baked into surrounding materials.</p>
<figure class="width-large">
    <canvas id="emission-and-lighting"></canvas>
    <figcaption>
        <b>Figure 2:</b> Emission and independent scene lighting, still without
        bloom. Point lights, image-based lighting, or baked lighting can illuminate
        the rest of the scene, giving a sense of light in the space, but the cubes
        themselves appear flat.
    </figcaption>
</figure>

<p><strong>Emissive surfaces intended to bloom should have higher total luminance than the surrounding scene, often using open domain <nobr>[0, ∞]</nobr> linear intensities</strong>.<sup>[<a href="#footnote-2">2</a>]</sup><sup>[<a href="#footnote-3">3</a>]</sup> We&#39;re trying to give the illusion that the emission has overwhelmed a camera sensor with its brightness.</p>
<p>Non-emissive surfaces are under varying lighting conditions, and may have strong reflective highlights. Unless you want to see bloom from reflected highlights — a stylistic choice — make it easy on yourself by using a much higher intensity for the emissive surfaces. When emissive surfaces are sufficiently bright, there&#39;s room to adjust lighting and the luminance threshold toward an overall look. If you can&#39;t isolate the effect to the intended surfaces, the surfaces are probably not bright enough.</p>
<details>
    <summary><i>How can an RGB value be brighter than 1?</i></summary>
    <p>
        Some colors in a 3D scene do require 0–1 RGB values. Base/diffuse/albedo color and specular color, which represent reflectance percentages, can never exceed 1 because a surface cannot reflect >100% of incoming light. Final images rendered to the canvas must use 0–1 RGB values, relative to the display's capabilities.
    </p>
    <p>
        Emission represents lighting data, not a reflectance percentage or a ratio, and its RGB values can use the full <nobr>[0, ∞]</nobr> domain. Whether expressed as an RGB triplet <small><code>[1000, 0, 1000]</code></small> or an RGB/intensity pair <small><code>[#FF00FF, 1000]</code></small>, the meaning is the same.
    </p>
</details>
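<p>The equivalence between the two notations can be sketched in code (<code>emissiveRGB</code> is a hypothetical helper, not the API of any particular renderer):</p>

```javascript
// Scale a normalized emissive color by a scalar intensity, producing an
// open-domain RGB triplet.
function emissiveRGB([r, g, b], intensity) {
  return [r * intensity, g * intensity, b * intensity];
}

emissiveRGB([1, 0, 1], 1000); // → [1000, 0, 1000], i.e. [#FF00FF, 1000]
```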

<p><strong>Pixels above the luminance threshold will bloom, creating a glow in screen space</strong>. In three.js, the bloom threshold is defined in luminance, measured with weights for the <a href="https://en.wikipedia.org/wiki/Luminous_efficiency_function">spectral sensitivity</a> of human vision,<sup>[<a href="#footnote-4">4</a>]</sup> also called luma coefficients. With emissive color <span class="swatch" style="background: #FF00FF"></span> <code>#FF00FF</code> and an intensity of 100 nits, a luminance threshold of 99 nits will <em>not</em> show visible bloom, due to the luma coefficients.</p>
<pre><code class="hljs">Luminance = <span class="hljs-number">0.2126</span> x Red + <span class="hljs-number">0.7152</span> x Green + <span class="hljs-number">0.0722</span> x Blue</code></pre>
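<p>Applying the formula to the example above, emissive color <code>#FF00FF</code> at an intensity of 100 nits falls well short of a 99-nit threshold:</p>

```javascript
// Relative luminance with the Rec. 709 luma coefficients.
function luminance([r, g, b]) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// #FF00FF scaled to 100 nits: red and blue carry small weights, and the
// heavily-weighted green channel contributes nothing.
luminance([100, 0, 100]); // ≈ 28.48 nits, below a 99-nit threshold
```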

<details>
    <summary><i>Why the arbitrary <small><code>[0.2126, 0.7152, 0.0722]</code></small> coefficients?</i></summary>
    <p>These coefficients come from the <nobr>Rec. 709</nobr> <a href="https://en.wikipedia.org/wiki/Primary_color" target="_blank">color primaries</a>, used in both sRGB and Linear-sRGB color spaces. Wide gamut color spaces, such as <nobr>Display P3</nobr> and <nobr>Rec. 2020</nobr>, use different primaries and require different luma coefficients.</p>
</details>

<figure class="width-large">
    <canvas id="emission-and-bloom"></canvas>
    <figcaption>
        <b>Figure 3:</b> Emission and bloom. Independent lighting illuminates the scene,
        and a bloom effect creates the illusion that the emissive cubes are the
        dominant light source.
    </figcaption>
</figure>

<p>Finally, remember that tone mapping and conversion to the color space of the HTML canvas context are necessary steps in forming an image from our <nobr>[0, ∞]</nobr> linear data, and should be applied after the bloom effect. Other post-processing effects using physically-based units may come before tone mapping.</p>
<p>The 3D scene used throughout this article comes from the <a href="https://github.com/KhronosGroup/glTF-Sample-Assets/tree/main/Models/EmissiveStrengthTest">glTF Emissive Strength Test</a> model, with modifications. Settings for materials, lighting, and post-processing in the final render, with three.js units:</p>
<table>
<thead>
<tr>
<th align="left">setting</th>
<th align="left">value</th>
</tr>
</thead>
<tbody><tr>
<td align="left"><code>material.emissive</code></td>
<td align="left"><span class="swatch" style="background: #59BCF3"></span> <code>#59BCF3</code></td>
</tr>
<tr>
<td align="left"><code>material.emissiveIntensity</code></td>
<td align="left">1 – 256 nits</td>
</tr>
<tr>
<td align="left"><code>light.color</code></td>
<td align="left"><span class="swatch" style="background: #59BCF3"></span> <code>#59BCF3</code></td>
</tr>
<tr>
<td align="left"><code>light.intensity</code></td>
<td align="left">1 – 256 cd</td>
</tr>
<tr>
<td align="left"><code>renderer.toneMapping</code></td>
<td align="left">AgX</td>
</tr>
<tr>
<td align="left"><code>renderer.toneMappingExposure</code></td>
<td align="left">0.5</td>
</tr>
<tr>
<td align="left"><code>bloom.threshold</code></td>
<td align="left">10 nits</td>
</tr>
<tr>
<td align="left"><code>bloom.strength</code></td>
<td align="left">0.5</td>
</tr>
</tbody></table>
<p>Thanks for reading, and please <a href="https://fosstodon.org/@donmccurdy">reach out</a> with any questions!</p>
<hr>
<p><small><a class="footnote" name="footnote-1"><sup>1</sup></a> Brighter boxes would appear white without tone mapping, too, as their values clip the limits of the <nobr>[0, 1]</nobr> drawing buffer. Before reaching white, they would first clip to one of the “Notorious Six”, or <span class="swatch" style="background: #00FFFF"></span> <code>#00FFFF</code> in this case. Tone mapping is a necessary step in forming an image from lighting data, and helps to avoid clipping and the Notorious Six.</small></p>
<p><small><a class="footnote" name="footnote-2"><sup>2</sup></a> I avoid using “HDR” as a synonym for linear <nobr>[0, ∞]</nobr> RGB data. HDR has multiple meanings, often used imprecisely, and is not necessarily linear. In three.js, the working color space is <nobr>Linear sRGB</nobr>: <nobr>Rec. 709</nobr> primaries, D65 white point, and a linear transfer function.</small></p>
<p><small><a class="footnote" name="footnote-3"><sup>3</sup></a> The domain <nobr>[0, ∞]</nobr> is theoretical. A 32-bit floating point render target has a maximum value around 3.40 × 10<sup>38</sup>, with lower precision as values approach that limit. Rendering pipelines may have further limitations, such as 16-bit render targets.</small></p>
<p><small><a class="footnote" name="footnote-4"><sup>4</sup></a> More precisely, for the <a href="https://en.wikipedia.org/wiki/CIE_1931_color_space#CIE_standard_observer">CIE Standard Observer</a>.</small></p>
<hr>
<script type="importmap">
  {
    "imports": {
      "three": "https://cdn.jsdelivr.net/npm/three@v0.164.1/build/three.module.js",
      "three/addons/": "https://cdn.jsdelivr.net/npm/three@v0.164.1/examples/jsm/"
    }
  }
</script>
<script type="module" src="/assets/scripts/0038-emission-and-bloom.js"></script>

<!--

TODO: Is three.js correct in using luma weights? We are emulating a physical phenomenon, not a perceptual one. Why do perceptual weights apply?

TODO: Discuss [luminance](https://en.wikipedia.org/wiki/Luminance) or [relative luminance](https://en.wikipedia.org/wiki/Relative_luminance)?

https://hg2dc.com/2019/11/04/question-12/

NOTE: Luminance “describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given [solid angle](https://en.wikipedia.org/wiki/Solid_angle)” ([Luminance, Wikipedia](https://en.wikipedia.org/wiki/Luminance)).

-->
]]></description><link>https://www.donmccurdy.com/2024/04/27/emission-and-bloom/</link><guid isPermaLink="true">https://www.donmccurdy.com/2024/04/27/emission-and-bloom/</guid><pubDate>Sat, 27 Apr 2024 00:00:00 GMT</pubDate></item><item><title><![CDATA[Don't use Data URLs in 3D models]]></title><description><![CDATA[<p>glTF provides an efficient, last-mile transmission format for 3D models and scenes. Resources meant for GPU upload, like geometry and textures, are stored in GPU-ready binary buffers.<sup>[<a href="#footnote-1">1</a>]</sup> Runtimes upload these buffers to the GPU directly — without iterating over each vertex, and often without any intermediate copies of the buffer at all. <strong>Among standard 3D formats today, glTF is unique in these optimizations for last-mile transmission to realtime applications.</strong></p>
<p>However, there&#39;s more than one way to package a glTF file. One of those ways, often called “glTF Embedded,” uses Data URLs and loses many of those benefits. This might not matter for someone using glTF as an interchange format between two desktop applications, or loading 3D models into a game engine that compiles to engine-specific formats. But for anyone using glTF for last-mile transmission to a realtime application, the “glTF Embedded” packaging method is cursed.</p>
<p>Regularly I&#39;ve seen developers exporting models to “glTF Embedded,” understandably not knowing why it&#39;s any different, and then needing support to diagnose and fix performance issues. This wastes time for everyone — for developers trying to build great 3D experiences, for volunteers providing support on community forums, and for end-users waiting on slow 3D experiences.</p>
<p>Other ways of packaging glTF files are unambiguously better. <strong>The “glTF Embedded” method should be avoided except for debugging,</strong> if it&#39;s supported at all.</p>
<h2 id="packaging-gltf-models">Packaging glTF models</h2>
<p><strong><code>.glb</code> models are binary, standalone files</strong>. By convention, GLB files are self-contained. Textures and other resources are embedded in the file, as binary data, following a short JSON header. This is the most common glTF packaging method.</p>
<p><strong><code>.gltf</code> models are human-readable JSON</strong>, and <em>should reference external files for binary data and textures</em>. Open a <code>.gltf</code> file in any text editor, and you&#39;ll see something like this:</p>
<pre><code class="hljs json">{
    "asset": {
        "version": "2.0",
        "generator": "Khronos glTF Blender I/O v4.0.44"
    },
    "extensionsUsed": ["KHR_materials_unlit"],
    "scenes": [ ... ],
    "nodes": [ ... ],
    "meshes": [ ... ],
    "materials": [{
        "pbrMetallicRoughness": {
            "baseColorFactor": [ 1, 1, 1, 1 ],
            "baseColorTexture": { "index": 0 }
        }
    }],
    "textures": [ ... ],
    "images": [ { "uri": "base_color.png" }, { "uri": "normal.png" } ],
    "buffers": [ { "uri": "geometry.bin", "byteLength": 3240 } ],
    ...
}</code></pre>

<p>JSON is convenient. With JSON, you can edit materials and other non-binary definitions in a text editor.<sup>[<a href="#footnote-2">2</a>]</sup> And notice the URLs for images and buffers above: these are external files with <code>.png</code>, <code>.jpg</code>, <code>.webp</code>, or <code>.ktx2</code> extensions for textures, or <code>.bin</code> extensions for geometry and animation data.</p>
<p>What if we wanted to embed these images and buffers in the JSON? In a <code>.glb</code>, appending binary chunks to a binary format is no problem. But JSON can&#39;t store binary data directly, and we instead need to use <a href="https://en.wikipedia.org/wiki/Data_URI_scheme">Data URLs</a> and <a href="https://en.wikipedia.org/wiki/Base64">Base64</a>:</p>
<pre><code class="hljs json">{
    ...
    "images": [ { "uri": "data:image/png;base64,iVBORw0K..." }],
    "buffers": [ { "uri": "data:application/octet-stream;base64,iVBORw0K..." }],
}</code></pre>
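<p>One practical consequence of this layout: detecting the “glTF Embedded” packaging only requires scanning the JSON for Data URLs. A minimal sketch (<code>hasDataUrls</code> is a hypothetical helper):</p>

```javascript
// Check whether a parsed glTF JSON document embeds its images or buffers
// as Data URLs, rather than referencing external files. Resources stored
// in a GLB's binary chunk have no `uri` at all, which the optional
// chaining below treats as "not a Data URL".
function hasDataUrls(gltf) {
  const resources = [...(gltf.images ?? []), ...(gltf.buffers ?? [])];
  return resources.some((resource) => resource.uri?.startsWith('data:'));
}
```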

<p>Combining these choices, we get three common presets for packaging glTF files:<sup>[<a href="#footnote-3">3</a>]</sup></p>
<table>
<thead>
<tr>
<th>preset</th>
<th>extensions</th>
<th>buffers</th>
<th>human-readable</th>
<th>self-contained</th>
</tr>
</thead>
<tbody><tr>
<td>GLB</td>
<td>.glb</td>
<td>binary</td>
<td>❌</td>
<td>✅</td>
</tr>
<tr>
<td>glTF “Separate”</td>
<td>.gltf+.bin+.jpg+.png...</td>
<td>binary</td>
<td>✅</td>
<td>❌</td>
</tr>
<tr>
<td>glTF “Embedded”</td>
<td>.gltf</td>
<td>Data URL</td>
<td>✅</td>
<td>✅</td>
</tr>
</tbody></table>
<p>The first two presets are great! Use GLB if you want a self-contained file. Use “glTF Separate” if you want human-readable JSON or external textures. But “glTF Embedded” encodes buffers as Data URLs, taking a heavy performance penalty.</p>
<h2 id="the-problem-with-data-urls">The problem with Data URLs</h2>
<p>Data URLs are common enough in web development. HTTP requests cost some overhead, and webpages can have many small resources like icons, stylesheets, and scripts. The overhead adds up, so we might embed the smallest resources — below some size threshold — into the initial HTML page request. Data URLs are fine for that.</p>
<p>But this scenario doesn&#39;t translate well to 3D scenes. Performance requires batching resources on the GPU. We only get so many textures per material, we can&#39;t have fewer draw calls than we have materials, and so lots of tiny images aren&#39;t practical. Instead we have a smaller number of larger 2K-4K images, and heavy binary resources like geometry and animation keyframes. Even properly optimized, these can be much larger than the resources on a traditional webpage.</p>
<p>So here&#39;s the first problem: <strong>Data URLs increase data size by +33%.</strong></p>
<pre><code class="hljs javascript"><span class="hljs-comment">// create binary 'geometry'.</span>
<span class="hljs-keyword">const</span> bytes = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Uint8Array</span>(<span class="hljs-number">1024</span>).map(<span class="hljs-function"><span class="hljs-params">()</span> =&gt;</span> <span class="hljs-built_in">Math</span>.random() * <span class="hljs-number">255</span>);
bytes.byteLength; <span class="hljs-comment">// 1024 bytes</span>

<span class="hljs-comment">// convert to base64 Data URL.</span>
<span class="hljs-keyword">const</span> base64 = btoa(<span class="hljs-built_in">String</span>.fromCharCode.apply(<span class="hljs-literal">null</span>, bytes));
<span class="hljs-keyword">const</span> bytesBase64 = <span class="hljs-keyword">new</span> TextEncoder().encode(base64);
bytesBase64.byteLength; <span class="hljs-comment">// 1368 bytes</span></code></pre>
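<p>The 1368-byte result is exactly what Base64 predicts: every 3 input bytes become 4 ASCII characters, with the final partial group padded out.</p>

```javascript
// Size of a Base64 string encoding n bytes: 4 characters per 3-byte
// group, rounded up to account for '=' padding.
function base64Length(byteLength) {
  return 4 * Math.ceil(byteLength / 3);
}

base64Length(1024); // 1368 — a 33.6% increase over the original 1024 bytes
```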

<p>Worse, the data is no longer GPU-ready — it&#39;s encoded as ASCII characters! On each device that renders the 3D scene, we&#39;re going to have to decode every byte on the CPU up front. <a href="https://twitter.com/b1tr0t">Peter McLachlan</a> ran the numbers on mobile devices loading images from Data URLs:<sup>[<a href="#footnote-4">4</a>]</sup></p>
<blockquote>
<p>“... you can imagine my surprise to discover, when measuring the performance of hundreds of thousands of mobile page views, that loading images using a data URI is on average <strong>6x slower</strong> than using a binary source link such as an img tag with an src attribute!”</p>
</blockquote>
<p>And here&#39;s the second problem: That&#39;s entirely unnecessary work. We took binary GPU-ready data, encoded it in a larger plaintext format, sent that larger data over the network, and then made each viewer&#39;s device spend time translating it back to something the GPU understands.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Arguably, Data URLs should not have been included in the glTF specification. I don&#39;t recall the reasons for allowing them, but based on recent <a href="https://github.com/KhronosGroup/glTF-Blender-IO/issues/2148">feedback to Blender removing the “glTF Embedded” export option</a> (and later reverting that change) the convenience of human-readable, self-contained JSON is a big factor.</p>
<p>Unfortunately I see a lot of people in the WebGL and WebGPU communities taken by surprise by performance problems caused by Data URLs in their applications, understandably not knowing what&#39;s wrong. In some cases Data URLs have led to negative perceptions of the entire glTF format. glTF isn&#39;t the only cause. Popular JavaScript bundlers with the wrong configuration can inline binary resources — of any format — into large Data URLs, too. It&#39;s a <a href="https://en.wiktionary.org/wiki/footgun">footgun</a>, and explaining details of file formats to users doesn&#39;t scale. Tooling needs better guard rails, and warnings when Data URLs exceed some size threshold.</p>
<p>As general rules for performance-sensitive applications:</p>
<ul>
<li>For a self-contained binary file, use <code>.glb</code>.</li>
<li>For a human-readable JSON file, use <code>.gltf</code> with external resources (“glTF Separate”).</li>
<li><strong>DO NOT</strong> use self-contained <code>.gltf</code> files with embedded resources (“glTF Embedded”), outside of debugging.</li>
</ul>
<p>If you have glTF files already using “glTF Embedded” and want to losslessly convert to another glTF packaging method, you can. Install <a href="https://nodejs.org/en/download">Node.js</a>, and then open a terminal.</p>
<pre><code class="hljs bash"><span class="hljs-comment"># one-time installation</span>
npm install --global @gltf-transform/cli

<span class="hljs-comment"># glb</span>
gltf-transform cp input.gltf output.glb

<span class="hljs-comment"># gltf separate</span>
gltf-transform cp input.gltf output.gltf</code></pre>

<hr>
<p><small><a class="footnote" name="footnote-1"><sup>1</sup></a> I&#39;m using the term “GPU-ready” somewhat optimistically when referring to images. Some image formats are GPU-compatible, others are not. See my previous post, <a href="/2024/02/11/web-texture-formats/"><em>Choosing texture formats for WebGL and WebGPU applications</em></a>, for more on image formats in 3D models.</small></p>
<p><small><a class="footnote" name="footnote-2"><sup>2</sup></a> Optionally, there&#39;s a great <a href="https://marketplace.visualstudio.com/items?itemName=cesium.gltf-vscode">VSCode plugin</a>.</small></p>
<p><small><a class="footnote" name="footnote-3"><sup>3</sup></a> Technically, exporters could mix and match. A binary <code>.glb</code> could contain Data URLs; a <code>.gltf</code> could contain some Data URLs but also have external binary resources. But these presets cover 99% of use cases.</small></p>
<p><small><a class="footnote" name="footnote-4"><sup>4</sup></a> <a href="https://www.catchpoint.com/blog/data-uri"><em>On Mobile, Data URIs are 6x Slower than Source Linking</em></a>, July 2013.</small></p>
]]></description><link>https://www.donmccurdy.com/2024/03/10/data-urls-in-3d-models/</link><guid isPermaLink="true">https://www.donmccurdy.com/2024/03/10/data-urls-in-3d-models/</guid><pubDate>Sun, 10 Mar 2024 00:00:00 GMT</pubDate></item><item><title><![CDATA[Choosing texture formats for WebGL and WebGPU applications]]></title><description><![CDATA[<p>A supplement to web.dev&#39;s <a href="https://web.dev/articles/choose-the-right-image-format"><em>Choosing the Right Image Format</em></a>, adding context for texture formats used in WebGL and WebGPU development. If you aren&#39;t familiar with newer image formats like WebP or AVIF, consider reading the web.dev article first.</p>
<h2 id="image-formats-png-jpeg-webp-avif">Image formats (PNG, JPEG, WebP, AVIF)</h2>
<p>Most image formats used on the web are optimized for two goals:</p>
<ol>
<li>minimize file size transferred over a network</li>
<li>maximize image quality</li>
</ol>
<p>As of 2024, that generally means using WebP and AVIF for modern browsers, or PNG and JPEG for older browsers.<sup>[<a href="#footnote-1">1</a>]</sup> But when building for WebGL and WebGPU APIs, where high framerates and memory budgets are hard constraints, several more goals should be considered:</p>
<ol>
<li>avoid framerate stalls when the texture loads</li>
<li>minimize memory cost</li>
<li>maximize application performance</li>
</ol>
<p>When users can do more than just look at an image, spending significant time in a 2D or 3D application — games, immersive AR/VR experiences, or interactive storytelling — framerate stalls can be disastrous. Most image formats (including all formats mentioned above) must be decompressed on the CPU, then uploaded without compression to GPU memory. One 4096x4096px image with <a href="https://en.wikipedia.org/wiki/Mipmap">mipmaps</a> consumes 90 MB of memory, regardless of how small the original file might have been.</p>
<pre><code class="hljs">uncompressedSize = &lt;width&gt; ⨉ &lt;height&gt; ⨉ <span class="hljs-number">4</span> ⨉ <span class="hljs-number">1.333</span></code></pre>

<p>On many platforms, RGB textures are expanded to RGBA for compatibility reasons, so it&#39;s safer to assume all four channels are stored in memory. Multiply by 1.333 to include mipmaps.</p>
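<p>As a quick sanity check, the estimate above is easy to compute directly (a sketch; the 1.333 factor approximates the cost of a full mipmap chain):</p>

```javascript
// Estimate GPU memory for an uncompressed texture, in bytes.
// Assumes RGBA storage (4 bytes per pixel) and a full mipmap chain (~1.333x).
function uncompressedSize(width, height) {
  return width * height * 4 * 1.333;
}

// A 4096x4096 texture works out to roughly 89,000,000 bytes (about 90 MB),
// no matter how small the compressed PNG or JPEG file was.
console.log(uncompressedSize(4096, 4096) / 1e6);
```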
<p>Decompression and upload block WebGL from doing other work, and are a very common source of dropped frames. In WebXR, users experience a dropped frame as an unexpected lurch, a major cause of motion sickness.</p>
<details>
    <summary><i>Additional context for three.js users</i></summary>

<p>By default, three.js decompresses images and uploads them to the GPU the first time the image is rendered in the scene. To upload the image sooner, use <code>WebGLRenderer</code>&#39;s <a href="https://threejs.org/docs/?q=renderer#api/en/renderers/WebGLRenderer.initTexture"><code>.initTexture()</code></a> method. Using <a href="https://threejs.org/docs/?q=imagebit#api/en/loaders/ImageBitmapLoader"><code>ImageBitmapLoader</code></a> can take the decompression step off the main thread, reducing dropped frames somewhat.</p>
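<p>A minimal sketch of combining the two, assuming a three.js <code>renderer</code> already exists (the texture URL is a placeholder):</p>

```javascript
import * as THREE from 'three';

// Decode the image off the main thread via createImageBitmap().
const loader = new THREE.ImageBitmapLoader();
loader.setOptions({ imageOrientation: 'flipY' });

const imageBitmap = await loader.loadAsync('diffuse.png'); // placeholder URL
const texture = new THREE.CanvasTexture(imageBitmap);

// Upload to the GPU now, rather than stalling on first render.
renderer.initTexture(texture);
```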
</details>

<h2 id="specialized-gpu-texture-formats-etc-astc-bc17-pvrtc">Specialized GPU texture formats (ETC, ASTC, BC1–7, PVRTC)</h2>
<p>To avoid these issues, GPUs are designed with built-in support for specialized image formats, called “GPU texture formats.” In these formats, pixel data <em>remains compressed on the GPU</em>, and each pixel is decompressed (with no meaningful performance cost) each time a shader samples the texture. No decompression is required before GPU upload, memory cost is reduced by 4–8x, and uploads complete 4–8x faster.</p>
<p>Inconveniently, different platforms and GPUs require different GPU texture formats. Historically, game developers supporting multiple platforms have shipped a different compressed texture format — ETC, ASTC, BC1–7, or PVRTC — for each platform.</p>
<p>While this strategy would also work on the web, it can be burdensome for individual developers and small teams, and these textures tend to produce larger files. And while GPU texture formats can provide excellent quality with the right format choices and compression settings, they&#39;re much less forgiving about configuration and quality than traditional image formats. Careful tuning of compression parameters should be expected.</p>
<h2 id="universal-gpu-texture-formats-basis-etc1s-and-uastc">Universal GPU texture formats (Basis ETC1S and UASTC)</h2>
<p>Texture compression became far more practical in 2019, when <a href="https://www.binomial.info/">Binomial</a> (in <a href="https://opensource.googleblog.com/2019/05/google-and-binomial-partner-to-open.html">partnership with Google</a>) released the open source <a href="https://github.com/BinomialLLC/basis_universal">Basis Universal</a> family of compression codecs. These codecs are designed to be efficiently transcoded (not decompressed) at runtime into a variety of GPU texture formats. Any modern device receiving Basis Universal textures can load and display them efficiently.<sup>[<a href="#footnote-2">2</a>]</sup></p>
<p>The Basis Universal family includes two codecs:</p>
<ul>
<li><strong>ETC1S:</strong> Also called “BasisLZ.” Low/medium quality, with small file sizes comparable to JPEG. Default choice in most cases. ETC1S works well on color textures, but not as well on data textures like normal maps.</li>
<li><strong>UASTC:</strong> Higher quality, comparable to BC7. Quality is sufficient for both color and data textures, including normal maps. UASTC data is larger than ETC1S, but benefits from an additional pass of lossless compression (“supercompression”). Most KTX2 compression tools apply Zstandard compression<sup>[<a href="#footnote-3">3</a>]</sup> on top of UASTC, producing files 1–2x larger than JPEG or ETC1S textures of the same resolution.</li>
</ul>
<p>ETC1S and UASTC are low-level codecs, not file formats. To create files using either codec, use the <a href="https://registry.khronos.org/KTX/specs/2.0/ktxspec.v2.html">KTX2</a> file format.<sup>[<a href="#footnote-4">4</a>]</sup> KTX2 is a thin container, carrying required metadata like texture dimensions and color spaces along with compressed or uncompressed pixel data. KTX2 supports Basis Universal codecs and many GPU texture formats.</p>
<p>Like GPU texture formats, Basis Universal formats require more careful compression settings than traditional image formats. The same options will not work for all models, but they do generalize per material slot. For example, one set of compression options works well for diffuse color textures, while another works well for normal maps.</p>
<p>To get started with KTX2 and Basis Universal, see the <a href="https://github.com/KhronosGroup/3D-Formats-Guidelines/blob/main/KTXArtistGuide.md"><em>KTX2 Artist Guide</em></a> and <a href="https://github.com/KhronosGroup/KTX-Software">KTX-Software project</a> from the Khronos Group.</p>
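<p>For example, compressing textures with KTX-Software&#39;s <code>toktx</code> tool might look like the following (a sketch; flag names are from recent KTX-Software releases and filenames are placeholders, so check <code>toktx --help</code> for your version):</p>

```shell
# ETC1S: small files, a good default for color textures
toktx --t2 --genmipmap --encode etc1s color.ktx2 color.png

# UASTC + Zstandard supercompression: higher quality, better for normal maps
toktx --t2 --genmipmap --encode uastc --zcmp 18 normal.ktx2 normal.png
```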
<h2 id="how-to-choose">How to choose</h2>
<p>When <a href="https://web.dev/articles/fcp">First Contentful Paint</a> is the top priority, minimize file size by using modern image formats like WebP and AVIF. Simple 2D/3D visuals, like a landing page graphic, will probably fall into this category.</p>
<p>When users will spend more time with the application, or the application is loading a lot of textures interactively, then the advantages of KTX2 textures with Basis Universal compression are significant. In this situation, KTX2 should often be preferred over traditional image formats.</p>
<p>When it&#39;s most important for textures to be compatible with many different applications, or to preserve original image quality, stick to JPEG and PNG. Libraries of reusable textures and 3D models are good examples of this situation.</p>
<details>
    <summary><i>glTF 3D models and image compatibility</i></summary>

<p>All glTF-compliant software supports JPEG and PNG textures. WebP support requires that a glTF loader implement the <a href="https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Vendor/EXT_texture_webp/README.md"><code>EXT_texture_webp</code></a> extension. AVIF support remains an early <a href="https://github.com/KhronosGroup/glTF/pull/2235">proposal to glTF</a>. Support for KTX2 with Basis Universal requires the <code>KHR_texture_basisu</code> extension. All of these are implemented by three.js&#39;s <code>THREE.GLTFLoader</code>, but web browser support for WebP and AVIF is also required, and varies:</p>
<ul>
<li><a href="https://caniuse.com/webp">WebP browser support</a></li>
<li><a href="https://caniuse.com/avif">AVIF browser support</a></li>
</ul>
<p>Install the <a href="https://gltf-transform.dev/cli">glTF Transform CLI</a> to apply any of these compression methods to existing glTF 3D scenes.</p>
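<p>For example, to compress an existing scene&#39;s textures to KTX2 with the glTF Transform CLI (filenames are placeholders):</p>

```shell
# ETC1S for color textures (smaller files)
gltf-transform etc1s input.glb output.glb

# or UASTC for higher quality, including normal maps
gltf-transform uastc input.glb output.glb
```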
</details>

<details>
    <summary><i>LDR vs. HDR texture formats</i></summary>

<p>Formats discussed throughout this article support values in the range 0–1, and (for our purposes) are limited to 8-bit precision. These “LDR” formats are appropriate for most texture roles in physically-based materials, including albedo/diffuse/base color, normal maps, occlusion, roughness, metalness, and almost anything else.</p>
<p>Certain texture roles benefit from having values &gt;1, especially environment maps or image-based lighting (“IBL”). Light maps and emissive maps are often in this category as well. These are commonly referred to as high dynamic range (“HDR”) textures.</p>
<p>Common HDR formats include OpenEXR and Radiance HDR, which support higher precision and lossless compression. They&#39;re commonly used in VFX and film, but must be used sparingly on the web due to their larger size.</p>
<p>KTX2 supports HDR data, but the Basis Universal codecs do not. Available HDR formats in KTX2 are either uncompressed on the GPU (like RGBA16 or RGBA32) or supported only on a subset of devices (like BC6H). Tooling for BC6H compression is mostly targeted at game developers, and is unfortunately non-trivial for most web developers to add to their build and optimization systems.</p>
<p>No universal GPU texture format supports HDR data today. Using HDR formats in WebGL and WebGPU remains fairly difficult to optimize, and no clear best practice exists. When HDR textures are required, limit their resolution as much as possible. If you know of easy-to-use tools for creating BC6H textures, or better alternatives, please reach out!</p>
</details>

<p>For more help choosing among common texture formats, refer to <a href="https://github.com/KhronosGroup/3D-Formats-Guidelines/blob/main/KTXArtistGuide.md#gltf-texture-formats">glTF Texture Formats</a> from the <em>KTX2 Artist Guide</em>.</p>
<hr>
<p><small><a class="footnote" name="footnote-1"><sup>1</sup></a> Often we use SVGs and icon fonts instead of images, too. But that&#39;s less applicable with 3D graphics APIs, and outside the scope of this article.</small></p>
<p><small><a class="footnote" name="footnote-2"><sup>2</sup></a> Basis Universal transcoders select a target GPU texture format based on the current device&#39;s capabilities. The specific target format chosen is generally an implementation detail, but if you&#39;re looking for those details, see the Khronos Group&#39;s KTX2 <a href="https://github.com/KhronosGroup/3D-Formats-Guidelines/blob/main/KTXDeveloperGuide.md#transcode-target-selection-rgb-and-rgba">WebGL Developer Guide</a>.</small></p>
<p><small><a class="footnote" name="footnote-3"><sup>3</sup></a> Loaders can decode Zstandard supercompression off the main thread, and UASTC pixel data is still 4x smaller than JPEG or PNG, so Zstandard doesn&#39;t significantly reduce the memory benefits of Basis Universal formats.</small></p>
<p><small><a class="footnote" name="footnote-4"><sup>4</sup></a> When working with Basis Universal compression, you may see <code>.basis</code> file extensions. This is a similar container format to KTX2, specifically for Basis Universal, and the compressed texture data will be identical. KTX2 is the standardized container format, provides additional texture formats and supercompression, and is required to use Basis Universal in glTF files.</small></p>
]]></description><link>https://www.donmccurdy.com/2024/02/11/web-texture-formats/</link><guid isPermaLink="true">https://www.donmccurdy.com/2024/02/11/web-texture-formats/</guid><pubDate>Sun, 11 Feb 2024 00:00:00 GMT</pubDate></item><item><title><![CDATA[Apps defaults in 2024]]></title><description><![CDATA[<p>Lists of apps are <a href="https://defaults.rknight.me/">sweeping the nation</a>! Here&#39;s mine.</p>
<h2 id="basics">Basics</h2>
<ul>
<li>📬 <strong>Mail Client</strong>: <a href="https://sparkmailapp.com/">Spark Mail</a></li>
<li>📮 <strong>Mail Server</strong>: <a href="https://fastmail.com/">FastMail</a> and <a href="https://www.google.com/gmail/about/">Gmail</a></li>
<li>🗒️ <strong>Notes</strong>: <a href="https://obsidian.md/">Obsidian</a> (slowly migrating from <a href="https://www.notion.so/">Notion</a>)</li>
<li>✅ <strong>To-Do</strong>: <a href="https://moleskinestudio.com/actions/">Moleskine Actions</a></li>
<li>🌅 <strong>Photo Management</strong>: <a href="https://www.adobe.com/products/photoshop-lightroom.html">Adobe Lightroom</a></li>
<li>📆 <strong>Calendar</strong>: <a href="https://moleskinestudio.com/timepage/">Moleskine Timepage</a></li>
<li>📦 <strong>Cloud file storage</strong>: iCloud and Google Drive</li>
<li>📖 <strong>RSS</strong>: <a href="https://readwise.io/">Readwise Reader</a></li>
<li>👥 <strong>Contacts</strong>: Apple Contacts</li>
<li>🌐 <strong>Browser</strong>: <a href="https://arc.net/">Arc</a></li>
<li>💬 <strong>Chat</strong>: too many things</li>
<li>🔖 <strong>Bookmarks</strong>: <a href="https://readwise.io/">Readwise Reader</a></li>
<li>📑 <strong>Read It Later</strong>: <a href="https://readwise.io/">Readwise Reader</a></li>
<li>📝 <strong>Word Processing</strong>: <a href="https://ia.net/writer">iA Writer</a></li>
<li>📈 <strong>Spreadsheets</strong>: Google Sheets</li>
<li>📊 <strong>Presentations</strong>: <a href="https://ia.net/presenter">iA Presenter</a></li>
<li>🛒 <strong>Shopping Lists</strong>: <a href="https://moleskinestudio.com/actions/">Moleskine Actions</a></li>
<li>🥘 <strong>Meal Planning</strong>: <a href="https://www.paprikaapp.com/">Paprika</a></li>
<li>💸 <strong>Budgeting &amp; Personal Finance</strong>: <a href="https://soulver.app/">Soulver</a> and Google Sheets</li>
<li>🗞️ <strong>News</strong>: <a href="https://www.nytimes.com/">The New York Times</a>, <a href="https://restofworld.org/">Rest of World</a>, and RSS</li>
<li>🎹 <strong>Music</strong>: <a href="https://bandcamp.com/">Bandcamp</a> and <a href="https://brushedtype.co/doppler/">Doppler</a></li>
<li>🎙️ <strong>Podcasts</strong>: <a href="https://overcast.fm/">Overcast</a></li>
<li>🔑 <strong>Password Management</strong>: <a href="https://1password.com/">1Password</a></li>
</ul>
<p><em>Prompts above are from the <a href="https://defaults.rknight.me/">App Defaults</a> template, with icons from <a href="https://maique.eu/2023/11/03/defaults.html">micro maique</a>.</em></p>
<h2 id="other">Other</h2>
<ul>
<li><strong>Backups:</strong> <a href="https://www.backblaze.com/">Backblaze</a> and macOS Time Machine (<a href="https://www.donmccurdy.com/2019/05/18/hard-drive-backups/">my setup</a>)</li>
<li><strong>Code Editor</strong>: <a href="https://code.visualstudio.com/">VSCode</a> and <a href="https://www.sublimetext.com/">Sublime Text</a></li>
<li><strong>Git</strong>: <a href="https://graphite.dev/">Graphite</a></li>
<li><strong>Graphic Design</strong>: <a href="https://affinity.serif.com/en-us/designer/">Affinity Designer</a></li>
<li><strong>Maps:</strong> <a href="https://transitapp.com/">Transit</a> and Google Maps</li>
<li><strong>Music Organization</strong>: <a href="https://www.nightbirdsevolve.com/meta/">Meta</a> and <a href="https://mp3tag.app/">Mp3Tag</a></li>
<li><strong>Social Media</strong>: <a href="https://tapbots.com/ivory/">Ivory</a> for <a href="https://joinmastodon.org/">Mastodon</a></li>
<li><strong>Terminal</strong>: <a href="https://iterm2.com/">iTerm2</a></li>
<li><strong>Web Hosting</strong>: <a href="https://vercel.com/home">Vercel</a> and Google Cloud Storage</li>
</ul>
<p><em>Prompts here are my own additions.</em></p>
<p>Thanks for reading! If you have questions or want to start a fight over any of these, <a href="https://fosstodon.org/@donmccurdy">I&#39;m on Mastodon</a>.</p>
]]></description><link>https://www.donmccurdy.com/2024/01/05/apps-defaults-in-2024/</link><guid isPermaLink="true">https://www.donmccurdy.com/2024/01/05/apps-defaults-in-2024/</guid><pubDate>Fri, 05 Jan 2024 00:00:00 GMT</pubDate></item><item><title><![CDATA[Joining CARTO]]></title><description><![CDATA[<p>Some personal news: I&#39;m joining <a href="https://carto.com/">CARTO</a>!</p>
<p>Founded in 2012 as “CartoDB”, CARTO originally made open source tools for building maps on <a href="https://postgis.net/">PostGIS</a>. The product has since evolved well beyond that — integrating with data warehouses including Google BigQuery, Amazon Redshift, Snowflake, and Databricks — and offers a suite of tools and applications for geospatial analysis, visualization, and data science.</p>
<p>As a software developer at CARTO, my work impacts first-party applications like <a href="https://carto.com/builder">CARTO Builder</a>, and popular open source libraries for large-scale geospatial visualization, including <a href="https://deck.gl/">deck.gl</a> and <a href="https://luma.gl/">luma.gl</a>.</p>
]]></description><link>https://www.donmccurdy.com/2023/12/08/joining-carto/</link><guid isPermaLink="true">https://www.donmccurdy.com/2023/12/08/joining-carto/</guid><pubDate>Fri, 08 Dec 2023 00:00:00 GMT</pubDate></item><item><title><![CDATA[Generating glTF files programmatically]]></title><description><![CDATA[<p>While it&#39;s far more common to create 3D models in artist-friendly software like <a href="https://www.blender.org/">Blender</a>, there are times when we want to create a model in code. For example, when converting from one 3D file format to another, or when creating geometry with <a href="https://en.wikipedia.org/wiki/Procedural_generation">procedural generation</a>.</p>
<p>This will be a short, technical post explaining how to create a glTF 3D model programmatically, using the glTF Transform (<a href="https://gltf-transform.dev/">gltf-transform.dev</a>) JavaScript library. The library weighs about 20kb minzipped, and is much smaller than a full 3D engine like three.js or babylon.js.</p>
<p>First, we&#39;re going to need some vertex data! glTF supports points, lines, and <a href="https://en.wikipedia.org/wiki/Triangle_mesh">triangle meshes</a>. The details here depend entirely on what you&#39;re trying to create, but let&#39;s start with a torus (a.k.a. “donut”). three.js has <a href="https://github.com/mrdoob/three.js/tree/dev/src/geometries">open-source implementations of many common 3D shapes</a>, and I&#39;ve copied the relevant section of <code>THREE.TorusGeometry</code> below, removing dependencies on three.js itself.</p>
<pre><code class="hljs javascript"><span class="hljs-comment">// MIT License. Copyright © 2010-2023 three.js authors.</span>

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">createTorus</span>(<span class="hljs-params">radius = <span class="hljs-number">1</span>, tube = <span class="hljs-number">0.4</span>, radialSegments = <span class="hljs-number">12</span>, tubularSegments = <span class="hljs-number">48</span>, arc = Math.PI * <span class="hljs-number">2</span></span>) </span>{

    <span class="hljs-keyword">const</span> indicesArray = [];
    <span class="hljs-keyword">const</span> positionArray = [];
    <span class="hljs-keyword">const</span> uvArray = [];

    <span class="hljs-keyword">const</span> vertex = [<span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>];

    <span class="hljs-comment">// generate positions and uvs</span>
    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> j = <span class="hljs-number">0</span>; j &lt;= radialSegments; j++) {
        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">0</span>; i &lt;= tubularSegments; i++) {
            <span class="hljs-keyword">const</span> u = (i / tubularSegments) * arc;
            <span class="hljs-keyword">const</span> v = (j / radialSegments) * <span class="hljs-built_in">Math</span>.PI * <span class="hljs-number">2</span>;

            <span class="hljs-comment">// position</span>
            vertex[<span class="hljs-number">0</span>] = (radius + tube * <span class="hljs-built_in">Math</span>.cos(v)) * <span class="hljs-built_in">Math</span>.cos(u);
            vertex[<span class="hljs-number">1</span>] = (radius + tube * <span class="hljs-built_in">Math</span>.cos(v)) * <span class="hljs-built_in">Math</span>.sin(u);
            vertex[<span class="hljs-number">2</span>] = tube * <span class="hljs-built_in">Math</span>.sin(v);
            positionArray.push(vertex[<span class="hljs-number">0</span>], vertex[<span class="hljs-number">1</span>], vertex[<span class="hljs-number">2</span>]);

            <span class="hljs-comment">// uv</span>
            uvArray.push(i / tubularSegments);
            uvArray.push(j / radialSegments);
        }
    }

    <span class="hljs-comment">// generate indices</span>
    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> j = <span class="hljs-number">1</span>; j &lt;= radialSegments; j++) {
        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> i = <span class="hljs-number">1</span>; i &lt;= tubularSegments; i++) {
            <span class="hljs-comment">// indices</span>

            <span class="hljs-keyword">const</span> a = (tubularSegments + <span class="hljs-number">1</span>) * j + i - <span class="hljs-number">1</span>;
            <span class="hljs-keyword">const</span> b = (tubularSegments + <span class="hljs-number">1</span>) * (j - <span class="hljs-number">1</span>) + i - <span class="hljs-number">1</span>;
            <span class="hljs-keyword">const</span> c = (tubularSegments + <span class="hljs-number">1</span>) * (j - <span class="hljs-number">1</span>) + i;
            <span class="hljs-keyword">const</span> d = (tubularSegments + <span class="hljs-number">1</span>) * j + i;

            <span class="hljs-comment">// faces</span>

            indicesArray.push(a, b, d);
            indicesArray.push(b, c, d);
        }
    }

    <span class="hljs-keyword">return</span> { indicesArray, positionArray, uvArray };
}</code></pre>

<p>The <code>createTorus()</code> function above provides arrays of vertex positions and UVs, and indices for triangles connecting those vertices. Replace all this with your own geometry, if you want.</p>
<p>Next we need to assemble the vertex data into a glTF 2.0 file. First we&#39;ll create a new glTF Transform <a href="https://gltf-transform.dev/modules/core/classes/Document">document</a>, and a <a href="https://gltf-transform.dev/modules/core/classes/Buffer">buffer</a> to store our data.</p>
<pre><code class="hljs javascript"><span class="hljs-keyword">import</span> { Document } <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/core'</span>;

<span class="hljs-keyword">const</span> <span class="hljs-built_in">document</span> = <span class="hljs-keyword">new</span> Document();
<span class="hljs-keyword">const</span> buffer = <span class="hljs-built_in">document</span>.createBuffer();</code></pre>

<p>Next we&#39;ll take the vertex data we generated above, and use it to create a <a href="https://gltf-transform.dev/modules/core/classes/Mesh">mesh</a>.</p>
<pre><code class="hljs javascript"><span class="hljs-keyword">const</span> { indicesArray, positionArray, uvArray } = createTorus();

<span class="hljs-comment">// indices and vertex attributes</span>
<span class="hljs-keyword">const</span> indices = <span class="hljs-built_in">document</span>
    .createAccessor()
    .setArray(<span class="hljs-keyword">new</span> <span class="hljs-built_in">Uint16Array</span>(indicesArray))
    .setType(<span class="hljs-string">'SCALAR'</span>)
    .setBuffer(buffer);
<span class="hljs-keyword">const</span> position = <span class="hljs-built_in">document</span>
    .createAccessor()
    .setArray(<span class="hljs-keyword">new</span> <span class="hljs-built_in">Float32Array</span>(positionArray))
    .setType(<span class="hljs-string">'VEC3'</span>)
    .setBuffer(buffer);
<span class="hljs-keyword">const</span> texcoord = <span class="hljs-built_in">document</span>
    .createAccessor()
    .setArray(<span class="hljs-keyword">new</span> <span class="hljs-built_in">Float32Array</span>(uvArray))
    .setType(<span class="hljs-string">'VEC2'</span>)
    .setBuffer(buffer);

<span class="hljs-comment">// material</span>
<span class="hljs-keyword">const</span> material = <span class="hljs-built_in">document</span>.createMaterial()
    .setBaseColorHex(<span class="hljs-number">0xD96459</span>)
    .setRoughnessFactor(<span class="hljs-number">1</span>)
    .setMetallicFactor(<span class="hljs-number">0</span>);

<span class="hljs-comment">// primitive and mesh</span>
<span class="hljs-keyword">const</span> prim = <span class="hljs-built_in">document</span>
    .createPrimitive()
    .setMaterial(material)
    .setIndices(indices)
    .setAttribute(<span class="hljs-string">'POSITION'</span>, position)
    .setAttribute(<span class="hljs-string">'TEXCOORD_0'</span>, texcoord);
<span class="hljs-keyword">const</span> mesh = <span class="hljs-built_in">document</span>.createMesh(<span class="hljs-string">'MyMesh'</span>)
    .addPrimitive(prim);</code></pre>

<p>While glTF <em>can</em> be used to store a mesh all by itself, it&#39;s far more common to place the mesh within a default scene. This ensures everything shows up as expected in various 3D viewers. We&#39;ll add the mesh to a node, and put that node in a scene. If we had multiple meshes, we might assign different position/rotation/scale to each node.</p>
<pre><code class="hljs javascript"><span class="hljs-keyword">const</span> node = <span class="hljs-built_in">document</span>.createNode(<span class="hljs-string">'MyNode'</span>)
    .setMesh(mesh)
    .setTranslation([<span class="hljs-number">0</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>]);

<span class="hljs-keyword">const</span> scene = <span class="hljs-built_in">document</span>.createScene(<span class="hljs-string">'MyScene'</span>)
    .addChild(node);</code></pre>

<p>Finally, we&#39;re ready to save the result as a new glTF file. I/O depends on the environment where we&#39;re running the code, so select the appropriate option below.</p>
<details>
<summary>Node.js</summary>

<pre><code class="hljs javascript"><span class="hljs-keyword">import</span> { NodeIO } <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/core'</span>;

<span class="hljs-keyword">const</span> io = <span class="hljs-keyword">new</span> NodeIO();
<span class="hljs-keyword">await</span> io.write(<span class="hljs-string">'./torus.glb'</span>, <span class="hljs-built_in">document</span>);</code></pre>
</details>

<details>
<summary>Deno</summary>

<pre><code class="hljs javascript"><span class="hljs-keyword">import</span> { DenoIO } <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/core'</span>;

<span class="hljs-keyword">const</span> io = <span class="hljs-keyword">new</span> DenoIO();
<span class="hljs-keyword">await</span> io.write(<span class="hljs-string">'./torus.glb'</span>, <span class="hljs-built_in">document</span>);</code></pre>
</details>

<details>
<summary>Web</summary>

<pre><code class="hljs javascript"><span class="hljs-keyword">import</span> { WebIO } <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/core'</span>;

<span class="hljs-keyword">const</span> io = <span class="hljs-keyword">new</span> WebIO();
<span class="hljs-keyword">const</span> bytes = <span class="hljs-keyword">await</span> io.writeBinary(<span class="hljs-built_in">document</span>); <span class="hljs-comment">// → Uint8Array</span></code></pre>
</details>

<p>That&#39;s it — you&#39;ve got a shiny new glTF 2.0 file:</p>
<script type="module" async src="/assets/lib/model-viewer.min.js"></script>
<p><model-viewer src="/assets/models/0033-Torus.glb"
    alt="Low-poly 3D donut, shaded in salmon pink."
    camera-controls
    style="--poster-color: transparent; width: 100%; height: 400px;"></model-viewer></p>
<p>The glTF Transform library, as the name hints, can do a lot more than <em>create</em> new files. It&#39;s more often used for <em>optimizing</em> existing glTF files, and we could do the same here by <a href="https://gltf-transform.dev/modules/extensions/classes/KHRDracoMeshCompression">adding Draco compression</a> to our model. Consider that an exercise for the reader.</p>
<p>Thanks for reading, and please <a href="https://fosstodon.org/@donmccurdy">reach out</a> with any questions!</p>
]]></description><link>https://www.donmccurdy.com/2023/08/01/generating-gltf/</link><guid isPermaLink="true">https://www.donmccurdy.com/2023/08/01/generating-gltf/</guid><pubDate>Tue, 01 Aug 2023 00:00:00 GMT</pubDate></item><item><title><![CDATA[Healthy expectations in open source]]></title><description><![CDATA[<p>Most open source projects choose licenses from a common list,<sup>[<a href="#footnote-1">1</a>]</sup> while support and maintenance expectations are not standardized. Usually those expectations are not stated at all.</p>
<p>That&#39;s a loss — not because maintainers of free software owe anyone commitments, but because the unstated status quo presumes this support. Maintainers will consciously understand that no such obligation exists, but still feel a pressure to provide indefinite and unsustainable support. And that dynamic contributes to an extractive open source ecosystem.</p>
<h2 id="a-series-of-kindnesses">A series of kindnesses</h2>
<p>While open source can be funded well and sustainably maintained, it usually isn&#39;t: valuable work is created in someone&#39;s free time, given away, and used in commercial and non-commercial pursuits alike, without compensation or reciprocation.</p>
<p><em>Open source is “a series of kindnesses</em>,”<sup>[<a href="#footnote-2">2</a>]</sup> and maintainers are under no obligation to provide support. <em>We are not suppliers</em>.<sup>[<a href="#footnote-3">3</a>]</sup> Writing software and giving it away is a kindness. Answering a question is a kindness. Fixing a bug is a kindness. Building a new feature and contributing it back to a project is a kindness. Accepting a pull request, then maintaining that code indefinitely, is a kindness.</p>
<p>I&#39;ve been wondering: would open source work better for maintainers if expectations on support were stated up front? The clarity could directly benefit maintainers and users, and studies based on these statements could spark conversations about compensation and business models in open source.</p>
<h2 id="categories">Categories</h2>
<p>I&#39;m aware of few attempts to categorize open source lifecycles, and no detailed studies.<sup>[<a href="#footnote-4">4</a>]</sup> I have not found any formal definition of open source support, or of its varying degrees. So I&#39;ve made the attempt here, for myself.</p>
<p>My categorization covers three areas: <em>Maturity</em>, <em>Development</em>, and <em>Support</em>. Each is value-neutral. We&#39;d all prefer more support than less from projects we depend on, but can&#39;t disparage the choice to give one&#39;s time and work away, or the boundaries of the gift.</p>
<p>Projects that afford incentives and income to maintainers tend to provide more in all three categories. While it&#39;s not uncommon for high-quality open source projects to begin as a labor of love, sustaining development and support over a long period is difficult without meaningful support for maintainers, and such projects inevitably shift into less-active modes.</p>
<h3 id="maturity">Maturity</h3>
<p><em>Status or phase in project&#39;s lifecycle. Indicative of maturity, quality, and stability.</em></p>
<ul>
<li><strong>Stable:</strong> Project achieves its stated goals with a high level of quality and reliability. While certain areas of the project may remain experimental, the project as a whole is more mature, and the authors have some degree of confidence in its correctness and stability.</li>
<li><strong>Experimental:</strong> Exploration, idea generation, early development, or research. May solve a particular use case for the maintainer, while remaining unstable for broader use. Creation of experimental software does not necessarily represent a commitment to bring that software to maturity.</li>
<li><strong>Deprecated:</strong> Previously stable or experimental projects may become deprecated, indicating that a project&#39;s use is no longer recommended. Deprecation usually implies inactive development and support. Causes for deprecation may include lack of time and incentives for development and support, degradation in the codebase, or wider changes in the software ecosystem.</li>
</ul>
<h3 id="development">Development</h3>
<p><em>Current level of maintainer activity on the project.</em></p>
<ul>
<li><strong>Active:</strong> Frequent contributions toward feature and maintenance goals.</li>
<li><strong>Maintenance:</strong> Bugs, compatibility issues, and security risks are addressed periodically, but new feature development is not expected. Projects may enter maintenance development due to limited developer time and incentives, or because the project is feature-complete within its intended scope.</li>
<li><strong>Unmaintained:</strong> Neither new features nor maintenance are planned. Most projects cannot remain both unmaintained and stable indefinitely, as dependencies shift and bitrot accumulates. Unmaintained status is common for older experimental projects, and implicit for archived projects.</li>
</ul>
<h3 id="support">Support</h3>
<p><em>Level of support (for bug reports, feature requests, and help tickets<sup>[<a href="#footnote-5">5</a>]</sup>) provided to users of the project.</em></p>
<ul>
<li><strong>Active:</strong> Project provides a place to file tickets. Maintainers welcome these discussions, and intend to read and respond regularly.<sup>[<a href="#footnote-6">6</a>]</sup></li>
<li><strong>Limited</strong>: Project provides a place to file tickets. Maintainers welcome these discussions, but cannot review tickets regularly. Delays should be expected, and some tickets may not receive a response.</li>
<li><strong>Self-service:</strong> Project is provided as-is. Maintainers decline to provide support. Questions should be directed to Stack Overflow or similar websites.</li>
</ul>
<p>There&#39;s room for exploration in <em>Support</em> — the three levels above are not exhaustive! Find creative ways to support open source work. Maybe reserve help tickets for noncommercial users, sponsors, and paying customers.<sup>[<a href="#footnote-7">7</a>]</sup> Consider selling SLAs. More experimentation on open source business models would be a healthy thing.<sup>[<a href="#footnote-8">8</a>]</sup></p>
<h2 id="applying-the-definitions">Applying the definitions</h2>
<p>I&#39;m in the process of reviewing my own open source projects, and plan to categorize each project in the three areas above. Badges may be created with <a href="https://shields.io/">Shields.io</a>, and a project&#39;s README may include details or exceptions.</p>
<p>Examples:</p>
<p><img alt="maturity: stable" src="https://img.shields.io/badge/maturity-stable-blue" style="display: inline-block;"> <img alt="maturity: experimental" src="https://img.shields.io/badge/maturity-experimental-yellow" style="display: inline-block;"> <img alt="maturity: deprecated" src="https://img.shields.io/badge/maturity-deprecated-red" style="display: inline-block;"></p>
<p><img alt="development: active" src="https://img.shields.io/badge/development-active-blue" style="display: inline-block;"> <img alt="development: maintenance" src="https://img.shields.io/badge/development-maintenance-yellow" style="display: inline-block;"> <img alt="development: unmaintained" src="https://img.shields.io/badge/development-unmaintained-red" style="display: inline-block;"></p>
<p><img alt="support: active" src="https://img.shields.io/badge/support-active-blue" style="display: inline-block;"> <img alt="support: limited" src="https://img.shields.io/badge/support-limited-yellow" style="display: inline-block;"> <img alt="support: self-service" src="https://img.shields.io/badge/support-self--service-orange" style="display: inline-block;"></p>
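<p>In a project&#39;s README, the same badges can be embedded with standard Markdown image syntax, using the Shields.io URLs shown above:</p>

```markdown
![maturity: stable](https://img.shields.io/badge/maturity-stable-blue)
![development: maintenance](https://img.shields.io/badge/development-maintenance-yellow)
![support: limited](https://img.shields.io/badge/support-limited-yellow)
```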
<p>I&#39;m curious how these definitions resonate with open source maintainers, and users of open source software. Are the categories missing something important? Is something gained — or lost<sup>[<a href="#footnote-9">9</a>]</sup> — in stating a support policy for an open source project? I am active <a href="https://fosstodon.org/@donmccurdy">on Mastodon</a> these days, and welcome questions and ideas.</p>
<hr>
<p><small><a class="footnote" name="footnote-1"><sup>1</sup></a> <a href="https://spdx.org/licenses/"><em>The Software Package Data Exchange (SPDX)</em></a>. I&#39;m no longer sure whether this is a good thing. On one hand, SPDX encourages corporations to use open licenses, and prevents license-compatibility nightmares. On the other hand, SPDX takes away tools that could benefit indie developers, like non-commercial licenses. SPDX does not generally grant these licenses identifiers for use in package registries.</small></p>
<p><small><a class="footnote" name="footnote-2"><sup>2</sup></a> <a href="https://snarky.ca/setting-expectations-for-open-source-participation/#everythinginopensourceshouldbeaseriesofkindnesses"><em>Setting expectations for open source participation</em></a>, by Brett Cannon.</small></p>
<p><small><a class="footnote" name="footnote-3"><sup>3</sup></a> <a href="https://www.softwaremaxims.com/blog/not-a-supplier"><em>I am not a supplier</em></a>, by Thomas Depierre.</small></p>
<p><small><a class="footnote" name="footnote-4"><sup>4</sup></a> Inspired by the Linux Foundation&#39;s <a href="https://lfx.linuxfoundation.org/blog/the-life-cycles-of-open-source-projects/"><em>The Life Cycles of Open Source Projects</em></a> and Formidable&#39;s <a href="https://formidable.com/blog/2019/oss-maintenance/"><em>OSS Maintenance Levels</em></a>.</small></p>
<p><small><a class="footnote" name="footnote-5"><sup>5</sup></a> I&#39;ve intentionally avoided distinguishing between “bugs”, “feature requests”, and “support”. The difference is often known only to a project&#39;s maintainers, and accepting one kind of support ticket means dealing with them all.</small></p>
<p><small><a class="footnote" name="footnote-6"><sup>6</sup></a> I&#39;ve intentionally avoided defining “regularly”. This is a hint about maintainer intention — not an obligation. Fixed support schedules are not a reasonable expectation, outside the context of a paid Service Level Agreement (SLA).</small></p>
<p><small><a class="footnote" name="footnote-7"><sup>7</sup></a> OSI&#39;s <a href="https://opensource.org/osd/"><em>The Open Source Definition</em></a> states — controversially — that licenses containing restrictions against “fields of endeavour” cannot be considered open source. Even so, maintainers can certainly be selective about time spent on support.</small></p>
<p><small><a class="footnote" name="footnote-8"><sup>8</sup></a> See <a href="https://indieopensource.com/">Indie Open Source</a>.</small></p>
<p><small><a class="footnote" name="footnote-9"><sup>9</sup></a> One real fear I have is that maintainers will — wanting others to choose their software — promise levels of support they cannot sustain, that users will prefer those projects until their maintainers burn out, and that the status quo will deepen. Meaningful incentives are required to sustain support, and problems of support and compensation are closely linked.</small></p>
]]></description><link>https://www.donmccurdy.com/2023/07/03/expectations-in-open-source/</link><guid isPermaLink="true">https://www.donmccurdy.com/2023/07/03/expectations-in-open-source/</guid><pubDate>Mon, 03 Jul 2023 00:00:00 GMT</pubDate></item><item><title><![CDATA[Publishing WebAssembly modules with AssemblyScript]]></title><description><![CDATA[<h2 id="introduction">Introduction</h2>
<p><a href="https://webassembly.org/">WebAssembly</a> (WASM) is a compilation target for other programming languages. It runs in many computing environments, and it runs fast. Typically a WebAssembly module begins life as C/C++, Rust, or AssemblyScript source code. A toolchain compiles the source code to a .wasm binary, and that binary is loaded into environments like web browsers and Node.js.</p>
<p><a href="https://www.assemblyscript.org/">AssemblyScript</a> — heavily inspired by <a href="https://www.typescriptlang.org/">TypeScript</a> — will be more approachable to web developers than C/C++ or Rust, and we&#39;ll focus on AssemblyScript here.</p>
<p>This post explains how to set up a new AssemblyScript project, compile that project as a WebAssembly module, write unit tests, and publish the package to npm. We&#39;ll spend some time on compatibility, so that the same npm package works well in web pages, Node.js, and <a href="https://eloquentjavascript.net/10_modules.html#h_zWTXAU93DC">bundlers</a>.</p>
<p>If you&#39;re new to AssemblyScript, reading <a href="https://www.assemblyscript.org/introduction.html#from-a-webassembly-perspective">an introduction to the language</a> may be worthwhile. While I&#39;ve claimed above that WebAssembly is fast, new code still needs to be optimized, and this requires thinking about issues that web developers typically don&#39;t encounter. Read Surma&#39;s excellent post, <em><a href="https://surma.dev/things/js-to-asc/index.html">Is WebAssembly magic performance pixie dust?</a></em>, for an introduction to optimizing AssemblyScript code.</p>
<h2 id="setting-up-a-new-project">Setting up a new project</h2>
<blockquote>
<p><strong>NOTE:</strong> This section mirrors AssemblyScript&#39;s own tutorials. If my post falls out of date, prefer the <a href="https://www.assemblyscript.org/getting-started.html#setting-up-a-new-project">official documentation</a>.</p>
</blockquote>
<p>With a <a href="https://nodejs.org/">recent version of Node.js</a> installed, run the following steps in a new directory:</p>
<pre><code class="hljs bash"><span class="hljs-comment"># Initialize an empty npm package.</span>
npm init

<span class="hljs-comment"># Install the AssemblyScript compiler.</span>
npm install --save-dev assemblyscript

<span class="hljs-comment"># Generate AssemblyScript project structure</span>
npx asinit .</code></pre>

<p>The last step creates several folders and files in the project directory. These are worth mentioning:</p>
<ul>
<li><code>./assembly/index.ts</code> — Our <em>AssemblyScript</em> entry point. Functions exported here will be available through our WASM module.</li>
<li><code>./build/</code> — WASM modules and JavaScript bindings will be compiled into this folder. Don&#39;t hand-edit these files. You may include them in Git, but I typically don&#39;t.</li>
<li><code>./asconfig.json</code> — Configuration for the AssemblyScript compiler.</li>
</ul>
<p>The files below can be deleted. We&#39;ll set up more helpful unit tests later.</p>
<ul>
<li><code>./tests/index.js</code></li>
<li><code>./index.html</code></li>
</ul>
<p>Then, create a <code>src/</code> folder and two empty source files, which we&#39;ll use to load our WASM module and create a clean TypeScript API around it. If you&#39;d prefer to use JavaScript instead of TypeScript, feel free.</p>
<ul>
<li><code>./src/index.ts</code></li>
<li><code>./src/wasm.ts</code></li>
</ul>
<h2 id="compile-assemblyscript-to-webassembly">Compile AssemblyScript to WebAssembly</h2>
<p>The default <code>assembly/index.ts</code> defines a simple function adding two integers:</p>
<pre><code class="hljs typescript"><span class="hljs-comment">// The entry file of your WebAssembly module.</span>

<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">add</span>(<span class="hljs-params">a: i32, b: i32</span>): <span class="hljs-title">i32</span> </span>{
    <span class="hljs-keyword">return</span> a + b;
}</code></pre>
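<p>Note that <code>i32</code> is a 32-bit signed integer, not a JavaScript <code>number</code>: WebAssembly&#39;s <code>i32.add</code> wraps on overflow. JavaScript&#39;s <code>| 0</code> coercion reproduces the same behavior, which is handy for sanity-checking results. The <code>addI32</code> helper below is illustrative, not part of the generated code:</p>

```typescript
// Mimics WebAssembly i32 addition in plain JavaScript: "| 0" truncates
// the result to a 32-bit signed integer, wrapping on overflow.
function addI32(a: number, b: number): number {
  return (a + b) | 0;
}

console.log(addI32(2, 2));          // → 4
console.log(addI32(2147483647, 1)); // → -2147483648 (wraps past INT32_MAX)
```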

<p>Let&#39;s leave that alone and compile it. An <code>asbuild</code> script is already defined in our <code>package.json</code> file, which we can execute:</p>
<pre><code class="hljs bash">npm run asbuild</code></pre>

<p>The compiler builds our WebAssembly binaries and JavaScript bindings, and puts release and debug builds in the <code>build/</code> folder. Open the release bindings in <code>build/release.js</code>, and you&#39;ll see something like this toward the end of the file:</p>
<pre><code class="hljs javascript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> { ... } = <span class="hljs-keyword">await</span> (<span class="hljs-keyword">async</span> url =&gt; instantiate(
    <span class="hljs-keyword">await</span> (<span class="hljs-keyword">async</span> () =&gt; {
        <span class="hljs-keyword">try</span> { <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> globalThis.WebAssembly.compileStreaming(globalThis.fetch(url)); }
        <span class="hljs-keyword">catch</span> { <span class="hljs-keyword">return</span> globalThis.WebAssembly.compile(<span class="hljs-keyword">await</span> (<span class="hljs-keyword">await</span> <span class="hljs-keyword">import</span>(<span class="hljs-string">"node:fs/promises"</span>)).readFile(url)); }
    })(), {
    }
))(<span class="hljs-keyword">new</span> URL(<span class="hljs-string">"release.wasm"</span>, <span class="hljs-keyword">import</span>.meta.url));</code></pre>

<p>If we only wanted to drop these files into a web or Node.js application, this might be fine. But when publishing a package for npm, there are a few problems above. While an application might happily ignore the <code>import(&quot;node:fs/promises&quot;)</code> code unless it hits the catch block, our users&#39; <em>bundlers</em> might not. Moreover, older bundlers don&#39;t support <code>import.meta.url</code>, and cannot manage static binary resources.</p>
<p>We&#39;ll dive deeper into both of those compatibility issues in a later section, but for now let&#39;s just strip out the problematic bindings by changing the AssemblyScript configuration. In <code>asconfig.json</code>, set <code>&quot;bindings&quot;: &quot;raw&quot;</code> under the compiler options:</p>
<pre><code class="hljs json">{
    ...
    "options": {
        "bindings": "raw"
    }
}</code></pre>

<p>Now run &quot;<code>npm run asbuild</code>&quot; again, and the problematic parts of our <code>build/release.js</code> bindings should be gone.</p>
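<p>With raw bindings, <code>build/release.js</code> exposes a small surface that we&#39;ll import in the next section. Roughly the following (a sketch inferred from how the module is imported later in this post; the exact generated code varies by AssemblyScript version):</p>

```typescript
// Approximate surface of build/release.js with "bindings": "raw".
// This is a sketch, not the literal generated code.
export declare const __AdaptedExports: {
  add(a: number, b: number): number;
};
export declare function instantiate(
  module: BufferSource,
  imports?: WebAssembly.Imports
): Promise<typeof __AdaptedExports>;
```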
<h2 id="writing-the-wrapper">Writing the wrapper</h2>
<p>Wrapping AssemblyScript&#39;s output with hand-written code isn&#39;t strictly necessary, but I&#39;d encourage it when publishing the package to npm. Wrapping allows us to:</p>
<ol>
<li>Make the package easier to import for our users</li>
<li>Display documentation with JSDoc</li>
<li>Provide more specific TypeScript hints</li>
<li>Perform input validation before data is passed into the WASM module<sup>[<a href="#footnote-1">1</a>]</sup></li>
<li>Implement features that would be difficult or slow in WASM<sup>[<a href="#footnote-2">2</a>]</sup></li>
</ol>
<p>WebAssembly and its various toolchains will improve, and I expect this list to be shorter in a few years.</p>
<p>Let&#39;s start by writing a simple wrapper that supports only Node.js, and in the next section we&#39;ll adapt it for other environments. Without modifying <code>./assembly/tsconfig.json</code> (which affects our AssemblyScript compilation), we&#39;ll add a new file, <code>./tsconfig.json</code>, for our TypeScript wrapper. Replace &quot;my-package&quot; with whatever package name you prefer.</p>
<pre><code class="hljs json">{
    <span class="hljs-attr">"compilerOptions"</span>: {
        <span class="hljs-attr">"paths"</span>: {
            <span class="hljs-attr">"my-package"</span>: [<span class="hljs-string">"./"</span>]
        },
        <span class="hljs-attr">"typeRoots"</span>: [<span class="hljs-string">"node_modules/@types"</span>],
        <span class="hljs-attr">"moduleResolution"</span>: <span class="hljs-string">"node"</span>,
        <span class="hljs-attr">"module"</span>: <span class="hljs-string">"ESNext"</span>,
        <span class="hljs-attr">"lib"</span>: [<span class="hljs-string">"ESNext"</span>, <span class="hljs-string">"DOM"</span>],
        <span class="hljs-attr">"target"</span>: <span class="hljs-string">"ESNext"</span>,
        <span class="hljs-attr">"declaration"</span>: <span class="hljs-literal">true</span>,
        <span class="hljs-attr">"strict"</span>: <span class="hljs-literal">true</span>
    },
}</code></pre>

<p>In <code>src/wasm.ts</code>, write:</p>
<pre><code class="hljs typescript"><span class="hljs-comment">// src/wasm.ts — Placeholder WASM loader.</span>
<span class="hljs-keyword">import</span> { readFile } <span class="hljs-keyword">from</span> <span class="hljs-string">'node:fs/promises'</span>;
<span class="hljs-keyword">const</span> wasmURL = <span class="hljs-comment">/* #__PURE__ */</span> <span class="hljs-keyword">new</span> URL(<span class="hljs-string">'./release.wasm'</span>, <span class="hljs-keyword">import</span>.meta.url);
<span class="hljs-keyword">const</span> wasm = <span class="hljs-comment">/* #__PURE__ */</span> readFile(wasmURL);
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> wasm;</code></pre>

<p>Here, we&#39;re loading the WASM binary from a TypeScript module<sup>[<a href="#footnote-3">3</a>]</sup> we define, which we&#39;ll override for different environments later. The <code>/* #__PURE__ */</code> annotations allow bundlers to strip away (or &quot;tree-shake&quot;) our WASM binary if the application never uses it. While optional, this will be important if our package has multiple WASM modules, or becomes an optional dependency of other downstream packages.</p>
<p>In <code>src/index.ts</code> (<em>not</em> <code>assembly/index.ts</code>), add the wrapper:</p>
<pre><code class="hljs typescript"><span class="hljs-comment">// src/index.ts</span>
<span class="hljs-keyword">import</span> wasm <span class="hljs-keyword">from</span> <span class="hljs-string">'./wasm'</span>;
<span class="hljs-keyword">import</span> { instantiate, __AdaptedExports } <span class="hljs-keyword">from</span> <span class="hljs-string">'../build/release'</span>;

<span class="hljs-comment">// Initialization.</span>

<span class="hljs-keyword">let</span> exports: <span class="hljs-keyword">typeof</span> __AdaptedExports;

<span class="hljs-comment">/* Promise resolving when WebAssembly is ready. */</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> ready = <span class="hljs-comment">/* #__PURE__ */</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt;(<span class="hljs-keyword">async</span> (resolve, reject) =&gt; {
    <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">const</span> <span class="hljs-built_in">module</span> = <span class="hljs-keyword">await</span> WebAssembly.compile(<span class="hljs-keyword">await</span> wasm);
        exports = <span class="hljs-keyword">await</span> instantiate(<span class="hljs-built_in">module</span> <span class="hljs-keyword">as</span> BufferSource, {});
        resolve();
    } <span class="hljs-keyword">catch</span> (e) {
        reject(e);
    }
});

<span class="hljs-comment">// Wrapper API.</span>

<span class="hljs-comment">/**
 * Returns the sum of two 32-bit signed integers.
 * @param a Integer in the range -2,147,483,648 to +2,147,483,647.
 * @param b Integer in the range -2,147,483,648 to +2,147,483,647.
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">function</span> <span class="hljs-title">add</span>(a: <span class="hljs-built_in">number</span>, b: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">number</span> {
    <span class="hljs-keyword">if</span> (!exports) {
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">'WebAssembly not yet initialized: await "ready" export.'</span>);
    }
    <span class="hljs-keyword">return</span> exports.add(a, b);
}</code></pre>

<p>The <code>instantiate</code> function provided by the AssemblyScript compiler takes the WASM binary and returns an instantiated, ready-to-use WASM module. Our wrapper includes any documentation, type definitions, and input validation we&#39;ll want for the public API.</p>
<p>Loading and instantiating the WASM module can take a while, and we don&#39;t want to block the entire application during that time. We&#39;ll export a <code>ready</code> promise, and users of our package should await that promise:</p>
<pre><code class="hljs typescript"><span class="hljs-keyword">import</span> { ready, add } <span class="hljs-keyword">from</span> <span class="hljs-string">'my-package'</span>;

<span class="hljs-keyword">await</span> ready;
add(<span class="hljs-number">2</span>, <span class="hljs-number">2</span>); <span class="hljs-comment">// → 4</span></code></pre>
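<p>If <code>add</code> is called before initialization completes, the guard in our wrapper throws rather than failing obscurely inside WebAssembly. A minimal re-creation of that guard in isolation (with <code>exportsRef</code> standing in for the module-level <code>exports</code> variable):</p>

```typescript
// Re-creates the wrapper's "not yet initialized" guard in isolation.
let exportsRef: { add(a: number, b: number): number } | undefined;

function add(a: number, b: number): number {
  if (!exportsRef) {
    throw new Error('WebAssembly not yet initialized: await "ready" export.');
  }
  return exportsRef.add(a, b);
}

try {
  add(2, 2); // called before the WASM module is ready
} catch (e) {
  console.log((e as Error).message);
  // → WebAssembly not yet initialized: await "ready" export.
}
```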

<h2 id="bundling-the-package">Bundling the package</h2>
<p>With our TypeScript wrapper written, we&#39;re ready to bundle and test the package. We&#39;ll use a few more dependencies. The <code>typescript</code> and <code>ts-node</code> dependencies can be skipped if you&#39;re writing plain JavaScript. The test framework, <code>ava</code>, is optional. Use another test framework that supports TypeScript and ES Modules<sup>[<a href="#footnote-4">4</a>]</sup> if you prefer, or none at all.</p>
<pre><code class="hljs plaintext">npm install --save-dev microbundle ava typescript ts-node</code></pre>

<p>I strongly recommend using <a href="https://github.com/developit/microbundle">Microbundle</a> — an opinionated <a href="https://rollupjs.org/">Rollup</a> wrapper — for reasons that will be apparent when we start adapting the package for different environments. While it isn&#39;t a common bundler for building web <em>applications</em>, it&#39;s an exceptional choice for building <em>libraries</em>.</p>
<p>In <code>package.json</code>, under the <code>scripts</code> section, we&#39;ll define our build and test commands:</p>
<pre><code class="hljs json">{
    "scripts": {
        "build": "npm run asbuild &amp;&amp; npm run build:node",
        "build:node": "microbundle build --target node --format modern,cjs --raw --no-compress --output build/my-package-node.js --external node:fs/promises",
        "test": "ava test/*.test.ts",
        ...
    },

    ...</code></pre>

<p>Define the entry points to the compiled package, as our users and unit tests will need that information. Remove any existing versions of these entries in <code>package.json</code>, then add the entries below:</p>
<pre><code class="hljs json">{
    "name": "my-package",
    "type": "module",
    "sideEffects": false,
    "types": "./build/index.d.ts",
    "exports": "./build/my-package-node.modern.js",

    ...</code></pre>

<p>Careful readers may notice a small mismatch in our <code>exports</code> path and the <code>--output</code> flag given to Microbundle. That&#39;s intentional: the <code>.modern.js</code> suffix is added by Microbundle automatically.</p>
<p>When we build the package, our script will compile the AssemblyScript module and then the TypeScript wrapper to the <code>build/</code> directory.</p>
<pre><code class="hljs bash">npm run build</code></pre>

<p>We now have a compiled JavaScript bundle in <code>./build/my-package-node.modern.js</code>, type declarations in <code>./build/index.d.ts</code>, and a WASM binary in <code>./build/release.wasm</code>. Let&#39;s test the code.</p>
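<p>The generated <code>build/index.d.ts</code> should declare our wrapper&#39;s public API, roughly as follows (a sketch; the exact output depends on your TypeScript version):</p>

```typescript
// Approximate contents of build/index.d.ts, generated from src/index.ts.
export declare const ready: Promise<void>;
export declare function add(a: number, b: number): number;
```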
<h2 id="testing-the-package">Testing the package</h2>
<blockquote>
<p><strong>NOTICE:</strong> <em>As of this writing, Ava needs <a href="https://github.com/avajs/ava/blob/main/docs/recipes/typescript.md#enabling-avas-support-for-typescript-test-files">additional configuration</a> to support ES Modules and TypeScript together. In <code>package.json</code>, add the options below:</em></p>
<pre><code class="language-json">{
    &quot;ava&quot;: {
        &quot;extensions&quot;: {
            &quot;ts&quot;: &quot;module&quot;
        },
        &quot;nodeArguments&quot;: [
            &quot;--loader=ts-node/esm&quot;
        ]
    },

    ...</code></pre>
</blockquote>
<p>We&#39;ll write a simple unit test, <code>test/index.test.ts</code>:</p>
<pre><code class="hljs typescript"><span class="hljs-comment">// index.test.ts</span>
<span class="hljs-keyword">import</span> test <span class="hljs-keyword">from</span> <span class="hljs-string">'ava'</span>;
<span class="hljs-keyword">import</span> { ready, add } <span class="hljs-keyword">from</span> <span class="hljs-string">'my-package'</span>;

test(<span class="hljs-string">'add'</span>, <span class="hljs-keyword">async</span> (t) =&gt; {
    <span class="hljs-keyword">await</span> ready;
    t.is(add(<span class="hljs-number">2</span>, <span class="hljs-number">2</span>), <span class="hljs-number">4</span>, <span class="hljs-string">'2 + 2 = 4'</span>);
    t.is(add(<span class="hljs-number">5</span>, <span class="hljs-number">0</span>), <span class="hljs-number">5</span>, <span class="hljs-string">'5 + 0 = 5'</span>);
    t.is(add(<span class="hljs-number">10</span>, <span class="hljs-number">-3</span>), <span class="hljs-number">7</span>, <span class="hljs-string">'10 - 3 = 7'</span>);
});</code></pre>

<p>Then, run the test.</p>
<pre><code class="hljs bash">npm <span class="hljs-built_in">test</span></code></pre>

<p>With any luck, the test should pass!<sup>[<a href="#footnote-5">5</a>]</sup></p>
<h2 id="compatibility-concerns">Compatibility concerns</h2>
<p>If you&#39;ve made it this far, nice work! You may be wondering why this blog post couldn&#39;t have been written with half the words and half the dependencies. Here&#39;s why.</p>
<p>Our package is now working in a Node.js test environment, but it&#39;s not going to run in a web browser, and could easily cause failures in our users&#39; build systems. These environments have incompatible requirements, and our challenge here is to provide builds that work for everyone without creating a maintenance nightmare for ourselves.</p>
<p>Replacing the <code>src/wasm.ts</code> loader with a stub, we&#39;re going to create three new loaders and a build target for each:</p>
<pre><code class="hljs typescript"><span class="hljs-comment">// src/wasm.ts — Stub, replaced at compile-time.</span>
<span class="hljs-keyword">const</span> wasm = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Uint8Array</span>(<span class="hljs-number">0</span>);
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> wasm;</code></pre>

<pre><code class="hljs typescript"><span class="hljs-comment">// src/wasm-compat.ts — Legacy bundler WASM loader.</span>
<span class="hljs-keyword">const</span> WASM_BASE64 = <span class="hljs-string">''</span>;
<span class="hljs-keyword">const</span> wasm = <span class="hljs-comment">/* #__PURE__ */</span> fetch(<span class="hljs-string">'data:application/wasm;base64,'</span> + WASM_BASE64)
    .then((response) =&gt; response.arrayBuffer());
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> wasm;</code></pre>

<pre><code class="hljs typescript"><span class="hljs-comment">// src/wasm-node.ts — Node.js WASM loader.</span>
<span class="hljs-keyword">import</span> { readFile } <span class="hljs-keyword">from</span> <span class="hljs-string">'node:fs/promises'</span>;
<span class="hljs-keyword">const</span> wasmURL = <span class="hljs-comment">/* #__PURE__ */</span> <span class="hljs-keyword">new</span> URL(<span class="hljs-string">'./release.wasm'</span>, <span class="hljs-keyword">import</span>.meta.url);
<span class="hljs-keyword">const</span> wasm = <span class="hljs-comment">/* #__PURE__ */</span> readFile(wasmURL);
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> wasm;</code></pre>

<pre><code class="hljs typescript"><span class="hljs-comment">// src/wasm-default.ts — Web &amp; modern bundler WASM loader.</span>
<span class="hljs-keyword">const</span> wasm = <span class="hljs-comment">/* #__PURE__ */</span> fetch(<span class="hljs-comment">/* #__PURE__ */</span> <span class="hljs-keyword">new</span> URL(<span class="hljs-string">'./release.wasm'</span>, <span class="hljs-keyword">import</span>.meta.url))
    .then((response) =&gt; response.arrayBuffer());
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> wasm;</code></pre>

<p>We&#39;ll extend our <code>package.json#exports</code> entries to list each of these options:</p>
<pre><code class="hljs json">{
    "type": "module",
    "sideEffects": false,
    "source": "./src/index.ts",
    "types": "./build/index.d.ts",
    "main": "./build/my-package-node.cjs",
    "module": "./build/my-package-default.modern.js",
    "exports": {
        "compat": {
            "types": "./build/index.d.ts",
            "require": "./build/my-package-compat.cjs",
            "default": "./build/my-package-compat.modern.js"
        },
        "node": {
            "types": "./build/index.d.ts",
            "require": "./build/my-package-node.cjs",
            "default": "./build/my-package-node.modern.js"
        },
        "default": {
            "types": "./build/index.d.ts",
            "require": "./build/my-package-default.cjs",
            "default": "./build/my-package-default.modern.js"
        }
    },

    ...</code></pre>

<p>And finally, update <code>package.json#scripts</code> to create the necessary builds:</p>
<pre><code class="hljs json">{
    "scripts": {
        "build": "npm run asbuild &amp;&amp; npm run build:compat &amp;&amp; npm run build:node &amp;&amp; npm run build:default",
        "build:compat": "microbundle build --target web --format modern,cjs --raw --no-compress --no-sourcemap --output build/my-package-compat.js --external ./release.wasm --alias ./wasm=./wasm-compat --define WASM_BASE64=`base64 -i build/release.wasm`",
        "build:node": "microbundle build --target node --format modern,cjs --raw --no-compress --no-sourcemap --output build/my-package-node.js --external node:fs/promises --alias ./wasm=./wasm-node",
        "build:default": "microbundle build --target web --format modern,cjs --raw --no-compress --output build/my-package-default.js --external ./release.wasm --alias ./wasm=./wasm-default",
        "clean": "rm build/*",
        ...
    },

    ...</code></pre>

<blockquote>
<p><strong>NOTICE:</strong> <em>If you chose to use JavaScript instead of TypeScript, the <code>--alias</code> flag should include &quot;.js&quot; extensions.</em></p>
</blockquote>
<p>Running &quot;<code>npm run build</code>&quot; now, Microbundle compiles each variation into the <code>build/</code> folder. Inspect <code>my-package-*.modern.js</code> and notice that each variation is bundled into a single JavaScript file containing the appropriate WASM loader.</p>
<p>Node.js, Web applications, and modern bundlers should find the right version of the package by default. Users with older bundlers should explicitly import from <code>my-package/compat</code> to get a self-contained module, with the WASM binary embedded as a base64 string. This embedding adds +33% file size and some parsing overhead, so it&#39;s just a fallback for these older tools.</p>
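<p>The 33% figure follows directly from how base64 works: every 3 input bytes become 4 output characters. A quick check, using the standard <code>btoa</code> function (available in browsers and in Node.js 16+):</p>

```typescript
// Base64 maps every 3 input bytes to 4 output characters, so an embedded
// binary grows by a factor of about 4/3 (roughly 33%).
const input = 'abcdef';                     // 6 bytes of ASCII input
const encoded = btoa(input);                // 8 characters of base64 output

console.log(encoded);                       // → YWJjZGVm
console.log(encoded.length / input.length); // → 1.333…
```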
<p>We now have a fully-functional package, ready for npm, with basic support for all major JavaScript platforms. If the package will require different implementations or functionality on each platform, the same approach can be extended without further duplication.</p>
<p>Complete code from this tutorial is available on GitHub:</p>
<ul>
<li><a href="https://github.com/donmccurdy/assemblyscript-library-template">donmccurdy/assemblyscript-library-template</a></li>
</ul>
<p>The articles below offer a deeper look at writing and optimizing AssemblyScript code:</p>
<ul>
<li><em><a href="https://www.fastly.com/blog/porting-javascript-or-typescript-to-assemblyscript">Porting JavaScript (or TypeScript) to AssemblyScript</a></em></li>
<li><em><a href="https://surma.dev/things/js-to-asc/index.html">Is WebAssembly magic performance pixie dust?</a></em></li>
</ul>
<p>Thanks for reading, and feel free to <a href="https://fosstodon.org/@donmccurdy">reach out</a> with any questions!</p>
<hr>
<p><small><a class="footnote" name="footnote-1"><sup>1</sup></a> Not a security concern, but error handling in WASM is sometimes opaque, and tends to increase the size of the WASM binary more than equivalent error handling in JavaScript would require.</small></p>
<p><small><a class="footnote" name="footnote-2"><sup>2</sup></a> As of this writing, DOM manipulation or calling external JavaScript dependencies would be common examples.</small></p>
<p><small><a class="footnote" name="footnote-3"><sup>3</sup></a> Or JavaScript modules. Add &quot;.js&quot; extensions to your import statements, if so.</small></p>
<p><small><a class="footnote" name="footnote-4"><sup>4</sup></a> As much as I wish that support for ES Modules and TypeScript were table stakes for a testing framework, my dear reader, it is not.</small></p>
<p><small><a class="footnote" name="footnote-5"><sup>5</sup></a> You may see a warning, <em>&quot;ExperimentalWarning: Custom ESM Loaders is an experimental feature&quot;</em>. Node.js support for anything related to ES Modules continues to not be great, and we&#39;re including CommonJS fallbacks in our published package to deal with that. We&#39;ll ignore this warning from the test suite though, and hope for better support in upcoming Node.js versions.</small></p>
]]></description><link>https://www.donmccurdy.com/2023/05/24/publishing-webassembly-modules-with-assemblyscript/</link><guid isPermaLink="true">https://www.donmccurdy.com/2023/05/24/publishing-webassembly-modules-with-assemblyscript/</guid><pubDate>Wed, 24 May 2023 00:00:00 GMT</pubDate></item><item><title><![CDATA[Improving API documentation in TypeScript (Part 1)]]></title><description><![CDATA[<p>The JavaScript and TypeScript ecosystems represent the largest community of software developers working professionally today.<sup>[<a href="#footnote-1">1</a>]</sup> The prevailing package repository, <a href="https://www.npmjs.com/">npm</a>, is the largest such registry for any language.<sup>[<a href="#footnote-2">2</a>]</sup> While I’d prefer to say authors of TypeScript packages enjoy similarly-mature tools for writing and sharing API documentation, the real situation is more complicated.</p>
<p>In this article I’ll discuss gaps in current documentation tools for TypeScript libraries, and where I hope the TypeScript ecosystem might go from here.</p>
<details>

<summary>What is API documentation?</summary>

<p>API documentation is reference material helping other software developers use a software library. Libraries evolve over time, so it’s common to write documentation in source files alongside code, as comments in <a href="https://jsdoc.app/">JSDoc</a> or <a href="https://tsdoc.org/">TSDoc</a> syntax. In addition to handwritten explanations, API references generally include technical details (property types, method signatures, …) of the library’s API, which can be inferred from the source code by tooling.</p>
<p>In smaller projects, API documentation may be contained in a <a href="https://github.com/donmccurdy/three-to-cannon#api">single Markdown file</a>. In larger projects, documentation may require a <a href="https://gltf-transform.donmccurdy.com">complete website</a>, which often becomes both more complex to maintain and more necessary to onboard new users and contributors.</p>
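<p>As a brief illustration, a TSDoc comment pairs a handwritten summary with a signature that tooling can analyze. The function below is hypothetical:</p>

```typescript
interface Widget {
  type: string;
  gizmoCount: number;
}

/**
 * Creates a widget. The summary and `@param` descriptions are handwritten;
 * parameter and return types can be inferred from the code by tooling.
 *
 * @param widgetType - Kind of widget to create.
 * @param gizmoCount - Number of gizmos to attach.
 */
function createWidget(widgetType: string, gizmoCount = 4): Widget {
  return { type: widgetType, gizmoCount };
}
```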
</details>

<details>

<summary>What is an API model?</summary>

<p>API models are declarative representations of the public surfaces (classes, methods, properties, …) in a library. The model provides information about parameters and return types, without any dependency on the internal implementation.</p>
<p>Given a simple TypeScript function,</p>
<pre><code class="hljs typescript"><span class="hljs-keyword">function</span> createWidget(widgetType: <span class="hljs-built_in">string</span>, gizmoCount = <span class="hljs-number">4</span>): Widget {
  <span class="hljs-comment">// ...</span>
}</code></pre>

<p>… a hypothetical API model might look like this:</p>
<pre><code>{
  name: 'createWidget',
  params: [
    {name: 'widgetType', type: 'string'},
    {name: 'gizmoCount', type: 'number', optional: true, defaultValue: 4}
  ],
  returns: {type: 'Widget'}
}</code></pre>

<p>API models for a full library will be considerably more complex, but can be subdivided into chunks like the one above, and formatted as HTML or Markdown.</p>
</details>
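<p>To make the idea concrete, here is a hedged TypeScript sketch of how such a model could drive a trivial Markdown template. All type and function names below are hypothetical, not from any existing tool:</p>

```typescript
// Hypothetical API model types; a real model would cover classes,
// properties, generics, and more.
interface ParamModel {
  name: string;
  type: string;
  optional?: boolean;
  defaultValue?: unknown;
}

interface FunctionModel {
  name: string;
  params: ParamModel[];
  returns: { type: string };
}

// Render one chunk of the model as a Markdown heading.
function renderMarkdown(fn: FunctionModel): string {
  const params = fn.params
    .map((p) => `${p.name}${p.optional ? '?' : ''}: ${p.type}`)
    .join(', ');
  return `### ${fn.name}(${params}) → ${fn.returns.type}`;
}

const createWidget: FunctionModel = {
  name: 'createWidget',
  params: [
    { name: 'widgetType', type: 'string' },
    { name: 'gizmoCount', type: 'number', optional: true, defaultValue: 4 },
  ],
  returns: { type: 'Widget' },
};

console.log(renderMarkdown(createWidget));
// → ### createWidget(widgetType: string, gizmoCount?: number) → Widget
```

<p>With a stable, typed model as input, templates like this could be written for any web framework, independently of the tool that extracted the model.</p>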

<h2 id="whats-missing">What’s missing</h2>
<p>Existing TypeScript documentation tools<sup>[<a href="#footnote-3">3</a>]</sup> <sup>[<a href="#footnote-4">4</a>]</sup> <sup>[<a href="#footnote-5">5</a>]</sup> share two fundamental limitations.</p>
<ol>
<li><em>No standard, stable, serializable, and type-safe representation of an API model exists.</em></li>
<li><em>Without such an API model, mature and flexible formatting tools cannot exist.</em></li>
</ol>
<p>TypeScript needs a representation of an API model that is type-safe and <a href="https://semver.org/">semantically versioned</a>. The model should provide enough information to render formatted API documentation with simple templates, and should allow a developer to “bring your own framework”, without being tied to a particular web stack.</p>
<p>Creating such an API model doesn’t necessarily mean throwing out current tools — parsing TypeScript code to construct an API model is pretty complicated. While they don’t offer this today, existing projects like API Extractor or TypeDoc might be used to produce a generic API model with these characteristics.</p>
<!-- The information required to determine the full API of a single class may be spread across many files, with inheritance, generics, type aliases, namespaces, and imports from external libraries all contributing contribute to the final API of a class. -->

<p>But it’s unrealistic to expect maintainers of projects like API Extractor and TypeDoc to produce formatting tools compatible with many web frameworks. Add requirements of flexible styling for small and large projects, and that’s an open-ended, ecosystem-sized task. Without a stable API model that independent projects can depend on, it’s not going to happen.</p>
<p>This wishlist could go on. Documentation appearing alongside source code in GitHub and GitLab repositories. Better integrations with IDEs and command lines. Internationalization (<a href="https://web.dev/learn/design/internationalization/">i18n</a>) support for libraries with global communities. Live examples running alongside API documentation. These are not impossible goals today, but they’re much harder than they need to be.</p>
<h2 id="next-steps">Next steps</h2>
<p>TypeScript needs better ways to represent packaged APIs. Those could be built on existing tools, like API Extractor. From there, adding API documentation into websites built and deployed with <em>[your favorite web framework]</em> becomes a whole lot easier.</p>
<p>I have ideas about where to go from here, but it’s a large project and I’d love to hear from others: Does this describe your experience with TypeScript API documentation? Have I missed anything important about the problem? Would you like to be involved?</p>
<p>Reach out, or look for <em>Part 2</em> in a future post.</p>
<hr>
<p><small><a class="footnote" name="footnote-1"><sup>1</sup></a> Source: <a href="https://survey.stackoverflow.co/2022/#section-worked-with-vs-want-to-work-with-programming-scripting-and-markup-languages">Stack Overflow Developer Survey</a>, 2022.</small></p>
<p><small><a class="footnote" name="footnote-2"><sup>2</sup></a> Source: <a href="https://www.linux.com/news/state-union-npm/">State of the Union: npm</a>, 2017.</small></p>
<p><small><a class="footnote" name="footnote-3"><sup>3</sup></a> <a href="https://typedoc.org/">TypeDoc</a>: The generated JSON API model seems intended for use by the theming system (untyped; built on Handlebars), and available themes are a mixed bag. Moreover, I’ve experienced serious quality control issues here. The project began in 2014 and remains pre-1.0.0, maintained on weekends and without any obvious financial sponsorship (which I think it both needs and deserves). TypeDoc is the most popular tool for documentation in TypeScript, but it’s not easy to recommend without adding major caveats.</small></p>
<p><small><a class="footnote" name="footnote-4"><sup>4</sup></a> <a href="http://documentation.js.org/">documentation.js</a>: I’ve enjoyed using documentation.js for smaller JavaScript libraries, and would fully recommend it in that context. However, its TypeScript support is not really advertised, and appears to be limited to JavaScript-compatible constructs: no generics, type aliases, etc.</small></p>
<p><small><a class="footnote" name="footnote-5"><sup>5</sup></a> <a href="https://api-extractor.com/pages/setup/generating_docs/">API Extractor</a>: Part of Microsoft’s Rush Stack, API Extractor is the “heavy artillery” on this list. Though more difficult to configure than others, it appears well designed and well maintained. The API model (packaged separately as <a href="https://www.npmjs.com/package/@microsoft/api-extractor-model">@microsoft/api-extractor-model</a>) is mostly a 1:1 reflection of the source code, and would require further processing to use in an HTML template. Inherited members and generics are not resolved on child classes, for example, and JSDoc comments are unparsed. HTML formatting is mostly left as an exercise for the reader. Still, the extractor and other components would be great foundations for other tools.</small></p>
]]></description><link>https://www.donmccurdy.com/2022/12/03/improving-api-documentation-in-typescript-pt-1/</link><guid isPermaLink="true">https://www.donmccurdy.com/2022/12/03/improving-api-documentation-in-typescript-pt-1/</guid><pubDate>Sat, 03 Dec 2022 00:00:00 GMT</pubDate></item><item><title><![CDATA[Converting glTF PBR materials from spec/gloss to metal/rough]]></title><description><![CDATA[<h2 id="contents">Contents</h2>
<ul>
<li><a href="#background">Background</a></li>
<li><a href="#updates-for-threejs">Updates for three.js</a></li>
<li><a href="#converting-specgloss-to-metalrough">Converting spec/gloss to metal/rough</a></li>
</ul>
<h2 id="background">Background</h2>
<p>When the Khronos Group ratified glTF 2.0 in 2017, the format included a PBR shading model based on the metallic roughness (“metal/rough”) workflow, and an extension (<a href="https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Archived/KHR_materials_pbrSpecularGlossiness/README.md"><code>KHR_materials_pbrSpecularGlossiness</code></a>) for the specular glossiness (“spec/gloss”) workflow.<sup>[<a href="#footnote-1">1</a>]</sup> Differences between the metal/rough and spec/gloss workflows are mostly subtle.<sup>[<a href="#footnote-2">2</a>]</sup> Broadly, the metal/rough workflow tends to be slightly more memory-efficient and performance-friendly for realtime, and the spec/gloss workflow provides slightly more artistic control.</p>
<p>In the following years, new glTF extensions — <a href="https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_ior/README.md"><code>KHR_materials_ior</code></a> and <a href="https://github.com/KhronosGroup/glTF/blob/main/extensions/2.0/Khronos/KHR_materials_specular/README.md"><code>KHR_materials_specular</code></a> — bridged the gap between these PBR workflows. By bringing artistic control over dielectric F0 into metal/rough, they give artists more flexibility. At the same time, engines can enable these features conditionally and keep the performance advantages of the metal/rough workflow by default.</p>
<p>The metal/rough workflow has understandably become the more popular of the two since then, even outside of the glTF ecosystem. glTF 2.0 has continued to evolve, launching an excellent <a href="https://github.com/KhronosGroup/glTF/tree/main/extensions#extensions-for-gltf-20">series of advanced PBR features</a>: clearcoat, thin-surface transmission, volumetric transmission, iridescence, sheen, and more. These extensions have been widely adopted by 3D viewers and engines, improving PBR capabilities on many platforms.</p>
<p>Bringing nascent PBR features into a single, well-defined standard is complex: complex to define clearly and fully, and complex to implement correctly and efficiently. Requiring that these features support two PBR workflows, and wrangling the entire ecosystem to implement everything twice, would have been painful for everyone involved.</p>
<p>Thankfully, Khronos didn’t do that. Recent PBR features have been designed for the metal/rough workflow, and in late 2021 the <code>KHR_materials_pbrSpecularGlossiness</code> extension was archived with the statement below:</p>
<blockquote>
<p><em>Archived extensions may be useful for reading older glTF files, but they are no longer recommended for creating new files.</em></p>
</blockquote>
<h2 id="updates-for-threejs">Updates for three.js</h2>
<p>With <a href="https://github.com/mrdoob/three.js/">three.js</a> r147, in November 2022, support for the extension <code>KHR_materials_pbrSpecularGlossiness</code> will be dropped. This allows important cleanup for <a href="https://threejs.org/docs/index.html?q=gltf#examples/en/loaders/GLTFLoader"><code>THREE.GLTFLoader</code></a>. three.js has always used metal/rough as its PBR shading model, and the loader has historically needed to build a custom ShaderMaterial to support spec/gloss materials.</p>
<p>Most glTF 2.0 files already use metal/rough, and in those cases no changes are needed. However, certain sources<sup>[<a href="#footnote-3">3</a>]</sup> may provide spec/gloss glTF downloads for some files. When that happens, three.js users will need to convert their models from the spec/gloss workflow to metal/rough.</p>
<h2 id="converting-specgloss-to-metalrough">Converting spec/gloss to metal/rough</h2>
<p>Fortunately, conversion from spec/gloss to metal/rough is lossless in any engine that supports <code>KHR_materials_specular</code> and <code>KHR_materials_ior</code> extensions.<sup>[<a href="#footnote-4">4</a>]</sup> Using the <a href="https://gltf-transform.donmccurdy.com/">glTF Transform</a> library, here are three ways to convert existing assets from the spec/gloss workflow.</p>
<h3 id="web">Web</h3>
<p>Go to <a href="https://gltf.report/">gltf.report/</a> and drag your model into the viewer. You’ll see a warning that the materials are about to be converted to metal/rough. After that, just download the result with the <em>Export</em> button on the right hand side. That’s it!</p>
<h3 id="commandline">Commandline</h3>
<p>With <a href="https://nodejs.org/">Node.js ≥14 installed</a>, open a terminal and run the following commands:</p>
<pre><code class="hljs bash"><span class="hljs-comment"># install</span>
npm install --global @gltf-transform/cli

<span class="hljs-comment"># convert</span>
gltf-transform metalrough input.glb output.glb</code></pre>

<p>These steps will convert any spec/gloss materials in the file to metal/rough.</p>
<h3 id="script">Script</h3>
<p>If you’re using JavaScript or TypeScript to process glTF models as part of a Node.js server or frontend web application, then you can use glTF Transform’s scripting API to do this conversion. After choosing the <a href="https://gltf-transform.donmccurdy.com/classes/core.platformio.html">appropriate platform I/O</a> service, run:</p>
<pre><code class="hljs typescript"><span class="hljs-keyword">import</span> { WebIO, NodeIO } <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/core'</span>;
<span class="hljs-keyword">import</span> { KHRONOS_EXTENSIONS } <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/extensions'</span>;
<span class="hljs-keyword">import</span> { metalRough } <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/functions'</span>;

<span class="hljs-comment">// configure I/O (choose one, depending on platform)</span>
<span class="hljs-keyword">const</span> io = <span class="hljs-keyword">new</span> WebIO().registerExtensions(KHRONOS_EXTENSIONS); <span class="hljs-comment">// web</span>
<span class="hljs-comment">// const io = new NodeIO().registerExtensions(KHRONOS_EXTENSIONS); // node.js</span>

<span class="hljs-comment">// read</span>
<span class="hljs-keyword">const</span> <span class="hljs-built_in">document</span> = <span class="hljs-keyword">await</span> io.read(<span class="hljs-string">'path/to/model.glb'</span>);

<span class="hljs-comment">// convert</span>
<span class="hljs-keyword">await</span> <span class="hljs-built_in">document</span>.transform(metalRough());

<span class="hljs-comment">// write</span>
<span class="hljs-keyword">const</span> glb = io.writeBinary(<span class="hljs-built_in">document</span>); <span class="hljs-comment">// → Uint8Array</span></code></pre>

<h3 id="cleaning-up">Cleaning up</h3>
<p>In some situations you may find that file size increases as a result of the conversion. Larger size isn’t a result of the metal/rough workflow (files originally authored for metal/rough will generally be smaller) but can be a side effect of conservative choices made during material conversion. If that happens, here are some ways to clean things up.</p>
<p>First, try removing the IOR and Specular extensions from the converted file by opening it in <a href="https://gltf.report">gltf.report/</a>, going to the script tab in the sidebar, and running the following script:</p>
<pre><code class="hljs typescript"><span class="hljs-keyword">import</span> { prune } <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/functions'</span>;

<span class="hljs-keyword">const</span> DENYLIST = [<span class="hljs-string">'KHR_materials_ior'</span>, <span class="hljs-string">'KHR_materials_specular'</span>];

<span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> extension of <span class="hljs-built_in">document</span>.getRoot().listExtensionsUsed()) {
    <span class="hljs-keyword">if</span> (DENYLIST.includes(extension.extensionName)) {
        extension.dispose();
    }
}

<span class="hljs-keyword">await</span> <span class="hljs-built_in">document</span>.transform(prune());</code></pre>

<p>If your model still looks correct, then congrats! You didn’t need some of the spec/gloss features anyway, and your model will be smaller as a result. If things don’t look right anymore, then discard your changes. Either way, you can continue to the next step.</p>
<p>Next, we’ll try optimizing textures, using the glTF Transform CLI.</p>
<pre><code class="hljs bash"><span class="hljs-comment"># install</span>
npm install --global @gltf-transform/cli

<span class="hljs-comment"># optimize</span>
gltf-transform mozjpeg tmp.glb tmp.glb
gltf-transform oxipng tmp.glb output.glb</code></pre>

<p>The CLI will attempt to optimize textures using the <a href="https://github.com/mozilla/mozjpeg">mozjpeg</a> and <a href="https://github.com/shssoichiro/oxipng">oxipng</a> utilities, bundled by <a href="https://github.com/googlechromelabs/squoosh/">squoosh</a>. Customizing compression settings (use the <code>--help</code> flag) may allow you to go further. Finally, you may also want to <a href="https://github.com/KhronosGroup/3D-Formats-Guidelines/blob/main/KTXArtistGuide.md">convert textures to the KTX2 format</a>. KTX2 offers major advantages over PNG and JPEG in 3D scenes, but also requires a bit more work to manage size/quality tradeoffs in compression.</p>
<h2 id="conclusion">Conclusion</h2>
<p>For three.js users affected by the r147 release, and anyone else interested in converting your glTF 2.0 files to the metal/rough shading model, I hope this guide has been helpful. If not, please reach out on the issue trackers for any of the projects linked.</p>
<hr>
<p><small><a class="footnote" name="footnote-1"><sup>1</sup></a> Later extensions also provided a shadeless (“unlit”) model, <code>KHR_materials_unlit</code>.</small></p>
<p><small><a class="footnote" name="footnote-2"><sup>2</sup></a> I recommend <a href="https://substance3d.adobe.com/tutorials/courses/the-pbr-guide-part-2">Allegorithmic’s PBR Guide</a> if you’re interested in those details.</small></p>
<p><small><a class="footnote" name="footnote-3"><sup>3</sup></a> Notably, <a href="https://sketchfab.com/">Sketchfab</a>.</small></p>
<p><small><a class="footnote" name="footnote-4"><sup>4</sup></a> Conversion is <em>often</em> lossless even without those extensions, but you’ll need to test whether the results are acceptable for your content without them.</small></p>
]]></description><link>https://www.donmccurdy.com/2022/11/28/converting-gltf-pbr-materials-from-specgloss-to-metalrough/</link><guid isPermaLink="true">https://www.donmccurdy.com/2022/11/28/converting-gltf-pbr-materials-from-specgloss-to-metalrough/</guid><pubDate>Mon, 28 Nov 2022 00:00:00 GMT</pubDate></item><item><title><![CDATA[Compressing LUTs with KTX 2.0]]></title><description><![CDATA[<blockquote>
<p>“Lookup tables (LUTs) are a technique for optimizing the evaluation of functions that are expensive to compute and inexpensive to cache, ... By precomputing the evaluation of a function over a domain of common inputs, expensive runtime operations can be replaced with inexpensive table lookups. ... LUTs are also useful when wanting to separate the calculation of a transform from its application. For example, in color pipelines it is often useful to “bake” a series of color transforms into a single lookup table, which is then suitable for distribution and re-use.”</p>
<p><em>~ Nick Shaw, <a href="https://nick-shaw.github.io/cinematiccolor/luts-and-transforms.html#x37-1570004.4">Cinematic Color 2: LUTs and Transforms</a></em></p>
</blockquote>
<p>2D and 3D renderers use LUTs to transform pixel values very flexibly and very quickly. LUTs achieve technical or stylistic goals like custom tone mapping, color gradients for scientific visualization, day/night transitions, baking results of a simulation, and so on. LUTs can enhance both photorealistic rendering and stylized graphics.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/xkAzeukUzlQ" title="YouTube video player" frameborder="0" allow="autoplay; clipboard-write; encrypted-media; picture-in-picture" allowfullscreen></iframe>

<p>The three minute video above, by <a href="https://www.youtube.com/user/EightyEightGames">EightyEightGames</a>, gives a short and clear introduction to LUTs in games. A quick search for <a href="https://twitter.com/search?q=LUT%2520gamedev">“gamedev LUT” on Twitter</a> should offer some other interesting ideas.</p>
<p><em>What follows is a guide — for web developers, or anyone else — to compressing LUTs for faster loading in realtime applications. With KTX 2.0, we&#39;ll reduce both file size and parsing time by ~99% compared to traditional formats like <code>.cube</code>.</em></p>
<h2 id="whats-in-a-lut">What&#39;s in a LUT</h2>
<p>Simpler operations like contrast or saturation can be applied with small 1D LUTs. For more sophisticated operations, larger 3D LUTs are required. A complete 3D LUT, containing every possible mapping for an 8-bit sRGB [0,1] input domain, would store 256⨉256⨉256 = 16,777,216 values. Fortunately, that&#39;s rarely necessary — we interpolate between values when sampling from the table, and a smaller 3D LUT containing only 65⨉65⨉65 ≈ 275K values is often enough.</p>
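<p>The interpolation idea is easy to sketch in one dimension; the same principle extends to trilinear filtering of a 3D LUT. A minimal illustration, not production code:</p>

```javascript
// Sample a 1D LUT with linear interpolation between neighboring entries.
// Inputs outside the LUT's [0, 1] domain are clamped.
function sampleLUT1D(lut, x) {
  const t = Math.min(Math.max(x, 0), 1) * (lut.length - 1);
  const i = Math.floor(t);
  const j = Math.min(i + 1, lut.length - 1);
  const f = t - i; // fractional position between entries i and j
  return lut[i] * (1 - f) + lut[j] * f;
}

const lut = [0.0, 0.5, 1.0]; // tiny 3-entry LUT covering [0, 1]
console.log(sampleLUT1D(lut, 0.25)); // → 0.25
```

<p>On the GPU, this interpolation is performed in hardware when the LUT is uploaded as a texture with linear filtering enabled.</p>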
<p>Common LUT file formats, like <code>.cube</code> or <code>.spi3d</code>, store these values in plaintext rows:</p>
<pre><code class="hljs">LUT_3D_SIZE <span class="hljs-number">65</span>
<span class="hljs-number">0.000176</span> <span class="hljs-number">0.000176</span> <span class="hljs-number">0.000176</span>
<span class="hljs-number">0.015625</span> <span class="hljs-number">0.000176</span> <span class="hljs-number">0.000176</span>
<span class="hljs-number">0.031250</span> <span class="hljs-number">0.000176</span> <span class="hljs-number">0.000176</span>
... &lt;<span class="hljs-number">274</span>,<span class="hljs-number">622</span> more rows&gt;</code></pre>

<p>This is not an efficient way to store data, and 275K values in plaintext add up fast: each 3D LUT might easily be 5-10 MB. For desktop applications, this might be fine. But on the web, most use cases would benefit from much smaller files.</p>
<p>Lossless compression (gzip, brotli, or zstd) reduces the download size of these files pretty well. But we don&#39;t just need to download the files — we also need to parse them and upload them to the GPU as textures, for fast parallel lookups in a fragment shader. After decompressing a compressed text file, we&#39;re still stuck with 5-10 MB of text to process and convert to a texture, before we can draw any pixels. On a 2021 M1 Pro chip, JavaScript might spend 100–200ms just parsing a 5–10MB LUT in these plaintext formats, but we can do much better.</p>
<h2 id="gpu-optimized-compression-with-ktx-20">GPU-optimized compression with KTX 2.0</h2>
<p>With <a href="https://github.khronos.org/KTX-Specification/">KTX 2.0</a>, a flexible texture container format defined by the Khronos Group in 2021, we can store the GPU texture directly instead of using plain text. We&#39;ll avoid nearly all of that parsing overhead: in binary formats like KTX 2.0, loaders only need to parse a small header to learn what&#39;s in the file. After that the loader can slice views out of the larger payload, without parsing any pixel data.</p>
<p>KTX 2.0 supports <a href="https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VkFormat.html#_c_specification">a lot of GPU texture formats</a>, and a few other “universal” formats. glTF 2.0 models (<code>.gltf</code>, <code>.glb</code>) often use  KTX 2.0 with <a href="https://github.com/BinomialLLC/basis_universal/">Basis Universal</a> texture compression. We aren&#39;t going to use Basis Universal in this case<sup>1</sup> — LUTs work best with other types of compression. We&#39;ll instead use uncompressed texture formats (u8, f16, and f32), with some lossless compression applied to the container (“supercompression”).</p>
<p>To convert a LUT to KTX2 in your own software, you&#39;d do something like this:</p>
<ol>
<li>Parse the original LUT, allocating an array of 8-bit, 16-bit, or 32-bit values</li>
<li>Write the array to a KTX2 container, perhaps using <a href="https://github.com/KhronosGroup/KTX-Software">KTX-Software</a> or <a href="https://github.com/donmccurdy/KTX-Parse">ktx-parse</a></li>
<li>Apply supercompression, perhaps using KTX-Software&#39;s <code>ktxsc</code> command</li>
</ol>
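<p>Step 1 above might look like the following in plain JavaScript. This is a hedged sketch handling only the <code>LUT_3D_SIZE</code> keyword and data rows, not the full <code>.cube</code> grammar:</p>

```javascript
// Parse a minimal .cube 3D LUT into a Float32Array.
function parseCube(text) {
  let size = 0;
  const values = [];
  for (const line of text.split('\n')) {
    const trimmed = line.trim();
    if (trimmed.startsWith('LUT_3D_SIZE')) {
      size = parseInt(trimmed.split(/\s+/)[1], 10);
    } else if (/^[\d.+-]/.test(trimmed)) {
      // Data row: three whitespace-separated numbers.
      for (const token of trimmed.split(/\s+/)) values.push(parseFloat(token));
    }
  }
  return { size, data: new Float32Array(values) };
}

const lut = parseCube(
  'LUT_3D_SIZE 2\n0 0 0\n1 0 0\n0 1 0\n1 1 0\n0 0 1\n1 0 1\n0 1 1\n1 1 1\n'
);
console.log(lut.size, lut.data.length); // → 2 24
```

<p>The resulting typed array can then be written into a KTX2 container and supercompressed, as in steps 2 and 3.</p>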
<p>Using JavaScript and <a href="https://threejs.org/">three.js</a>, we could also define a LUT programmatically:</p>
<pre><code class="hljs javascript"><span class="hljs-keyword">const</span> customLUT1D = <span class="hljs-keyword">new</span> THREE.DataTexture(
    <span class="hljs-keyword">new</span> <span class="hljs-built_in">Float32Array</span>([<span class="hljs-number">0</span>, <span class="hljs-number">0.1</span>, ..., <span class="hljs-number">1.0</span>]),
    NUM_VALUES,
    <span class="hljs-number">1</span>,
    THREE.RedFormat,
    THREE.FloatType,
    THREE.UVMapping,
    THREE.ClampToEdgeWrapping,
    THREE.ClampToEdgeWrapping,
    THREE.LinearFilter,
    THREE.LinearFilter
);</code></pre>

<p>Because three.js also includes utilities like <a href="https://github.com/mrdoob/three.js/blob/dev/examples/jsm/loaders/LUTCubeLoader.js">LUTCubeLoader.js</a>, <a href="https://github.com/mrdoob/three.js/blob/dev/examples/jsm/loaders/LUT3dlLoader.js">LUT3dlLoader.js</a>, <a href="https://github.com/mrdoob/three.js/blob/dev/examples/jsm/exporters/KTX2Exporter.js">KTX2Exporter.js</a>, and <a href="https://github.com/mrdoob/three.js/blob/dev/examples/jsm/loaders/KTX2Loader.js">KTX2Loader.js</a>, we can use these to convert LUT-specific formats (<code>.cube</code>, <code>.3dl</code>) to KTX 2.0. To make this easier, I made a small <del>Glitch</del> application for loading .cube LUTs and converting them to KTX2. Try it out here:</p>
<blockquote>
<p>🔗 <a href="https://lut-to-texture.donmccurdy.com/">https://lut-to-texture.donmccurdy.com/</a></p>
</blockquote>
<p>This tool encodes the LUT to a more compact binary representation, but doesn&#39;t yet include supercompression. I haven&#39;t been able to find a Zstandard encoder that works in a web browser — if you know of one, please let me know. In the meantime, add supercompression in a separate step with KTX-Software&#39;s <code>ktxsc</code> utility:</p>
<pre><code class="hljs shell">ktxsc --zcmp 19 lut.ktx2</code></pre>

<h2 id="results">Results</h2>
<p>Testing KTX 2.0 compression with a 3D LUT, the results are dramatic. The KTX 2.0 file is just 1% the size of the original, parses in a fraction of the time, and loses none of the original precision.</p>
<table>
<thead>
<tr>
<th align="left">format</th>
<th align="right">size (MB)</th>
<th align="right">parse (ms)</th>
</tr>
</thead>
<tbody><tr>
<td align="left">.cube</td>
<td align="right">7.414 MB</td>
<td align="right">90 – 100 ms</td>
</tr>
<tr>
<td align="left">.ktx2</td>
<td align="right">94 KB</td>
<td align="right">0.1 – 0.2 ms</td>
</tr>
</tbody></table>
<h2 id="using-the-lut">Using the LUT</h2>
<p>Using a <code>.ktx2</code> LUT in three.js is no different than using <code>.cube</code>. Just replace THREE.LUTCubeLoader with THREE.KTX2Loader. While some KTX 2.0 files must be encoded for particular GPUs using WASM transcoders, these LUTs do not require transcoding. If the LUT is being applied to the full viewport, that&#39;s done in post-processing — either <a href="https://github.com/mrdoob/three.js/blob/master/examples/webgl_postprocessing_3dlut.html">three/postprocessing</a> or <a href="https://github.com/pmndrs/postprocessing">pmndrs/postprocessing</a> will work.</p>
<pre><code class="hljs javascript"><span class="hljs-comment">// three.js post-processing</span>
<span class="hljs-keyword">const</span> lut = <span class="hljs-keyword">await</span> loader.loadAsync( <span class="hljs-string">'./lut.ktx2'</span> );

<span class="hljs-comment">// pmndrs post-processing</span>
<span class="hljs-keyword">import</span> { LookupTexture } <span class="hljs-keyword">from</span> <span class="hljs-string">'postprocessing'</span>;
<span class="hljs-keyword">const</span> lut = LookupTexture.from( <span class="hljs-keyword">await</span> loader.loadAsync( <span class="hljs-string">'./lut.ktx2'</span> ) );</code></pre>

<p>In other engines, you may need to ask about support for 3D textures in KTX2 files. This is a newer and less widely-supported pattern for now. I&#39;d love to hear if you found this article useful, and how you&#39;re using LUTs in your projects. Please reach out on Twitter!</p>
<h2 id="more-technical-details">More technical details</h2>
<p><strong>Input domain:</strong> Every LUT is defined only for a certain range of input values. Often that range is RGB ∈ [0,1], but another input domain might be defined at the top of the file. Inputs outside this range are clamped, so it&#39;s important to get the range right. Custom input domains are <em>not</em> embedded in the KTX 2.0 texture, and would need to be assigned manually before using the KTX 2.0 LUT in an application<sup>2</sup>.</p>
<p><strong>Color spaces:</strong> Like its input domain, every LUT makes assumptions about the color space of input values. Output values may be in the same color space, or the LUT may include conversion to another color space. Unfortunately, color space information is typically not defined in the LUT file. The output color space can be included in a KTX 2.0 file, if you know it.</p>
<p><strong>KTX 2.0 vs. PNG:</strong> We&#39;ve discussed <code>.cube</code> and other LUT-specific formats, but not traditional image formats like PNG. PNG can store 1D LUTs in some cases, and 3D LUTs that are flattened (24⨉24⨉24px → 24⨉576px). PNG compression works well, but it has two major limitations in this context. First, output values are constrained to the [0,1] range in PNG, while KTX 2.0 supports unbounded floating point values. Second, precision of output values is limited to 8 bits by PNG — that&#39;s just enough for a <em>final</em> pixel color, and is likely to cause banding if any additional processing (tone mapping, color space conversion, etc.) is applied after the LUT later in the post-processing stack. While PNG has a 16-bit mode, <a href="https://stackoverflow.com/q/6413744">no web browser currently supports it</a>.</p>
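<p>To illustrate the flattening, here is a sketch of the index math for sampling a 3D LUT stored as a 2D strip, assuming each blue slice is appended horizontally. Layout conventions vary by tool, so treat this as one possible convention, not a standard.</p>

```javascript
// Map 3D LUT indices (r, g, b, each in [0, size - 1]) to a pixel
// position in a flattened 2D strip of (size * size) ⨉ size pixels,
// where each b-slice is appended left to right along the x axis.
function lutIndexToPixel(r, g, b, size) {
  return { x: b * size + r, y: g };
}
```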
<p><strong>Other LUT formats:</strong> If you have LUTs in a format other than those discussed here, tools like <a href="https://opencolorio.readthedocs.io/en/latest/tutorials/baking_luts.html#config-free-baking"><code>ociobakelut</code></a> may be helpful to convert or reformat the LUT. I&#39;m not aware of any LUT-specific formats that are both (1) open, and (2) as efficient as the KTX 2.0 approach described here.</p>
<hr>
<p><sup>1</sup> Basis Universal, by <a href="https://www.binomial.info/">Binomial LLC</a>, is an outstanding technology for compressing textures used in 3D models and materials, reducing GPU memory usage, and generally improving performance in GPU-based applications. <a href="https://github.com/BinomialLLC/basis_universal">Learn more about Basis Universal</a>, if you aren&#39;t using it for material textures yet.</p>
<p><sup>2</sup> KTX 2.0 does support key/value data and an extension-like mechanism, so formal or informal ways of embedding the LUT domain could be added in the future.</p>
]]></description><link>https://www.donmccurdy.com/2022/08/31/compressing-luts-with-ktx2/</link><guid isPermaLink="true">https://www.donmccurdy.com/2022/08/31/compressing-luts-with-ktx2/</guid><pubDate>Wed, 31 Aug 2022 00:00:00 GMT</pubDate></item><item><title><![CDATA[Writing volumetric refraction in glTF 2.0]]></title><description><![CDATA[<table>
<thead>
<tr>
<th><em>Before</em></th>
<th><em>After</em></th>
</tr>
</thead>
<tbody><tr>
<td><img src="/assets/images/2021/0026-preview_before.jpg" alt="Classic &#39;Suzanne&#39; monkey head, inside a textured box. Head is opaque off-white."></td>
<td><img src="/assets/images/2021/0026-preview_after.jpg" alt="Classic &#39;Suzanne&#39; monkey head, inside a textured box. Head shows glass-like transmission and refraction, with blue attenuation of transmitted light."></td>
</tr>
</tbody></table>
<p>With its release in 2017, glTF 2.0 created a baseline metal/rough physically-based rendering (PBR) material workflow — the first of its kind to become widely adopted in web and realtime applications. Since then, Khronos has published <a href="https://github.com/KhronosGroup/glTF/tree/main/extensions">multiple advanced extensions</a> for newer PBR effects like transmission, volumetric refraction, clearcoat, IOR, and more. These extensions push the capabilities of realtime web viewers, in a good way.</p>
<p>Support for reading and writing these extensions in modeling tools generally lags behind the standards a bit. Blender can render refraction with its <a href="https://docs.blender.org/manual/en/latest/render/shader_nodes/shader/principled.html">Principled BSDF material</a>, but can only export transmission — not volumetric refraction — as of this writing.</p>
<p>In the meantime, one workaround is to write a short script adding the latest PBR material extensions to an existing glTF 2.0 asset. The same general approach should work for other upcoming extensions too, not just volumetric refraction. I&#39;ll use <a href="https://gltf.report/">glTF Report</a> as a sandbox for quickly previewing changes, and <a href="https://gltf-transform.donmccurdy.com/">glTF Transform</a> as the scripting API. The script could just as easily be used in a Node.js environment or a custom web application.</p>
<pre><code class="hljs typescript"><span class="hljs-keyword">import</span> {
    MaterialsIOR,
    MaterialsSpecular,
    MaterialsTransmission,
    MaterialsVolume
} <span class="hljs-keyword">from</span> <span class="hljs-string">'@gltf-transform/extensions'</span>;

<span class="hljs-comment">// Register extensions, and define extension material properties.</span>

<span class="hljs-keyword">const</span> transmissionExt = <span class="hljs-built_in">document</span>.createExtension(MaterialsTransmission);
<span class="hljs-keyword">const</span> transmission = transmissionExt.createTransmission()
    .setTransmissionFactor(<span class="hljs-number">1.0</span>);

<span class="hljs-keyword">const</span> volumeExt = <span class="hljs-built_in">document</span>.createExtension(MaterialsVolume);
<span class="hljs-keyword">const</span> volume = volumeExt.createVolume()
    .setThicknessFactor(<span class="hljs-number">1.0</span>)
    .setAttenuationDistance(<span class="hljs-number">1.0</span>)
    .setAttenuationColorHex(<span class="hljs-number">0x4285f4</span>);

<span class="hljs-keyword">const</span> iorExt = <span class="hljs-built_in">document</span>.createExtension(MaterialsIOR);
<span class="hljs-keyword">const</span> ior = iorExt.createIOR()
    .setIOR(<span class="hljs-number">1.75</span>);

<span class="hljs-keyword">const</span> specularExt = <span class="hljs-built_in">document</span>.createExtension(MaterialsSpecular);
<span class="hljs-keyword">const</span> specular = specularExt.createSpecular()
    .setSpecularFactor(<span class="hljs-number">1.0</span>);

<span class="hljs-comment">// Update material.</span>

<span class="hljs-keyword">const</span> material = <span class="hljs-built_in">document</span>.getRoot()
    .listMaterials()
    .find(<span class="hljs-function">(<span class="hljs-params">mat</span>) =&gt;</span> mat.getName() === <span class="hljs-string">'MyMaterial'</span>);

material
    .setAlphaMode(<span class="hljs-string">'OPAQUE'</span>)
    .setMetallicFactor(<span class="hljs-number">0.0</span>)
    .setRoughnessFactor(<span class="hljs-number">0.2</span>)
    .setExtension(<span class="hljs-string">'KHR_materials_transmission'</span>, transmission)
    .setExtension(<span class="hljs-string">'KHR_materials_volume'</span>, volume)
    .setExtension(<span class="hljs-string">'KHR_materials_ior'</span>, ior)
    .setExtension(<span class="hljs-string">'KHR_materials_specular'</span>, specular);</code></pre>

<p>Note that <code>metallicFactor</code> is set to <code>0.0</code>. Fully-metallic materials are never transmissive, and disabling the metallic factor entirely is a quick way to avoid that problem. Materials using a <code>metallicRoughnessTexture</code> to control the metal/rough effect could instead keep <code>metallicFactor</code> at <code>1.0</code> and let the texture do the rest.</p>
<p>Additionally, most realtime renderers today cannot display a transmissive material through another transmissive material. This may change as renderers improve over time, but for now it&#39;s important to keep it in mind and ensure that transmissive objects and opaque objects use separate materials. Transmission is also a fairly expensive effect — costing more GPU time for every pixel affected — so this separation should improve performance.</p>
<p>For testing, I&#39;ve provided a sample <code>.blend</code> and <code>.glb</code> model, exported without any material extensions. Running the script should give the result shown below.</p>
<p>Resources: <a href="/assets/resources/SuzanneInBox.zip">SuzanneInBox.zip</a></p>
<script type="module" async src="/assets/lib/model-viewer.min.js"></script>
<p><model-viewer src="/assets/models/0026-SuzanneInBox.glb"
    alt="Classic 'Suzanne' monkey head, inside a textured box. Head shows glass-like transmission and refraction, with blue attenuation of transmitted light."
    camera-controls
    style="--poster-color: transparent; width: 100%; height: 400px;"></model-viewer></p>
]]></description><link>https://www.donmccurdy.com/2021/11/11/writing-volumetric-refraction-gltf/</link><guid isPermaLink="true">https://www.donmccurdy.com/2021/11/11/writing-volumetric-refraction-gltf/</guid><pubDate>Thu, 11 Nov 2021 12:08:00 GMT</pubDate></item><item><title><![CDATA[Personal updates & leaving Google]]></title><description><![CDATA[<p>Six years after joining Google to work on <a href="https://sunroof.withgoogle.com/">Project Sunroof</a>, and later <a href="https://insights.sustainability.google/">Environmental Insights Explorer</a>, this will be my last week at the company. Enough essays have been written about burnout in the past few months: from smug takes on millennial privilege, to sweeping predictions about the future of work. None of those articles have felt satisfying, and I&#39;m not eager to write my own. So I&#39;ll just say that the past year was challenging, despite working alongside wonderful people.</p>
<p>For now I plan to rest, build a few side projects, and freelance part time. I remain very interested in environmental data and policy, and expect to do more of that in the future. Recent reading includes Saul Griffith&#39;s <a href="https://www.rewiringamerica.org"><em>Rewiring America</em></a> — a short, memorable handbook for the next ten years — and Elizabeth Kolbert&#39;s <a href="https://bookshop.org/books/under-a-white-sky-the-nature-of-the-future-9780593136270/9780593136270"><em>Under a White Sky</em></a>, a thoughtful but sobering look at the reshaping of our planet.</p>
<p>While I won&#39;t be considering full-time employment much for the next few months, I&#39;ll continue to be active in the open source community, and spending more time writing. If you&#39;re interested in helping me make those into sustainable parts of my work, I&#39;m also opening a <a href="https://github.com/sponsors/donmccurdy/">GitHub Sponsors page</a>. To get in touch about freelance work or other questions, please reach out to <a href="mailto:contact@donmccurdy.com">contact@donmccurdy.com</a>.</p>
]]></description><link>https://www.donmccurdy.com/2021/07/06/personal-updates-leaving-google/</link><guid isPermaLink="true">https://www.donmccurdy.com/2021/07/06/personal-updates-leaving-google/</guid><pubDate>Tue, 06 Jul 2021 15:26:00 GMT</pubDate></item><item><title><![CDATA[Scotland bikepacking]]></title><description><![CDATA[<blockquote>
<p>NEW POST, OLD TRIP: The past year hasn&#39;t been one for travel, thanks to COVID-19. But in reminiscing about going places, I&#39;ve finally found the motivation to write about a trip I took two years ago.</p>
</blockquote>
<p>I visited Scotland during the summer of 2019, taking a solo bikepacking trip along parts of <a href="https://www.sustrans.org.uk/find-a-route-on-the-national-cycle-network/route-78-the-caledonia-way/">NCN78</a> (“The Caledonia Way”). The trip began in Glasgow, where I rented a Whyte Friston gravel bike from <a href="https://www.billybilslandcycles.co.uk/pages/hire/">Billy Bilsland Cycles</a> before taking a train to Inverness. Travel by train was quiet, scenic, and relatively easy.</p>
<figure>
  <a href="https://www.strava.com/routes/2803871087139234548" target="_blank">
    <img src="/assets/images/2021/0024-scotland-route.png" alt="Strava route and elevation profile.">
  </a>
  <figcaption>
    <strong>Figure:</strong> Route and elevation profile. Available as a <a href="https://www.strava.com/routes/2803871087139234548">Strava Route</a> or <a href="https://www.strava.com/routes/2803871087139234548/export_gpx">.GPX file</a>.
  </figcaption>
</figure>

<h2 id="arrival--train-to-inverness">Arrival — Train to Inverness</h2>
<p>Leaving Glasgow early and reaching Inverness by mid-afternoon, I checked into an Airbnb and spent some time exploring town. The Women&#39;s World Cup (U.S. vs. Netherlands) was showing that day, so I found a pub, ordered food, and settled in to watch. That evening I visited another pub for live music.</p>
<figure>
  <img src="/assets/images/2021/0024-loadout.jpg" alt="An orange bike with two panniers and other supplies on the back rack, leaning against a fence.">
  <figcaption>
    My rental bike, fully loaded with a bivy and other camping supplies. Given the weather and easy access to hostels, a lighter setup might have been ideal.
  </figcaption>
</figure>

<h2 id="day-1--inverness-to-fort-william">Day 1 — Inverness to Fort William</h2>
<p>Leaving early for the longest day of riding (66mi), I went south from Inverness, first along quiet side roads, then along Loch Ness and B852. Lunch in Fort Augustus, then continued to Fort William. Fort William would have been the place to stop and hike <a href="https://en.wikipedia.org/wiki/Ben_Nevis">Ben Nevis</a>, spending an extra day, but I hadn&#39;t made plans for this and the weather wasn&#39;t cooperative.</p>
<h2 id="day-2--fort-william-to-oban">Day 2 — Fort William to Oban</h2>
<p>The route from Fort William to Oban included ferries and, at 41mi, was much easier than the previous day&#39;s ride. This left plenty of time for exploring Oban, which is worthwhile. My original plan had included some camping along the route. However, rain was coming and hostels were plentiful, so I took the more comfortable option and the hot showers.</p>
<figure>
  <img src="/assets/images/2021/0024-gravel-path.jpg" alt="A secluded gravel path.">
  <figcaption>
    A more secluded section of the trail.
  </figcaption>
</figure>

<h2 id="day-3--oban--benderloch-loop">Day 3 — Oban / Benderloch Loop</h2>
<p>While the official route continues south from Oban, there isn&#39;t an easy train back to Glasgow further along. I only wanted to spend one more day on the trail, so instead of going south I did a 30mi loop out of Oban. This route featured more gravel than the other days, and more challenging navigation, as it wasn&#39;t on a well-established cycling route.</p>
<figure>
  <img src="/assets/images/2021/0024-day-4.jpg" alt="A paved and foggy descent toward a lake.">
  <figcaption>
    A foggier section on the fourth day of the ride.
  </figcaption>
</figure>

<h2 id="return--train-to-glasgow">Return — Train to Glasgow</h2>
<p>Train back to Glasgow, returned the rental bike, and continued on to Edinburgh for a couple days of free time. The Scottish National Gallery of Modern Art and live music at Sandy Bells were two highlights in Edinburgh.</p>
<hr>
<h2 id="conclusion">Conclusion</h2>
<p>The NCN78 is a well-planned route, quiet and unhurried. It was also my first try at solo bikepacking, so I was grateful for never being too far from a warm bed and hot meal along the way. A few practical notes:</p>
<ul>
<li>July provided plenty of daylight — sunset was around 9:30pm — and allowed more time to explore. It also brought unpredictable weather, and I rode through some amount of rain and fog on all days of the trip. I didn&#39;t mind the rain, but the fog was a bit much on the last day of the ride.</li>
<li>If you&#39;re planning to take your bike on the train, reserve that in advance. There were limited spaces for bikes, although the bicycle compartment didn&#39;t appear to fill up on my trip.</li>
<li>The route was probably 70% paved, 25% firm dirt/gravel, and one short section, just a few miles, of chunky gravel. On a gravel bike with ~45mm tires this was all fine, although I did get a flat on the chunkier section. A road bike with 35-40mm tires would have been fine on most of it, but that loose gravel wouldn&#39;t have been much fun.</li>
</ul>
<figure class="width-large">
  <img src="/assets/images/2021/0024-trail.jpg" alt="Small, single-track dirt trail winding through a hilly landscape.">
  <figcaption>
    A walking trail just off the first day's ride.
  </figcaption>
</figure>
]]></description><link>https://www.donmccurdy.com/2021/03/07/scotland-bikepacking/</link><guid isPermaLink="true">https://www.donmccurdy.com/2021/03/07/scotland-bikepacking/</guid><pubDate>Sun, 07 Mar 2021 22:10:00 GMT</pubDate></item><item><title><![CDATA[Projects up for adoption: A-Frame Extras and Physics System]]></title><description><![CDATA[<p>I&#39;ve spent some time unsure what to do with a couple of my older projects, <a href="https://github.com/donmccurdy-up-for-adoption/aframe-extras">A-Frame Extras</a> and <a href="https://github.com/donmccurdy-up-for-adoption/aframe-physics-system">A-Frame Physics System</a>, and finally decided it&#39;s time to step back from developing both. They&#39;re each modestly popular projects, and I love seeing what people are building with them, but I haven&#39;t been dedicating the time that either project requires — even for maintenance — recently.</p>
<p>Both repositories are up for adoption<sup><a href="#footnote-1">1</a></sup> under the <a href="https://github.com/donmccurdy-up-for-adoption">donmccurdy-up-for-adoption</a> account. I prefer this to archiving them as I&#39;ve done with more outdated projects: there are still plenty of active users here, and I&#39;d be glad to see them continue. If you&#39;re interested in maintaining either project yourself or with others, please reach out with a GitHub issue.  Until (and unless) someone else takes over, both will be unmaintained indefinitely.</p>
<p>I have a few personal and practical reasons for this decision:</p>
<ol>
<li><strong>I don&#39;t have a Facebook account</strong>, and I don&#39;t plan on creating one. With recent announcements, I have limited time before <a href="https://www.oculus.com/blog/a-single-way-to-log-into-oculus-and-unlock-social-features/">Facebook disables my Oculus Quest</a> and I lose the ability to develop and test with my own VR hardware.</li>
<li><strong>Bug fixes for WebXR projects are unusually time-consuming.</strong> In a traditional software library, a public API can be unit tested, providing some confidence that a new change works correctly. With WebXR interaction and physics simulation, I haven&#39;t found a real alternative to manually testing on lots of devices. Perhaps there&#39;s a better way to automate testing for this type of project, but that&#39;s not something I&#39;m personally excited to spend time inventing.</li>
<li><strong>Limited time / other interests.</strong> The free time I&#39;m willing to spend on OSS is finite — it isn&#39;t a day job and probably won&#39;t become one. I don&#39;t want to give the impression that these projects are actively supported when they&#39;re not, while leaving issues and PRs unanswered.</li>
</ol>
<p>I will continue working on other things, building out <a href="https://github.com/donmccurdy/glTF-Transform">glTF-Transform</a> and contributing to <a href="https://github.com/mrdoob/three.js">three.js</a> and <a href="https://github.com/KhronosGroup/glTF">glTF</a>. I&#39;m also taking online classes in probability and statistics, and generally trying to do the best I can with an unusually challenging year.</p>
<hr>
<small>

<p><a class="footnote" name="footnote-1"><sup>1</sup></a> Credit to Tom MacWright for <a href="https://macwright.com/2016/01/30/adopt.html">suggesting the approach</a>, which feels more direct than an &quot;unmaintained&quot; notice, and less mothball-like and final than archiving.</p>
</small>
]]></description><link>https://www.donmccurdy.com/2020/09/06/projects-up-for-adoption/</link><guid isPermaLink="true">https://www.donmccurdy.com/2020/09/06/projects-up-for-adoption/</guid><pubDate>Sun, 06 Sep 2020 11:35:00 GMT</pubDate></item><item><title><![CDATA[Color management in three.js]]></title><description><![CDATA[<p><img src="/assets/images/2020/06/gamma.png" alt="Illustration of bad gamma."></p>
<p><em>A particularly clear example of gamma correct (left) and incorrect (right) rendering. <a href="https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/">Source</a>.</em></p>
<blockquote>
<p><em><strong><mark>Updated November 9, 2022:</mark></strong> With three.js ≥r139, I contributed a more comprehensive <a href="https://threejs.org/docs/#manual/en/introduction/Color-management">guide to color management</a> in the official three.js documentation. I&#39;ve maintained and updated this post as a short tl;dr, but I recommend reading the full guide for a stronger understanding.</em></p>
</blockquote>
<h2 id="definitions">Definitions</h2>
<ul>
<li><strong>Linear-sRGB / THREE.LinearEncoding</strong>: Linear transfer functions, Rec. 709 primaries, D65 white point.</li>
<li><strong>sRGB / THREE.sRGBEncoding</strong>: sRGB transfer functions, Rec. 709 primaries, D65 white point.</li>
</ul>
<h2 id="best-practices">Best practices</h2>
<ol>
<li><strong>Textures</strong> with color data (<code>.map</code>, <code>.emissiveMap</code>, &hellip;) should be configured with <code>.encoding = sRGBEncoding</code>. Non-color textures use <code>LinearEncoding</code>. Exceptions exist for some formats (like OpenEXR), which typically use <code>LinearEncoding</code> for color data.</li>
<li><strong>Vertex colors</strong> should be stored in Linear-sRGB.</li>
<li><strong>Materials and lights</strong> require RGB components in Linear-sRGB. Hexadecimal and CSS colors are generally sRGB, and must be <a href="https://threejs.org/docs/?q=color#api/en/math/Color.convertSRGBToLinear">converted</a>.<sup><a href="#footnote-1">1</a></sup></li>
<li><strong>Renderer</strong> should have <code>.outputEncoding = sRGBEncoding</code> when not using post-processing. With three.js default post-processing, use <code>LinearEncoding</code> and apply a GammaCorrectionShader as the last pass in post.<sup><a href="#footnote-2">2</a></sup></li>
</ol>
<p><a href="https://threejs.org/docs/#examples/en/loaders/GLTFLoader">THREE.GLTFLoader</a> provides (1), (2), and (3) out of the box. Other file formats vary.</p>
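<p>The conversion in (3), from sRGB colors to Linear-sRGB components, applies the standard sRGB decoding function to each channel. A minimal plain-JavaScript sketch of that per-channel math (the same conversion the three.js helper performs):</p>

```javascript
// Decode one sRGB channel value in [0, 1] to Linear-sRGB, using the
// standard piecewise sRGB transfer function.
function sRGBToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
```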
<small>

<p><a class="footnote" name="footnote-1"><sup>1</sup></a> With three.js r139, <code>THREE.ColorManagement.legacyMode = false</code> addresses issue (3), by automatically converting hexadecimal (<code>0xF0F0F0</code>) and CSS-like (<code>&quot;crimson&quot;</code>) colors from sRGB to Linear-sRGB. This option is enabled by default in React Three Fiber.</p>
<p><a class="footnote" name="footnote-2"><sup>2</sup></a> With React Three Fiber&#39;s post-processing, or the <a href="https://github.com/pmndrs/postprocessing/"><code>postprocessing</code></a> package for vanilla three.js, use <code>.outputEncoding = sRGBEncoding</code>. An explicit GammaCorrectionShader pass is not required.</p>
</small>

<hr>
<h2 id="motivation">Motivation</h2>
<p>The goal of all this is a &quot;linear workflow&quot; — lighting calculations are done on linear values. When values are instead in gamma space (a &quot;gamma workflow&quot;), surfaces respond inconsistently to lighting, appearing too bright in some places and too dark in others, or somewhat washed out. Likewise, light and color values brought in from other PBR applications will not match as expected. You can tune your lights carefully and eventually get consistent results with a gamma workflow too, but the lighting will be more fragile to future changes.</p>
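<p>A quick numeric illustration, assuming the standard sRGB transfer function: blending 50% black and 50% white directly in sRGB gives a noticeably darker result than blending in linear light and then re-encoding.</p>

```javascript
// Encode one Linear-sRGB channel value in [0, 1] back to sRGB.
const linearToSRGB = (c) =>
  c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;

// Average pure black (0.0) and pure white (1.0) two ways:
const correct = linearToSRGB((0.0 + 1.0) / 2); // blend linear light, then encode
const incorrect = (0.0 + 1.0) / 2;             // blend sRGB values directly — too dark
```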
<p>John Novak describes the problem well:</p>
<blockquote>
<p>[With gamma workflows, or incorrectly-configured linear workflows] the material and lighting parameters we would need to choose would be completely devoid of any physical meaning whatsoever; they’ll be just a random set of numbers that happen to produce an OK looking image for <em>that particular scene</em>, and thus not transferable to other scenes or lighting conditions. It’s a lot of wasted energy to work like that. ... It’s also important to point out that incorrect gamma handling in 3D rendering is one of the main culprits behind the “fake plasticky CGI look” in some (mostly older) games.</p>
</blockquote>
<p><strong>References</strong></p>
<ul>
<li><a href="https://docs.unity3d.com/Manual/LinearRendering-LinearOrGammaWorkflow.html">Unity • Linear or gamma workflow</a></li>
<li><a href="https://blog.johnnovak.net/2016/09/21/what-every-coder-should-know-about-gamma/">What every coder should know about gamma</a></li>
</ul>
]]></description><link>https://www.donmccurdy.com/2020/06/17/color-management-in-threejs/</link><guid isPermaLink="true">https://www.donmccurdy.com/2020/06/17/color-management-in-threejs/</guid><pubDate>Wed, 17 Jun 2020 09:23:00 GMT</pubDate></item><item><title><![CDATA[Unpaid work (visualization)]]></title><description><![CDATA[<p><a href="https://data.world/makeovermonday/2020w14">Week 14, 2020</a> of <a href="https://data.world/makeovermonday">#MakeoverMonday</a>. I intended for this to be a bit of practice with different styles of visualization; it ended up being mostly a deep-dive into <a href="https://gmousse.gitbooks.io/dataframe-js/#dataframe-js">dataframe.js</a>, which I hadn&#39;t used before. Most time was spent data cleaning, discovering and resolving a few issues mentioned below. Got a bit more comfortable with dataframe.js by the end of the visualization, but probably should have just formatted the data in another tool before bringing it into JavaScript, in retrospect.</p>
<ul>
<li>Original visualization: <a href="https://unstats.un.org/unsd/gender/timeuse/index.html">Unpaid work: Allocation of time and time-use</a></li>
<li>Data source: <a href="https://unstats.un.org/unsd/gender/timeuse/index.html">UN Stats</a></li>
</ul>
<div id="summary" class="async-content"></div>

<figure class="width-large">
    <center><h3>Share of paid and unpaid work, by gender</h3></center>
    <div id="main" class="async-content"></div>
</figure>

<script type="module">

import notebook from '/assets/resources/notebooks/unpaid-work/index.js';
import {Runtime, Library, Inspector} from '/assets/resources/notebooks/unpaid-work/runtime.js';

new Runtime().module(notebook, name => {
  if (['summary', 'main'].includes(name)) {
    const el = document.querySelector('#' + name);
    el.classList.remove('async-content');
    return new Inspector(el);
  }
});

</script>

<p>Source code in Observable: <a href="https://observablehq.com/@donmccurdy/unpaid-work">observablehq.com/@donmccurdy/unpaid-work</a></p>
]]></description><link>https://www.donmccurdy.com/2020/04/06/vis-makeover-monday-unpaid-work/</link><guid isPermaLink="true">https://www.donmccurdy.com/2020/04/06/vis-makeover-monday-unpaid-work/</guid><pubDate>Mon, 06 Apr 2020 10:41:00 GMT</pubDate></item><item><title><![CDATA[Hard drive backups]]></title><description><![CDATA[<p>I recently changed how I&#39;m storing personal files and backing up my computer. As background, I use an older 2012 Apple laptop with a 500GB SSD, and that hard drive has been ~90% full for a while. I&#39;ve been backing it up to an external HDD in my apartment, which is insurance against a hard drive failure but not against a fire, in which case both physical devices are gone. While Apple Time Machine can provide automatic backups, I do them manually – the HDD is loud, my apartment is small, and the noise is enough to keep me awake.</p>
<p>With the new backup plan I had three goals:</p>
<ol>
<li>Backups should be automatic and silent.</li>
<li>Backups should be replicated in the cloud.</li>
<li>Larger files (RAW photos, music) should move off my laptop to free up space, but should still be backed up.</li>
</ol>
<p>I bought an internal SSD and separate enclosure. This works out a little cheaper than buying an external SSD, and it&#39;s dead simple to assemble the two. If you don&#39;t care about noise, a 2TB 3.5&quot; HDD (currently $65) would be much cheaper, or you could get a larger drive.</p>
<ul>
<li><a href="https://www.newegg.com/western-digital-blue-2tb/p/N82E16820250089">Western Digital Blue SSD (2TB)</a> – $220</li>
<li><a href="https://www.newegg.com/orico-2588us3-bk-enclosure/p/0VN-0003-000H1">ORICO 2.5&quot; Enclosure</a> – $15</li>
</ul>
<p>I partitioned the drive (0.8TB for Time Machine, 1.2TB for media). Including my laptop&#39;s drive, that&#39;s three volumes:</p>
<ul>
<li>Primary – laptop SSD (0.5TB)</li>
<li>Time Machine – external SSD (0.8TB)</li>
<li>Media – external SSD (1.2TB)</li>
</ul>
<p>For cloud backups, I&#39;m using <a href="https://www.backblaze.com/">Backblaze</a> and paying $6/month for unlimited storage. The primary drive is backed up to the external SSD and Backblaze. Media on the external SSD is backed up only to Backblaze. Both happen automatically. The idea of self-hosting backups on Google Cloud Storage or S3 was tempting, but no.</p>
<p>My laptop has plenty of free space again, and it&#39;s nice having my Lightroom photo library on a separate drive with enough storage for future needs. And of course, backups are no longer dependent on my physical apartment.</p>
<hr>
<small>
A few alternatives:

<ul>
<li>Use an HDD instead of an SSD. Cheaper and more storage, but noisy and physically larger.</li>
<li>Back up exclusively to Backblaze or another online provider, and skip the external drive.</li>
<li>Back up to a rotating pair of external drives, keep one drive in a safe deposit box, and skip the online provider. This option has no monthly cost, but requires good habits about backing up and rotating the drives.</li>
<li>Adobe Lightroom offers 1TB photo storage with its $10/month plan, which isn&#39;t that much more than $6/month for Backblaze. If I cared more about online access to my photos, or were already paying for an Adobe subscription, that option might be tempting.</li>
</ul>
</small>
]]></description><link>https://www.donmccurdy.com/2019/05/18/hard-drive-backups/</link><guid isPermaLink="true">https://www.donmccurdy.com/2019/05/18/hard-drive-backups/</guid><pubDate>Sat, 18 May 2019 11:32:00 GMT</pubDate></item><item><title><![CDATA[Three.js NodeMaterial introduction]]></title><description><![CDATA[<p>Node-based materials have been an experimental part of the <a href="https://threejs.org/">three.js library</a> for a few years now under <a href="https://github.com/mrdoob/three.js/tree/dev/examples/js/nodes"><em>three/examples/js/nodes/</em></a>, thanks to the efforts of <a href="https://github.com/sunag">Sunag</a>. There are <a href="https://github.com/sunag">great examples</a>, but <em>NodeMaterial</em> is still a work in progress and not a drop-in replacement for the default materials yet. Nevertheless, <em>NodeMaterial</em> can already be used to achieve some nice effects, without writing custom materials from scratch.</p>
<p>A classic material, like <a href="https://threejs.org/docs/#api/en/materials/MeshStandardMaterial"><em>THREE.MeshStandardMaterial</em></a>, has a fixed set of inputs (<code>color</code>, <code>opacity</code>, <code>metalness</code>, <code>roughness</code>, ...), each accepting a simple scalar value. Node-based materials have mostly the same inputs, but <em>each input can accept a complex expression</em>. This gives the opportunity to adjust — or even radically alter — how the property behaves. Here&#39;s a simple example:</p>
<figure>
<div id="view" class="async-content"></div>
<figcaption>
  A sphere, with geometry displaced over time in the vertex shader. <a href="https://observablehq.com/@donmccurdy/three-nodematerial-example">Source code</a> in an Observable notebook.
</figcaption>
</figure>

<p>The GLSL for this effect is concise. Use the vertex <code>y</code> position and the current time as inputs to a <code>sin()</code> wave, then map that to a scale varying from <code>0.8</code> to <code>1.2</code>, displacing the vertex by +/-20%.</p>
<pre><code class="hljs glsl">vPosition *= <span class="hljs-built_in">sin</span>( vPosition.y * time ) * <span class="hljs-number">0.2</span> + <span class="hljs-number">1.0</span>;</code></pre>

<p>Custom effects like this will require us to modify the <em>MeshStandardMaterial</em> shader somehow. With <em>NodeMaterial</em>, we can do that in a declarative way, allowing three.js to build the shader for us:</p>
<pre><code class="hljs js"><span class="hljs-keyword">const</span> material = <span class="hljs-keyword">new</span> THREE.StandardNodeMaterial();

<span class="hljs-comment">// Basic material properties.</span>
material.color = <span class="hljs-keyword">new</span> THREE.ColorNode( <span class="hljs-number">0xffffff</span> * <span class="hljs-built_in">Math</span>.random() );
material.metalness = <span class="hljs-keyword">new</span> THREE.FloatNode( <span class="hljs-number">0.0</span> );
material.roughness = <span class="hljs-keyword">new</span> THREE.FloatNode( <span class="hljs-number">1.0</span> );  

<span class="hljs-keyword">const</span> { MUL, ADD } = THREE.OperatorNode;
<span class="hljs-keyword">const</span> time = <span class="hljs-keyword">new</span> THREE.TimerNode();
<span class="hljs-keyword">const</span> localPosition = <span class="hljs-keyword">new</span> THREE.PositionNode();
<span class="hljs-keyword">const</span> localY = <span class="hljs-keyword">new</span> THREE.SwitchNode( localPosition, <span class="hljs-string">'y'</span> );

<span class="hljs-comment">// Modulate vertex position based on time and height.</span>
<span class="hljs-comment">// GLSL: vPosition *= sin( vPosition.y * time ) * 0.2 + 1.0;</span>
<span class="hljs-keyword">let</span> offset = <span class="hljs-keyword">new</span> THREE.Math1Node(
  <span class="hljs-keyword">new</span> THREE.OperatorNode( localY, time, MUL ),
  THREE.Math1Node.SIN
);
offset = <span class="hljs-keyword">new</span> THREE.OperatorNode( offset, <span class="hljs-keyword">new</span> THREE.FloatNode( <span class="hljs-number">0.2</span> ), MUL );
offset = <span class="hljs-keyword">new</span> THREE.OperatorNode( offset, <span class="hljs-keyword">new</span> THREE.FloatNode( <span class="hljs-number">1.0</span> ), ADD );

material.position = <span class="hljs-keyword">new</span> THREE.OperatorNode( localPosition, offset, MUL );</code></pre>

<p>This is more code than the plain GLSL above, but consider what we <em>didn&#39;t</em> have to do:</p>
<ul>
<li>Decide where in the shader to inject the new uniform inputs.</li>
<li>Decide where in the shader to inject the animation.</li>
<li>Deal with broken code in future releases of three.js, because some minor change in the source shader can break a regular expression used to inject our code.</li>
</ul>
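<p>The manual alternative, patching the built-in shader with <code>onBeforeCompile</code> and string replacement, shows why that last point matters. Below is a plain-JS sketch: <code>onBeforeCompile</code> and the <code>#include &lt;begin_vertex&gt;</code> chunk marker are real three.js conventions, but the <code>patchVertexShader</code> helper is hypothetical.</p>

```javascript
// A sketch of the hand-patching approach: string surgery on the
// vertex shader that three.js generates for MeshStandardMaterial.
function patchVertexShader(vertexShader) {
  return vertexShader
    // Declare the new uniform just before main().
    .replace('void main() {', 'uniform float time;\nvoid main() {')
    // Inject the displacement after 'transformed' is defined.
    .replace(
      '#include <begin_vertex>',
      '#include <begin_vertex>\n' +
        'transformed *= sin( transformed.y * time ) * 0.2 + 1.0;'
    );
}

// With three.js, this would be wired up roughly as:
//   material.onBeforeCompile = (shader) => {
//     shader.uniforms.time = { value: 0 };
//     shader.vertexShader = patchVertexShader(shader.vertexShader);
//   };
// If a later release renames the chunk marker, .replace() silently
// matches nothing and the effect disappears.
```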
<p>Node-based materials have gained popularity in tools like <a href="http://acegikmo.com/shaderforge/">Shader Forge</a>, <a href="https://blogs.unity3d.com/2018/02/27/introduction-to-shader-graph-build-your-shaders-with-a-visual-editor/">Unity</a>, <a href="https://docs.unrealengine.com/en-us/Engine/Rendering/Materials/Editor">Unreal</a>, <a href="https://www.sidefx.com/">Houdini</a>, and <a href="https://docs.blender.org/manual/en/latest/render/blender_render/materials/nodes/introduction.html">Blender</a>, for their expressive flexibility. While those tools provide a user interface for constructing the shader graph, no such UI exists for <em>THREE.NodeMaterial</em> just yet. As an experiment I&#39;ve explored parsing a node graph created in another tool, <a href="https://shade.to/">Shade for iOS</a>, and converting that to equivalent three.js nodes:</p>
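<p>The conversion itself follows a simple recursive pattern: walk the serialized graph, look up a constructor for each node type, and build children before parents. The JSON shape and registry below are hypothetical (Shade&#39;s actual export format differs), but they illustrate the idea.</p>

```javascript
// A minimal graph-to-node converter. Leaf nodes carry a 'value';
// operator nodes carry 'inputs', which are built recursively first.
function buildNode(graph, registry) {
  const { type, inputs = [], value } = graph;
  const NodeCtor = registry[type];
  if (!NodeCtor) throw new Error(`Unknown node type: ${type}`);
  const args = inputs.map((input) => buildNode(input, registry));
  return value !== undefined ? new NodeCtor(value) : new NodeCtor(...args);
}

// With three.js loaded, the registry would map type names to node
// classes, e.g. { float: THREE.FloatNode, ... }.
```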
<figure>
  <video style="width: 100%;" autoplay muted loop>
    <source src="/assets/images/2019/03/grass-wind.webm" type="video/webm">
    <source src="/assets/images/2019/03/grass-wind.mov" type="video/mp4">
  </video>
  <figcaption style="max-width: 550px; margin: 0 auto;">
    A node-based shader, from the <a href="https://shade.to/">Shade for iOS</a> examples, using an instanced glTF grass mesh with a procedural wind animation. <a href="https://three-shadenodeloader.donmccurdy.com/">Live demo</a>.
  </figcaption>
</figure>

<p>Node-based materials are declarative, optimizable, and <a href="https://en.wikipedia.org/wiki/Composability">composable</a>. They are relatively easy to reuse and share. For example, a developer could write a series of <em>new</em> nodes (composed of the core nodes) for complex behaviors. Published on NPM, those nodes would be accessible to all three.js users:</p>
<pre><code class="hljs js"><span class="hljs-keyword">import</span> { StandardNodeMaterial } <span class="hljs-keyword">from</span> <span class="hljs-string">'three/examples/js/nodes/'</span>;
<span class="hljs-keyword">import</span> { GrassWindNode } <span class="hljs-keyword">from</span> <span class="hljs-string">'@donmccurdy/three-grass-wind'</span>; <span class="hljs-comment">// not on NPM.</span>

<span class="hljs-keyword">const</span> material = <span class="hljs-keyword">new</span> StandardNodeMaterial();
material.position = <span class="hljs-keyword">new</span> GrassWindNode({ <span class="hljs-attr">windSpeed</span>: <span class="hljs-number">5.0</span> });</code></pre>

<p>In time I&#39;m hopeful that node-based materials will encourage more creative and reusable materials in the three.js community, and enable third-party libraries like <a href="https://github.com/zadvorsky/three.bas">THREE.BAS</a> to integrate more easily with the three.js core library.</p>
<script type="module">
  import { Runtime, Inspector, createLibrary } from '/notebook-runtime.js';
  import notebook from 'https://api.observablehq.com/@donmccurdy/three-nodematerial-example.js';

  const el = document.querySelector('#view');
  const library = createLibrary(el);

  Runtime.load(notebook, library, (cell) => {
    if (cell.name === 'view') {
      el.classList.remove('async-content');
      return new Inspector(el);
    }
  });
</script>
]]></description><link>https://www.donmccurdy.com/2019/03/17/three-nodematerial-introduction/</link><guid isPermaLink="true">https://www.donmccurdy.com/2019/03/17/three-nodematerial-introduction/</guid><pubDate>Sun, 17 Mar 2019 17:40:00 GMT</pubDate></item></channel></rss>