In a past life, I wrote an article called HTML is a Serialized Object Graph and That Changes Everything which made the case why understanding the declarative nature of HTML as an object graph which sits alongside JavaScript’s imperative DOM API is so crucial to becoming a well-rounded web developer who prioritizes performance, accessibility, efficiency, and future-proofing.
I want to continue unpacking some of those concepts, namely efficiency, because every so often I come across a troubling perspective: that Object-Oriented Programming (OOP) is somehow “bad” and therefore the DOM is bad because it’s based on OOP.
Let’s be very clear here, and I consider this statement to be wholly undeniable:
The Web Platform Would Not Exist Without OOP.
It’s not just that web APIs often take advantage of OOP. They would not exist without OOP! Virtually every web API you have ever used is designed specifically within the OOP paradigm. And every entity in the lifecycle of a web page is based on objects. These objects are instances of classes and in a great many cases these classes are subclasses of other classes. This means that key functionality is spread across a class hierarchy. That isn’t a bug, it’s a feature! And this means that it’s absurd to think that to write a web app you need to reach for a “framework”. The Web Platform is a framework.
Here’s a list of many of the Web Platform classes you have used (or the libraries you import use):
Window – this is available as a “global” window instance which is a window (see what I did there?) into many other web APIs. In addition, when you directly call screen, location, document, and some other global objects, you are actually accessing them via window. This is also true of a great many “functions”…for example, did you know that setTimeout is actually a method of the window object? (MDN Docs)
Screen – this is also a global instance which provides you with a number of properties describing the web browser’s display. (MDN Docs)
Storage – when you access either the localStorage or sessionStorage property of window, you are accessing an instance of Storage. This is why the public API is the same for these two types of storage even though the internals (and thus behavior) are a bit different. (MDN Docs)
Document – now here’s where things get even more interesting. When you access the document property of window, you are accessing the current document that’s being loaded and displayed (specifically an HTMLDocument instance). However, you can create new documents! It’s actually possible just to write const newDoc = new Document() and boom! You have a new instance of the Document class, with all of the properties & methods thereof. (Typically you’ll create a document using a parser, such as the static class method Document.parseHTMLUnsafe which returns a new HTMLDocument instance.) The Document class is itself a subclass of Node. Every part of the DOM ultimately is a subclass (of a subclass, of a subclass) of Node, such as Element, Comment, Attr, and Text. But Node itself is actually a subclass of EventTarget, which is the ultimate abstract superclass. Even Window is a subclass of EventTarget. This is how (and why) the entire events system works in the browser! (MDN Docs)
Request/Response – when you access the fetch method of window, you probably provide a URL and perhaps some additional settings like the HTTP method, headers, a body payload, etc. But that’s simply shorthand for creating a new instance of the Request class. And you can actually do that! fetch(params) is the same as const request = new Request(params); fetch(request). Which is pretty handy because that means you can pass Request objects around, save them, modify them, and do whatever you want with them before passing them to fetch. The return value of fetch is, well you guessed it! An instance of the Response class. When you call .json() or .text() after getting the response, those are methods on Response. (MDN Docs)
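To make that concrete, here’s a small sketch (the URL is a placeholder) showing how you can build up a Request object explicitly before ever handing it to fetch:

```javascript
// Build the Request object up front instead of passing bare params to fetch
const request = new Request("https://api.example.com/users", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
})

// The Request instance carries its configuration as real properties
console.log(request.method) // "POST"
console.log(request.url) // "https://api.example.com/users"

// …and it can be passed around, saved, or modified before being sent:
// const response = await fetch(request)
// const data = await response.json()
```
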
If you’re ever wondering what the superclass is (if any) for a particular class, you can write Object.getPrototypeOf(ClassNameHere) in the console. All of the native HTML elements types, such as HTMLDialogElement, will be subclasses of HTMLElement. This is why when you write your own custom element, you write class MyWebComponent extends HTMLElement. You are subclassing the same superclass as all the other elements in HTML!
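In a browser console, Object.getPrototypeOf(HTMLDialogElement) returns HTMLElement. The same mechanics apply to any class hierarchy, so here’s a tiny stand-in sketch you can run anywhere (the class names are invented placeholders for the real DOM classes):

```javascript
// Stand-ins for EventTarget → Node → Element
class EventTargetish {}
class Nodeish extends EventTargetish {}
class Elementish extends Nodeish {}

console.log(Object.getPrototypeOf(Elementish) === Nodeish) // true
console.log(Object.getPrototypeOf(Nodeish) === EventTargetish) // true

// Walk the whole chain, just like you can with HTMLDialogElement → … → EventTarget
let klass = Elementish
while (klass.name) {
  console.log(klass.name) // logs "Elementish", "Nodeish", "EventTargetish"
  klass = Object.getPrototypeOf(klass)
}
```
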
You can also monkey patch classes and objects. This is quite common in many OOP paradigms, but generally frowned upon in the context of web/browser APIs because of the fear of future breakage. Nevertheless, if you know what you’re doing and choose a suitable naming convention, I say go for it! For example, there’s an example on MDN of how to get a blob URL from a fetched image resource which you can then pass along to an img tag. Let’s monkey patch Response to make this a little easier!
// Monkey patch
Response.prototype.userlandBlobURL = async function () {
  const blob = await this.blob()
  return URL.createObjectURL(blob)
}

// Test it out!
const resp = await fetch("https://placehold.co/600x400")
const blobURL = await resp.userlandBlobURL()
// 'blob:https://example.com/bc10ebbc-d1bc-44fa-a23e-3bf398061502'

const img = document.createElement("img")
img.src = blobURL
document.body.append(img)
Perhaps JavaScript should have some kind of “blessed” naming convention for “custom methods” the way HTML does for custom elements (<you-must-use-dashes>). At any rate, I chose userland as a prefix in this example. YMMV!
To go back to the word I used above, efficiency, the beauty of the web platform is once you learn these class hierarchies and the various relationships between the objects in the framework that is built into the browser, you are well on your way to understanding how to build in a high-quality fashion. As I’ve mentioned many times, the reason you don’t want to spray “div tag soup” all over your users’ downloaded web pages is because every tag is an object instance. Got 5 <div>s? That’s five HTMLDivElement object instances loaded into memory. Got 5000 <div>s? That’s five thousand HTMLDivElement object instances loaded into memory. Thankfully browsers are themselves marvels of efficiency, written in low-level C++, so even then it might not be the end of the world. But don’t be an asshole and assume your tab is the only tab running on a machine.
Imagine a world where every damn browser tab has an app loaded in it with an incredibly over-engineered and overwrought DOM littered with needless element instances in a sprawling in-memory object graph. Sadly we don’t need to imagine it because that world is already here. But that doesn’t mean we need to be OK with it. We can push back and fight for a more considerate world, one where you have much more functionality written into far fewer objects because you understand what these objects are and how they work.
Don’t be caught flat-footed the next time a frontend engineer asks you: “So, what are all the objects loaded into memory as your web application boots up? Describe them to me.” Oh dear, has no one ever asked you that before?!
🚨 Web dev education still has a l-o-o-ong way to go… 🫠🤪
As someone who has worked extensively on codebases using TypeScript, as well as codebases using JavaScript, I am here to inform you: I greatly prefer JavaScript.
It’s not that I don’t like specifying types for my variables and function signatures. I do! In fact, I like it so much I even do it in Ruby. 😲
But you see, what I don’t like is my types being part of “the code”. I believe code should be strictly about the behavior. What it’s called. What it does. The “metadata” surrounding the code—this is a string, that is an integer—makes perfect sense as part of documentation in the form of code comments. Because guess what? You should be documenting your code anyway. Whoever said don’t write lots of comments in code is grossly mistaken.
Yes indeed, it is my sincere belief that you should be documenting your functions, and your value objects, and your classes, and all sorts of other things as much as possible. (Without going overboard…usually a sentence or two is quite sufficient.) Which brings us to…
When it comes to documenting JavaScript, JSDoc is the way to go. Even if you don’t expressly use the tool to generate an API site (I actually have never done that personally!), your JSDoc comments are interpreted by a wide variety of tools and editors. Which brings us to…
Wait, if you prefer JavaScript, why are you talking about TypeScript??
Because TypeScript is how you type check JavaScript even if you’re writing JavaScript with JSDoc and not TypeScript. (Confused yet? 🥴)
Here’s an example. To declare a new string variable in TypeScript proper, you might write something like this:
let str: string
str = "Hello world"
str = 123 // this will cause a type error!
However, you can also get the exact same benefits in pure JavaScript by adding // @ts-check as the first line of a .js file and then specifying types via JSDoc:
// @ts-check

/** @type {string} */
let str
str = "Hello world"
str = 123 // this will cause a type error!
If you’re using an IDE like VSCode or Zed, you’ll likely see the type hints and errors show up automatically, but it also might be a good idea to npm install typescript -D because you may want to run tsc separately to generate TypeScript declaration files or to type check your files in a CI process. (In my package.json I have a script which looks like this: "build:types": "npx tsc".)
You can configure how the type checking works by adding a jsconfig.json file to your project’s root folder. Here’s one I use:
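Something along these lines (a minimal sketch; your exact compiler options may differ):

```json
{
  "compilerOptions": {
    "checkJs": true,
    "target": "esnext",
    "module": "nodenext"
  },
  "include": ["src/**/*"]
}
```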
At the same time, you’ll also probably need to add a tsconfig.json file at some point, like so:
{
  // Change this to match your project
  "include": ["src/**/*"],
  "compilerOptions": {
    // Tells TypeScript to read JS files, as
    // normally they are ignored as source files
    "allowJs": true,
    // Generate d.ts files
    "declaration": true,
    // This compiler run should
    // only output d.ts files
    "emitDeclarationOnly": true,
    // Types should go into this directory.
    // Removing this would place the .d.ts files
    // next to the .js files
    "outDir": "types",
    // go to js file when using IDE functions like
    // "Go to Definition" in VSCode
    "declarationMap": true
  }
}
I know that might all sound like a lot, but I promise you once you get a project going with your editor and your CLI tooling, you can replicate that setup across countless more projects and it becomes second nature.
Rather than continue to just talk about using JSDoc for typing, let’s dive into some examples!
JSDoc in the wild
Here’s a class constructor which accepts a number of arguments.
class ReciprocalProperty {
  /**
   * @param {HTMLElement} element - element to connect
   * @param {string} name - property name
   * @param {(value: any) => any} signalFunction - function to call to create a signal
   * @param {() => any} effectFunction - function to call to establish an effect
   */
  constructor(element, name, signalFunction, effectFunction) {
    this.element = element
    this.name = name
    this.type = this.determineType()
    // etc.
  }
}
As you can see, it’s fine to keep using any at times when you really don’t “care” about the exact nature of the variable in question. And again, the nice thing about putting your type information in as part of the code documentation is…now you have documentation! 🙌
Here’s an example of specifying a variable type that is a JavaScript object (a “record” in TypeScript parlance) with typing for the key/value pairs:
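Something like this sketch (the shape here is illustrative, with a minimal stand-in for the real ReciprocalProperty class):

```javascript
// @ts-check

// Illustrative stand-in for the real class
class ReciprocalProperty {
  /** @param {string} name */
  constructor(name) {
    this.name = name
  }
}

// A plain object typed as a record: string keys, ReciprocalProperty values
/** @type {Record<string, ReciprocalProperty>} */
const reciprocalProperties = {}

reciprocalProperties["count"] = new ReciprocalProperty("count")
reciprocalProperties["label"] = new ReciprocalProperty("label")

console.log(Object.keys(reciprocalProperties)) // ["count", "label"]
```
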
You can also see here we’re calling this.element["_reciprocalProperties"] instead of this.element._reciprocalProperties because TypeScript gets fussy about accessing properties it doesn’t know about. Using [] syntax bypasses those complaints (just make sure you know what you’re doing!).
Here’s an example of specifying a type inline as part of a for…of loop:
for (const stop of /** @type {StreetcarStatementElement[]} */ ([...this.children])) {
  stop.operate()
}
That particular syntax took me a while to figure out! 😅 When in doubt, put your inline /** @type */ declarations in front of a piece of code wrapped in parentheses. That usually does the trick.
Here’s an example of specifying a type inline within a method signature:
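For instance (a hypothetical method, not from any particular library):

```javascript
// @ts-check

class StopList {
  /**
   * Format a stop for display. The parameter's object type is declared
   * inline, right in the JSDoc for the method signature.
   * @param {{ id: number, label?: string }} stop
   * @returns {string}
   */
  describe(stop) {
    return stop.label ?? `Stop #${stop.id}`
  }
}

const list = new StopList()
console.log(list.describe({ id: 3 })) // "Stop #3"
console.log(list.describe({ id: 4, label: "Main St" })) // "Main St"
```
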
You can also use @typedef to define the equivalent of TypeScript’s interface, which is documented here.
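Here’s a brief illustrative sketch of @typedef in action (the type name and fields are invented for the example):

```javascript
// @ts-check

/**
 * The equivalent of a TypeScript interface, defined entirely in JSDoc:
 * @typedef {Object} ToastMessage
 * @property {string} title
 * @property {string} body
 * @property {number} [duration] - optional, in milliseconds
 */

/** @type {ToastMessage} */
const toast = { title: "Success!", body: "I'm a toast from the server!" }

console.log(toast.title) // "Success!"
```
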
And finally: every once in a while, you may just need to do something funky that TypeScript simply isn’t going to like, and that’s OK! As the documentation states:
TypeScript may offer you errors which you disagree with, in those cases you can ignore errors on specific lines by adding // @ts-ignore or // @ts-expect-error on the preceding line.
JavaScript + JSDoc + tsc should be the industry default
I have stated on multiple occasions that I believe it has been a huge mistake for “TypeScript” to become a sort of industry default. I firmly encourage everyone I talk to to start writing real, open-standard .js files which don’t require any build steps or tooling to execute properly, all while utilizing the power combo of JSDoc + tsc to gain all of the benefits of type hints in IDEs and type checking in CI. It really is the best of both worlds, and the number of cases when you must give in and start authoring .ts files proper is vanishingly small.
It’s possible there are certain frameworks you may need to use which essentially “require” you to author your code using TypeScript. In those instances, so be it. But if you have any control over the shape of a project, I hope you’ll consider using good ol’ fashioned JavaScript. After all, it’s ECMAScript—not TypeScript—that is the lingua franca of the web.
Hot take: the killer feature of htmx which transforms it from a nice lil’ opinionated DSL for HTML-based reactivity to a powerhouse full-stack framework with which you can build ambitious, even experimental application architectures is this:
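Roughly, the markup in question looks like this (a sketch; the endpoint URL and IDs are illustrative):

```html
<button hx-post="/clicked" hx-target="#parent-div" hx-swap="innerHTML">
  Click Me!
</button>

<div id="parent-div"></div>
```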
In UX-speak, this translates to: “As a user, when I click the Click Me! button, I would like to see the new content appear in the box” (the “box” being a <div id="parent-div"> element somewhere on the page, and “content” being an HTML fragment returned by the server).
htmx comes with a number of swap mechanisms at your disposal, including ones like beforeend which is equivalent to targetEl.append(anotherEl) and even delete which ignores a response and simply deletes a target element.
Let’s call these built-in swaps. You can certainly accomplish a lot with only the built-in swaps, as evidenced by the growing popularity of htmx. However this leads us to the question at hand: Can I create my own swap?
Yes!
The genius of htmx is that the hx-swap attribute can be anything you want. hx-swap="bop"? Sure. hx-swap="snap-crackle-pop"? Absolutely!
Yet the behavior you’ll end up with out-of-the-box is the same as innerHTML because you haven’t defined what these new swap mechanisms are. That’s where extensions come in.
Let’s write the simplest extension imaginable. Assuming there’s an htmx import or global defined, in your JavaScript entrypoint write:
htmx.defineExtension("bop", {
  isInlineSwap: (swapStyle) => swapStyle === "bop",
  handleSwap: (swapStyle, _target, /** @type {Element} */ fragment) => {
    if (swapStyle === "bop") {
      console.log("Bop!", fragment) // log the server HTML fragment
      return true
    }
  }
})
You’ll also need to add an hx-ext attribute on your site layout’s body:
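For example (assuming bop is the only extension you need site-wide):

```html
<body hx-ext="bop">
```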
Clicking Boop! will go through the normal htmx ritual of calling the server and parsing the incoming HTML into a DOM fragment. However this time, your custom swap will kick in and you’ll get a Bop! console log. 🎉
Let’s talk about boosting for a moment. htmx provides an option where links and forms automatically navigate and submit using the AJAX technique so you can easily use swapping techniques. You can boost via the hx-boost attribute either on a container of your page or on <body> itself.
However, once you do this, your browser URL will change when htmx handles a request/response thus adding to your history. If you don’t want your custom swap to affect browser history and you don’t want to require users to fiddle with htmx’s history attribute, you can do so manually. Let’s add an event handler to our extension:
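A sketch of what that could look like (htmx extensions support an onEvent hook; the specific lifecycle event name below is my assumption, so check htmx’s extension docs for the right moment to intervene):

```javascript
// Helper: opt the triggering element out of history handling, unless the
// author already set hx-push-url explicitly on it
function ensureNoPushUrl(elt) {
  if (!elt.hasAttribute("hx-push-url")) {
    elt.setAttribute("hx-push-url", "false")
  }
}

// Wiring it into the extension from before (event name is an assumption):
// htmx.defineExtension("bop", {
//   onEvent: (name, evt) => {
//     if (name === "htmx:beforeRequest") ensureNoPushUrl(evt.detail.elt)
//     return true
//   },
//   // …isInlineSwap / handleSwap as shown earlier…
// })
```
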
This sets the hx-push-url manually if it’s not already present on the element triggering the swap, thus indicating to htmx it shouldn’t mess with browser history.
And…that’s all folks! If you’ve ever looked at a library like htmx and wondered but what if I want to write my own code to deal with an incoming HTML fragment? (or am I the only weirdo who does this?! 😅), this is your solution.
It wouldn’t be a fresh That HTML Blog installment without a demo, so for this one I thought I’d come up with a fun little strategy of returning messages from the backend so the frontend can display a toast notification. Since my fav UI library Web Awesome sadly doesn’t come with toasts yet (unlike its Shoelace predecessor), I roped in the Simple Notify library which is actually pretty stellar.
In a nutshell, my custom toasts htmx extension pulls in HTML like this:
<output class="toast-notification" data-title="Success!">
I'm a toast from the server!
</output>
Could you have come up with your own vanilla JS solution, perhaps fetching some JSON instead and using that to populate the toast values? Sure…but then that wouldn’t be very htmx-y now would it?
The htmx ethos is all about using HTML as the source of truth in your application. HTML fragments can be used verbatim to populate the DOM, but why not use HTML as an interchange format as well? After all, once you introduce custom elements into the mix, you’re practically writing XML using a schema you invent for the task at hand. Is that a bad thing? I would say no! And considering the recent outcry over removing XSLT from browsers, I’m not the only one. 😂
So that’s htmx extensions. What will you create with it? Hop on over to Human Web Collective’s indie_web_dev@humansare.social Forum or Discord to chat about this and a whole lot more!
Previously on That HTML Blog, I wrote about a new library I released called Reciprocate. The TL;DR is that it lets you add signal-based reactivity (both properties & attributes) to your otherwise vanilla web components (particularly handy for “HTML Web Components” aka custom elements with templates already rendered by the server and preexisting in the DOM).
However, if you wanted to write more “traditional” frontend UI components with dynamic, declaratively-written templates defined in the custom element JS itself, you’d need to reach for something else. Fans of libraries such as Lit might wonder Where is my html function? I want my html function!
Created by Joe Pea, it’s a 2.3kB minzipped library after my own heart because it does one thing and one thing only: give you an html function. 🙌 Essentially it lets you author tagged template literals containing HTML and extra syntactic sugar to set properties and add event handlers (just like in Lit!), and then insert those templates into any place in the DOM. It doesn’t even have to be anything related to a custom element, but obviously that’s where my brain goes.
In fact, as I read the details of how it works, it got me thinking: could I take some Reciprocate demo components and rewrite the HTML templates using nimble-html?
Yes indeed, it works beautifully. With just a bit of boilerplate all tucked away in a BaseElement, you can write a new component with DX which feels very familiar:
Pretty awesome, right?! I love the idea that I could use Reciprocate alone to assist with some simple server-rendered HTML Web Components, and then if I needed more client-side smarts and lots of dynamic interactivity, I could simply import html from nimble-html and we’re off and away.
This really does prove out my thesis that the best way of thinking about web components isn’t that you want to adopt the One True Library to do it all. Rather, you should layer on small, self-contained APIs as needed and build up your own frontend stack that works for your specific projects and use cases.
Ideally in the future, more and more of those APIs are provided by the browser itself. In the meantime, we can reach for userland solutions which focus on doing one or two things very well.
I cannot sing the praises of fetch loudly enough. As a frontend (and backend!) JavaScript developer, having ready access to this native web API where you can pull down HTML fragments or JSON data with a line or two of code is simply brilliant.
But Gabor Koos wants you to know there are a few things lacking in such a simplistic approach when it comes to shipping robust production-ready code:
The network is unpredictable and you have to be ready for it. You better have answers to these questions: What happens when the network is slow or unreliable? What happens when the backend is down or returns an error? If you consume external API’s, what happens when you hit the rate limit and get blocked? How do you handle these scenarios gracefully and provide a good user experience?
Honestly, vanilla Fetch is not enough to handle these scenarios. You need to add a lot of boilerplate code to handle errors, retries, timeouts, caching, etc. This can quickly become messy and hard to maintain.
And the solution Gabor provides is a tiny 2.3KB wrapper around fetch called, appropriately, ffetch. Here are some of its features:
Pending requests – real-time monitoring of active requests
Per-request overrides – customize behavior on a per-request basis
Here’s what a sample request looks like:
import createClient from '@gkoos/ffetch'

// Create a client with timeout and retries
const api = createClient({
  timeout: 5000,
  retries: 3,
  retryDelay: ({ attempt }) => 2 ** attempt * 100 + Math.random() * 100,
})

// Make requests
const response = await api('https://api.example.com/users')
const data = await response.json()
There are many more usage examples here, including how to write your own API client class with a lot of built-in smarts. In this regard, ffetch reminds me a bit of a Ruby gem I like to use when writing requests in a Ruby backend, faraday.
Some of you may be wondering “well, if I’m going to use a library instead of vanilla fetch, why not just use axios?”
Well, a couple of reasons I can think of. One is that ffetch mirrors the fetch API so you can use it as a “drop-in” replacement. Axios doesn’t even use fetch, it wraps the older XMLHttpRequest API. Second, it comes in at a whopping 14KB minzipped—roughly six times the size of ffetch. 🤯
So yeah. My advice is always to use native web APIs until you outgrow them (and get sick of writing a ton of boilerplate), then see if you can reach for something which is as lightweight and “vanilla-adjacent” as possible to layer on a few more smarts. ffetch appears to be a worthy candidate, and I look forward to trying it out on my next project.
Have you ever been chipping away on a vanilla web component when you began to wonder “hey waitaminute…how do I make it so when I set this JavaScript property, the equivalent HTML attribute is updated? And if I set this attribute, the equivalent property is updated? Or when either is updated, my component will re-render?! Where are all the APIs for this stuff??”
Alas, there are none. 😥
Why the native web platform doesn’t provide primitives for custom elements to act like, well, any other native element is a question for another time…because you need not worry about this strange oversight any longer! 🤓
Introducing Reciprocate. This has been a long time coming…
(Feel free to skip down to the explainer bits, or keep reading for a little backstory!)
Approximately three years ago, I first started noodling around with the then brand-new Signals package from the Preact team. It felt so whiz-bang and cool, I couldn’t even wrap my head around it at first. Once I reverse-engineered it enough that I could understand it…jeeezzus, what a mind job. 😎
After a bit of time passed, it started to dawn on me that this could prove to be a great underlying foundation for handling “fine-grained reactivity” within web components, as a lightweight alternative to the Lit base element (which I had been using a lot at the time).
I spent 2023 on-and-off iterating on a solution and for a brief while released it as part of some other experiments I’d been working on with and alongside Lit. Yet I eventually arrived at the decision to embark on development of my own custom element-based component format which would also offer server-based rendering and various neat-o features like HTML Modules. This umbrella project I called ❤️ Heartml.
As I was working on building up that feature set more and more, I chickened out on my original goal of offering a series of “unbundled” micro-libraries which people could simply add to their own vanilla web components. I figured if anybody would care to adopt Heartml, it’d have to be an install-and-you’re-done turnkey solution.
However…
Fast forward to summer 2025, and I was feeling incredibly uncertain about my approach with Heartml. One of the stated goals of the Lit project is that as more native web APIs offer good “DX” and helpful ergonomics for web developers, Lit itself would need to ship less code. I love that philosophy! With Heartml, I started to feel like I should take that concept even a step further by making all of its userland features opt-in, rather than just another magical black box you feed your own code into.
So I started over…with my original plan. 😅 I stripped out the reactivity from a monolithic HeartElement (which thankfully wasn’t too challenging because my code was still very modular), polished it up with some new smarts to make creating signals and writing “effects” easier than ever, and tada! I’m proud to release it as the first public Heartml library: Reciprocate. 🎉
So, what does Reciprocate do? To quote from the project:
Reciprocate is a helper utility for adding signal-based reactivity and attribute/property reflection to any vanilla custom element class.
The ReciprocalProperty class takes advantage of fine-grained reactivity using a concept called “signals” to solve a state problem often encountered in building components (vanilla or otherwise). Signals are values which track “subscriptions” when accessed within a side-effect callback. You write an effect, access one or more signals within that effect, and then any time any of those signal values change, that effect is re-executed.
Now if you’ve ever tried to write attribute/property reflection yourself (supporting multiple value types like numbers, booleans, and even JSON) and then include that custom code in all of your web components, you know what a PITA it is. And believe me, I’ve done it far too many times. I’m more than happy to let a tiny library handle it for me! And being able to use signals for those attribute/property values in order to write rendering effects against? That’s the cherry on top! 🍒
Reciprocate tries to be so unbundled, it doesn’t even come with its own signals implementation. In the readme it shows how to use Preact Signals, but you can also use alien-signals or potentially any other library out there. And one day if TC39 adds signals support into JavaScript directly, we can immediately leverage that!
Here’s an example of what writing the ubiquitous counter demo looks like using Reciprocate. Notice that we don’t inherit from anything other than HTMLElement. Yes, this is a real vanilla custom element! Reciprocate is simply an add-on. Even better, it uses “HTML Web Component” smarts to preserve the server-rendered content as-is and only “resume” it for future renders.
export class MyCounter extends HTMLElement {
  static observedAttributes = ["count"]

  static {
    customElements.define("my-counter", this)
  }

  #effects
  #resumed = false

  constructor() {
    super()
    this.count = 0
    this.#effects = reciprocate(this, signal, effect)
  }

  connectedCallback() {
    this.addEventListener("click", this)

    /** Create the effects which run whenever signal values are changed */
    this.#effects.run(() => {
      const countValue = this.count // set up subscription
      if (this.#resumed) this.querySelector("output").textContent = countValue
    })

    this.#resumed = true
  }

  handleEvent(event) {
    if (event.type === "click") {
      const button = event.target.closest("button")
      switch (button.dataset.behavior) {
        case "dec":
          this.count--
          break
        case "inc":
          this.count++
          break
      }
    }
  }

  disconnectedCallback() {
    this.#effects.stop((fn) => fn())
  }

  attributeChangedCallback(name, _, newValue) {
    this.#effects.setProp(name, newValue)
  }
}
OK, as you can see we’re handling click events, managing state, and re-rendering on state changes. All the typical stuff you need to deal with in components. And if you’re wondering to yourself, wait, how does it know that this.count should be turned into a signal?! That’s thanks to the power of Object.keys. Reciprocate knows what your properties are right when it’s called and sets up the appropriate signals and getters/setters.
And if you’re also wondering to yourself, well, that does feel like a fair bit of boilerplate mixed in with component code 😕…never fear. You can write a lightweight base element (an example is in the readme) and then each component will be about as simple as can be! For example:
There are a lot more details about how Reciprocate works in the readme, and you can even take a peek at the test suite for a riff on the “declarative signal binding” demo featured previously on That HTML Blog, which in theory would mean you could eliminate the rendering effects in JavaScript—let the HTML template provide the logic for you! 🤯
As I mentioned up top, Reciprocate is the first library in a series of libraries to come from the Heartml project. Next up, I’m hoping to feature an interesting take on server roundtrip of interactive commands based on using fetch (via forms or otherwise). By combining web components and a modular set of commands via server requests, you could build a very sophisticated client/server web application with still mostly vanilla code.
In summary…
A lot of libraries out there—even really good ones—want you to do things their way. I love Lit, but Lit is quite opinionated and falls down in scenarios where Lit is just a suboptimal solution. Likewise, I also admire htmx quite a bit, but that too is very opinionated and throws a lot of “magic” at you that you might not even want or need.
I like tools which adhere to the Unix philosophy. I like micro-libraries which you can chain together to make higher-order logic work the way you want it to. I don’t want to replace HTMLElement. I don’t want to replace fetch. I simply want to provide you with some ergonomics to make vanilla web APIs feel even more powerful and expressive. That’s the goal. Feel free to submit issues and PRs with your feedback and ideas!
Some Technical Details
Reciprocate is a small library, but some of the details of building it may interest you.
First of all, everything is written in JavaScript. Not TypeScript. This is a fundamental belief I hold: we should be writing for the web with the language of the web.
However, typing information is very useful and I would argue is a form of documentation. Furthermore, the act of static typing is the act of documenting your code. As such it makes perfect sense to use JSDoc. By annotating your code with JSDoc comments, both the types as well as other information you want to share about classes and methods and functions can all live within the same syntax that is 100% JavaScript compatible.
It’s easy to build and export types, as the TypeScript docs demonstrate. Other folks who write with JSDoc-based JavaScript or TypeScript proper can equally benefit from those types published in the NPM package. This sensible approach has already been adopted by a growing number of well-known packages such as the Svelte frontend framework.
Now let’s talk about testing.
In the past, I had been using Modern Web’s web-test-runner to write real browser-based tests for JS code, but a couple of reasons gave me pause for this latest project:
It felt a bit like a magic black box to me, and as you may be surmising by now, I hate black boxes. 😅
I had been using web-test-runner with Playwright for cross-browser testing which is a project from Micro$oft. I am actively boycotting all Microsoft technology at this point, which left me needing to use something else. While I’m by no means a fan of Google either, Puppeteer is a project by the Chrome team directly which feels relatively “neutral” to me. And if I’m going to bother switching to Puppeteer, why not re-examine my whole testing infra?
So after a ridiculous amount of yak shaving to find a possible alternative, I ended up with a blessedly straightforward setup using Mocha.
Mocha is an oldie but goodie in the JavaScript testing world, which is something I tend to love. Now out of the box Mocha doesn’t offer browser-based testing per se, which is why you need to spin up a static server in order to test via Puppeteer. Initially I used Express because that’s what was demonstrated in this Mocha example, but then I started to wonder if I even needed that dependency.
Turns out, I did not! (MDN) You can build a functional zero-dependency static web server with Node, as that linked MDN article shows, and after a few tweaks I was able to wave Express bye-bye.
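Here's a minimal sketch of what such a zero-dependency server can look like (the MIME table, port, and `createStaticServer` name are illustrative; the MDN article goes further):

```javascript
import http from "node:http";
import { readFile } from "node:fs/promises";
import { extname, join, normalize } from "node:path";

// A tiny lookup so the browser gets sensible content types
const MIME = {
  ".html": "text/html",
  ".js": "text/javascript",
  ".css": "text/css",
  ".json": "application/json",
};

function createStaticServer(root) {
  return http.createServer(async (req, res) => {
    // Map the request path onto the filesystem, defaulting to index.html
    const urlPath = req.url === "/" ? "/index.html" : req.url.split("?")[0];
    const filePath = join(root, normalize(urlPath));
    try {
      const body = await readFile(filePath);
      res.writeHead(200, {
        "Content-Type": MIME[extname(filePath)] ?? "application/octet-stream",
      });
      res.end(body);
    } catch {
      // Anything unreadable is a 404 in this sketch
      res.writeHead(404);
      res.end("Not found");
    }
  });
}
```

Then `createStaticServer(process.cwd()).listen(8080);` and you're testing against a real HTTP server with zero dependencies.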
The nice thing about this setup is it’s incredibly fast. I can run the tests from scratch in just a couple of seconds. And furthermore, I’m testing an entire website! If I were to spin up the server outside of the test harness and visit it in a web browser, I could see the exact same behavior on display.
The only possible downside to my setup is it’s buildless. 🤯 That’s right, I’m not using a bundler of any kind anywhere. Not for the tests, not even for publishing my package! Everything is raw ESM, and I think that’s the coolest thing ever. The static web server literally loads dependency code right out of the local node_modules folder. It’s possible this setup will break down in the future if I need to utilize a package that doesn’t publish to NPM well for a buildless architecture. But so far, so good!
(But…you may be wondering why I don’t publish a “dist” flavor of the package? Because there are plenty of CDNs out there which will do that for you! esm.sh, jsDelivr, and others. And of course, a local bundler like esbuild will likewise solve the same problem. It’s time to embrace our glorious ES Modules future, and bid CommonJS and shipping janky bundled/minified code farewell!)
One final note: the Reciprocate repo is hosted on Codeberg! That’s right, my boycott of Micro$oft includes GitHub as well, which at this point might as well be called Micro$oft Copilot CodeHub 3000. Personally, I refuse to play their idiotic games, and I heartily encourage you to migrate off of GitHub yourself and support a true open source ecosystem!
So there you have it. I’m very pleased with the architecture of this new package, so much so that I expect it to be the blueprint of many more to come in the ❤️ Heartml family. Enjoy! 😊
Over the past few years I’ve said on many occasions that signals, a reactive primitive popularized by many JavaScript frameworks and libraries (and easiest to pick up using the implementation by the Preact folks), make it much easier to add declarative binding to your HTML using basic vanilla DOM techniques + signal effects.
But I haven’t had a concrete example to show off just how simple yet powerful this can be…until now! 😎
In these examples, I show the ubiquitous counter demo, a todo list with a completion count, and a typewriter printing out a message—all with a binding solution in only 20 lines of code! (And that includes the import statement to load the signals library!) Here’s the entire solution:
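The real snippet lives in the embedded demo, but here is a sketch of roughly what those 20 lines do. (I've swapped the real `import { effect } from "@preact/signals-core"` for a run-once stand-in so the sketch is self-contained; the line-by-line walkthrough below refers to the real thing.)

```javascript
// Stand-in for `import { effect } from "@preact/signals-core";`.
// A real signals effect re-runs its callback whenever a referenced
// signal changes; this stub just runs it once so the sketch runs anywhere.
const effect = (fn) => { fn(); return () => {}; };

function bindSignals(container, values) {
  // Gather every element in the container carrying a bind-* attribute
  for (const el of container.querySelectorAll("*")) {
    const bindAttrs = el.getAttributeNames().filter((n) => n.startsWith("bind-"));
    for (const attrName of bindAttrs) {
      const key = el.getAttribute(attrName);
      if (!(key in values)) continue; // no matching signal? skip it
      const name = attrName.slice("bind-".length);
      effect(() => {
        const value = values[key].value;
        if (name === "text") {
          el.textContent = value; // escapes HTML, the safe default
        } else if (name === "html") {
          el.innerHTML = value; // unsafely sets raw HTML
        } else {
          el.setAttribute(name, value); // any other bind-* sets an attribute
        }
      });
    }
  }
}
```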
I import the relevant functions from the signals library.
I start the function block.
I first gather up all the elements within the container featuring attributes of bind-*.
I loop through each of those elements.
I collect one (or more) of the bind-* attributes.
I loop through those attributes.
If the values object (aka signals) doesn’t include the key provided in the HTML, we’ll exit the loop.
I get the name of the attribute minus the bind- prefix.
I create a new effect. This is where the magic happens! This callback will re-run every time any signal or computed value changes that gets referenced within the effect.
If the attribute is bind-text, I’ll set the text of the element with the value.
By using textContent, we ensure all HTML is escaped (so this should be the default approach).
Or if the attribute is bind-html, I’ll set the element’s innerHTML with the value.
And here’s where we (unsafely) do that. 😅
Otherwise…
I’ll use the value to set the element’s attribute using whatever the label was coming after bind-.
End block.
End block & function.
End block.
End block. (Monotonous, isn’t it? 😃)
End block.
And that’s all folks! Now you can simply pass a container element and an object containing signal values to bindSignals and suddenly your bog-standard HTML has been granted superpowers!
Caveats: there are a few more smarts we should consider here before yeeting the code right into a repo and pushing to production!
The attribute setting logic is too basic. What about boolean-style attributes like hidden where you want value === true to result in a hidden="" attribute and value === false to result in the removal of that attribute? I think this could be done with a simple change such as values[attr.value].value === false ? el.removeAttribute(name) : el.setAttribute(name, values[attr.value]), but it needs testing to validate!
You might want to bind properties of elements and not just attributes (which will always resolve to string data), but that’s a whole ‘nuther can of worms!
While I showcase using this technique within a custom element, that also opens up a can of worms because attribute/property reflection is usually something you want a custom element to support as part of reactivity. While it’s beyond the scope of this article, I’ve written such code in vanilla JS as well—more than once!—so perhaps I’ll document that in a future installment.
Sometimes you’ll want to bind an array and loop through multiple elements which include bound values of objects inside the array, not just one-off values. This is definitely another degree of complexity as you have to manage adding/updating/removing multiple items (usually involving an embedded <template> which is used to “stamp” each collection item).
If you were to set up “nested” components where the bound HTML of “inner” components shouldn’t mess up “outer” components with their own bound HTML, this solution would break. You’d need to figure out how to determine the unique boundaries of each component. If you use web components with shadow DOM, however, that might be problem solved! (Assuming such an approach doesn’t introduce other problems…)
And finally…you will want to clean up effects when elements are removed from the DOM. If you’re in a standard website environment (an MPA), that’s not an issue because a full-page refresh will clear the JS execution context as well as the DOM. But in an SPA environment, if you remove elements which have been bound, suddenly you introduce memory leaks and possibly errors. The effect method returns a disposal callback—the most obvious solution would be to gather those as you set up a custom element in connectedCallback, and then in disconnectedCallback you execute the callbacks to cancel the effects.
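Here's a hedged sketch of that last idea. The element, signal, and effect below are stand-ins (with a shim so the sketch also runs outside the browser); in a real component you'd import `effect` from your signals library and bind real signals:

```javascript
// Shim + stand-ins so this sketch is self-contained
globalThis.HTMLElement ??= class {};
const count = { value: 0 };
const effect = (fn) => { fn(); return () => { /* a real disposer stops re-runs */ }; };

class CounterCard extends HTMLElement {
  #disposers = [];

  connectedCallback() {
    // Keep each effect's disposal callback instead of discarding it
    this.#disposers.push(effect(() => { this.textContent = count.value; }));
  }

  disconnectedCallback() {
    // Execute the callbacks so removed elements don't leak live effects
    for (const dispose of this.#disposers) dispose();
    this.#disposers = [];
  }
}
```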
Whew! Don’t let that all throw you off however! I think for simple projects where you want something smarter than littering your vanilla JavaScript with a bunch of textContent= statements everywhere, but less committed than pulling in an entire binding/rendering library, you can start with a solution as basic as this and ratchet up the complexity only when needed. And don’t forget, once you have a more established solution working, you could wrap it up into a little library of your own and re-use it across multiple projects.
So what do you think? Does this get you excited about using signals in your vanilla JavaScript? Are you hoping the web platform adds signals as a native feature so we don’t need to pull in a userland library? (I sure am!) Stranger things have happened!
Ever since the momentous release of Tailwind CSS version 4 at the start of this year (January 2025), I have been telling people that the CSS Framework Wars™ are finally over. CSS won! And amazingly, so did Tailwind. Plot twist!
You could take an existing project, add Tailwind 4 to it, and nothing needs to change (provided you import only TW’s design tokens and utilities, not its full reset). Then you could incrementally use some tokens or some utilities only when and where needed.
Tailwind has desperately needed to understand it’s just a tool coexisting with many other tools in a vast developer landscape where vanilla web standards have been improving exponentially. CSS in particular has leaped forward at a dizzying rate. Tailwind’s “best practices don’t work” marketing line is ludicrous in 2025 when the “best practices” being argued against are from 2010.
Regarding that last line, I’m pleased to say that’s nowhere to be found on the new Tailwind homepage. In fact, their biggest marketing line now is:
Tailwind is unapologetically modern, and takes advantage of all the latest and greatest CSS features to make the developer experience as enjoyable as possible.
And while “modern” in some contexts is silly indeed, in this case that’s exactly what we’ve been clamoring for. Tailwind 4 may look familiar to longtime users, but it’s actually a ground-up rewrite that feels like it was developed by CSS fans, for CSS fans. Gone are the days of configuring the CSS framework with (*checks notes*) JavaScript, and of design tokens which simply didn’t exist in vanilla CSS as real CSS variables.
If you had polled a bunch of people who didn’t like Tailwind before (me! me! ooo pick me!) and asked them how to “fix” Tailwind; then swirled the responses in a jar, pulled most of them out, and then made those fixes, that’s essentially what we got in version 4. 🤯
That’s a pretty big swing, and one I simply couldn’t be more thrilled with. And to demonstrate just how excited I am, I created a demo!
It’s a simple site built with Eleventy + PostCSS, and it showcases Tailwind 4, Web Awesome (formerly Shoelace), and custom vanilla CSS all playing quite nicely together. The Tailwind config is only slightly non-standard, and I’ll explain why.
Configuring Tailwind to Be “Non-Viral”
The out-of-the-box default way to install Tailwind is to add this to your CSS entrypoint:
@import "tailwindcss";
This imports the “preflight” (a basic CSS reset), Tailwind’s theme (aka all the design tokens), and the JIT (Just-In-Time) utility classes functionality.
That will work insofar as you want Tailwind, only Tailwind, and nuthin’ but Tailwind. However, if you have literally any other setup to your project—perhaps you already have a CSS architecture in mind (pssst, my course CSS Nouveau is now free! 😎), or you want to use an established design system, or whatever the case may be—what you really want is an installation of Tailwind which doesn’t “take over” anything. Literally until you need to use a particular Tailwind feature, it should be invisible…a “non-viral” flavor of Tailwind.
That is a surprisingly straightforward proposition! Here’s what our CSS entrypoint should be changed to instead:
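Based on the description below and Tailwind 4's documented import options, the entrypoint looks something like this sketch (check the demo repo for the exact version):

```css
@layer theme, base, components, utilities;

@import "tailwindcss/theme.css" layer(theme) theme(static) prefix(tw);
@import "tailwindcss/utilities.css" layer(utilities) prefix(tw);
```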
The @layer directive sets up your CSS cascade layers for the Tailwind imports. All major browsers since March 2022 support cascade layers, so at this point in time you should be fine. 🤞 (Tailwind 4 specifically requires browsers this new.)
We skip importing any preflight and go straight to the theme import, applying it to the theme layer. We ask for a “static” theme meaning all the CSS variables of the theme are exported (this is critically important as I will soon explain), and we also specify that all the variables should have a tw prefix. So instead of --color-red-500, you’ll get --tw-color-red-500.
Finally, we import the JIT utilities via the utilities layer and similarly we ask for a tw prefix. So instead of text-red-500, you’d write tw:text-red-500 (note the colon here…this tripped me up for hours wondering why tw-text-red-500 wasn’t working! 😅).
The reason we need to specify theme(static) is that by default, Tailwind will apply JIT logic to the CSS variables as well, meaning only the variables needed to support utility classes which have already been used in the project will be included in the theme. This is supposed to be a performance gain, but I think it’s bananas because why would you ever want your “design system tokens” to be missing and unavailable to vanilla CSS use cases in production?!
Thankfully with that resolved and this new CSS entrypoint in place, it is now possible to use Tailwind alongside literally anything and you incur no hassles and no penalties! You can write this:
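For example, here's a hedged sketch of plain vanilla CSS leaning on the tw-prefixed theme tokens (the selector and the particular tokens are made up for illustration):

```css
.card {
  padding: calc(var(--tw-spacing) * 5);
  color: var(--tw-color-red-500);
  font-family: var(--tw-font-sans);
  border-radius: var(--tw-radius-lg);
}
```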
I personally would prefer to use calc(var(--spacing) * 5) rather than --spacing(5) (a Tailwind-ism) to keep things as vanilla as possible, but that minor quibble aside, gosh darn if it doesn’t feel good to be writing real CSS! Banish @apply ... and theme(...) to the hell from whence they came! (Technically they’re still available, but please for the love of god let’s not keep using them!)
Tailwind’s new CSS-first theming documentation is quite extensive, and you can easily add your own custom variables, styles, and even utilities all using vanilla or vanilla-adjacent syntax. You quite literally never ever have to touch a JavaScript config file ever again, ever. 🥰
I can’t even begin to tell you what a relief this all is. Do you know how much time I’ve spent in the past writing code manually to convert Tailwind’s proprietary JavaScript-based theming system to native CSS variables? Hours. Days. I even wrote an app that does it. We can chuck that out the window now! 😄
So What’s the Catch?
There’s not really any catch. I will say that all my usual caveats about why it’s better for 95%+ of your styling to be authored in real stylesheets instead of littering your HTML with cryptic shorthand class names are still in effect. I’m not saying I, Jared, will personally switch to Tailwind utilities whole hog and stop authoring .css files.
But the genius of Tailwind 4 is you don’t have to make those lofty decisions on a project-by-project basis. You can make those decisions on a page-by-page, or component-by-component, or line-by-line basis. You could prototype a new landing page quickly with utility classes, then strip out those utility classes and use Tailwind’s design tokens in a 100% vanilla .css file instead. You don’t need to compromise one way or another. You can convert variables-in-stylesheets to utilities-in-HTML and back again, however it makes sense. This is how it was always supposed to be. (As I’ve been shouting from the rooftops for years now!) And boy am I thankful the Tailwind development team decided this was a direction they could go in; which, to be perfectly honest, removes the majority of complaints I have had with Tailwind.
It’s weird to no longer be a vocal Tailwind critic, but hey, at least I can still go beat up on the React crowd for breaking web components…oh snap, that’s been fixed too! 😂
Hallelujah! 🙌 2025 is a freakin’ incredible year for fans of vanilla-first web development.*
* (Except for the rise of genAI wreaking havoc on our industry, but that’s a tale for another time…)
August 2025 Update: I actually overcomplicated my workaround described below. There’s a simpler workaround, and that’s in this CodePen—thanks to the commandForElement property which lets you specify via JavaScript literally any element to be the recipient of a command invocation!
Yes, I am a web components hammer and every API looks like a shadow DOM nail.
(I’m not sure that analogy quite makes sense, but you get the picture!)
So I thought it would be fun to revisit the Invoker Commands API (MDN) which is available in Chromium-based browsers and hopefully coming soon to Safari and Firefox (both are currently in testing).
I’ve talked about this API before on the blog, and I continue to feel pretty excited about it since it’s the first truly declarative “click and see something happen!!” API we’ve ever seen on the web other than forms. I mean, think about it for a moment…isn’t it quite odd that there’s never been a way to write HTML (not JavaScript!) which says “when you click this button, do that thing”?!
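To make that concrete, here's the canonical invoker commands example using only built-in behavior, with not a line of JavaScript in sight:

```html
<button commandfor="my-dialog" command="show-modal">Open the dialog</button>

<dialog id="my-dialog">
  Greetings!
  <button commandfor="my-dialog" command="close">Close</button>
</dialog>
```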
Now we can do that…nope, we can’t do that if we’re inside shadow DOM and the button being clicked is supposed to do a thing via its host component. 😡
Le sigh…
But thankfully there’s always a workaround in the Wide World of the Web, so I’ve written up that workaround (using Keith Cirkel’s cross-browser polyfill currently needed for invokers) and here’s a CodePen demonstrating it in action. I’m calling my little solution Invocably and maybe some day I’ll package it up as an NPM package so you can put the package in your package, yo dawg. But for now, CodePen it is.
A few notes:
The way invoker commands work is you need a commandfor attribute with the ID of another HTML element. The problem is, when you’re inside shadow DOM, there’s no ID available for the host component, as the host lives hoisted up in “light DOM”.
Invocably solves this by injecting a tiny little web component inside the shadow DOM with an ID of invocably that is a CommandEvent target. It then adds the appropriate commandfor attribute to every button in the shadow DOM with a command attribute so the event handler will work.
That injected component then forwards the event onto a handleCommand method in your host component. Bonus points for working even for commands coming from elsewhere in light DOM. So you can write a simple command API right there for your component and Bob’s your uncle!
The only big monkey wrench in the works that I can foresee is that if we’re talking about web components with shadow DOM, we might also have a whole design system going on where the buttons are also custom components. And if you’re using groovy-button instead of a simple button, I’m guessing this API just isn’t going to work automatically. The design system button would also need to replicate Invoker Commands click behavior.
But I don’t see that as a deal-breaker. In fact, I think the whole point of having a web-native API like this is so that even when we need to build more bespoke APIs for one use case or another, we can build directly on top of platform mechanics instead of having to reinvent the wheel from scratch every single time.
My hope is that in the near future, every vanilla-friendly web app & UI framework will be built out of the idea that invoker commands are how “click and a thing happens” gets done. It’ll just be baked into the system, with “commands” being an API primitive for every interactive component. This is a huge step forward for the web feeling like it’s a bona fide application development platform rather than a glorified document viewer, and I’m so here for it.
Ever since the backdrop-filter CSS property became a thing, web designers have been able to combine it with careful usage of gradients and shadowing to make objects appear glassy to varying degrees. In recent years, we’ve even seen the rise of glassmorphism as a UI aesthetic.
Now with Apple’s rollout of Liquid Glass and an enthusiastic embrace of design details which seem to move us ever further away from “flat design” (a most welcome development in my opinion as a long-time lover of textures, shading, and tasteful 3D), I have no doubt we’ll see more attention paid to bringing elements of glassmorphism to web-based design systems. While I don’t recommend trying to recreate Apple’s Liquid Glass effects verbatim, we can use Liquid Glass as a point of inspiration for designing our own uniquely delightful interfaces.
To kick off such coverage here on That HTML Blog, I’ll direct you to this tutorial by Josh Comeau featuring some clever techniques for making frosted glass effects via backdrop-filter appear more realistic. As Josh puts it:
Here’s the problem: The backdrop-filter algorithm only considers the pixels that are directly behind the element.
By default, the gaussian blur algorithm is applied to all of the pixels behind the element. This means that if a big colorful element is near the element, it won’t have any effect.
That’s not really how frosted glass works in real life though. Light bounces off of objects and then goes through the glass. It looks so much better when the blurring algorithm includes nearby content.
Separate from this article, I’ve also been experimenting with adding additional filters besides blur to backdrop-filter such as adjusting contrast down and saturation up so you can get pretty exciting color effects without the headache of huge light/dark swings (a problem we saw even in Apple’s early Liquid Glass betas before they made tweaks to tame the excess). I hope to write up a demo for that soon.
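As a teaser until that demo is ready, the general shape of the idea is a sketch like this (the values are eyeballed, not final):

```css
.glassy-panel {
  /* Blur what's behind the element, pull contrast down, push saturation up */
  backdrop-filter: blur(16px) contrast(0.8) saturate(1.6);
  background: rgb(255 255 255 / 0.1); /* a faint tint helps sell the glass */
}
```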
Ultimately I think the best designs will pull from a number of graphics techniques to make it seem fun and fresh. Anything to move us past the era of no-depth boring-ass solid colors. (Speaking of which, may I interest you in a colorful squircle button?)