
Asa Dotzler's Blog Posts

“But AI gives me super powers!”

Close-up of a bolt with its hexagonal head rounded off / stripped.
Photograph stolen from Richard McCuistian of CarParts.com

I often see comments from (mostly amateur) programmers claiming AI gives them programming superpowers. I don’t believe that’s true. I think LLMs generally take longer to master and provide inferior results compared to standard tools and methods we’ve had for years.

It’s like my auto mechanic bragging: “My magical cutting and welding machine fabricated this amazing wrench and removed those two bolt thingies lightning-quick, only stripping one!” My guy, there are precision wrenches in the toolbox that are easier to use and wouldn’t make the next mechanic’s life hell or cost me more down the road.

It’s not a perfect analogy, but novices relying on inconsistent, ad-hoc tools they don’t understand to do work they don’t understand sounds like a recipe for higher failure rates and costs.


Good Riddance

The man responsible for iOS 26, the worst usability regression in Apple history, has jumped ship for Meta, and presumably taken top minions with him.

Alan Dye's background, before he became Apple's software interface design exec, was brand design and print advertising, about as far from human-computer interface work as you could find in the design space.

The iOS 26 interface, and the even less well thought out macOS 26 Tahoe interface, have been ridiculed by many for terrible legibility resulting from Dye's widespread use of transparency effects. It was so bad that the core feature of the design, called "Liquid Glass," was quickly amended with a user preference for a "Tinted Glass" mode that returned some legibility to the interface.

Dye doesn’t understand interface design and his work at Apple regressed their UI progress by more than a decade. He wasn’t even liked or trusted at Apple and had a terrible reputation across the broader interface design community. Apple blogger John Gruber had this to say of him.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray.

A fraud. Let that sink in. Apple, renowned for its user interfaces for nearly half a century, put a man widely considered a fraud in charge of user interface. And though it was former Apple VP of industrial design, Jony Ive, who put Dye in charge, Tim Cook is ultimately responsible.

Tim Cook has been terrible for Apple customers. His product instincts are worse than useless; the 10 years and $30B blown on his utterly failed attempt at a product legacy, the Vision Pro VR goggles, are as clear an indication of that as any. But many critics at least gave him credit for building a solid team. That credit was misplaced. Most of Cook's deputies over the last 15 years have been pushed out or left willingly after spectacular failures, across nearly every part of the business, from AI to design to retail to Siri and search.

Tim Cook’s only customer for his entire tenure at Apple is Wall St. Steve Jobs’ supply chain expert, Cook has inverted Apple’s approach, molding existing products to more closely fit Apple’s supply chain rather than managing suppliers to support new and innovative products.

Tim Cook doesn’t give a shit about users, only that Wall St. is happy. Cook has been great for investors, terrible for customers. At least today we can find some comfort that another one of his failures, Alan Dye, is no longer around to wreak havoc on even more of SJ’s products and legacy.


iOS 26 is shaping up to be a usability nightmare

Apple’s new design language for iOS, called Liquid Glass, makes controls and other surfaces translucent. In addition to the translucency, there are animated distortion and lighting effects on these elements. Combined, these reduce overall usability, for no reason other than fashion. There are no features in this new OS where Liquid Glass improves usability, and countless regressions that will make iPhones harder to use.

Photo stolen from the web. Hand holding iPhone.
iPhone OS 26 beta with translucent controls. The Control Center panel is pulled down over the home screen. Home screen colors all bleed through the panel and many of its buttons sit directly on top of home screen buttons. This gives some of the control panel buttons, which are otherwise mostly colorless, bright coloring.

The foundation of Liquid Glass is translucency: things from the background show through to the foreground. Translucent buttons and other controls over arbitrary and changing content means the controls themselves become arbitrary and changing, and that’s just bad for usability. UI elements changing appearance because the content underneath has changed not only makes controls less recognizable, it often directly impairs readability.

With iOS 26, a translucent button might be tinted red with one wallpaper and green with another. Text might appear white against a white background, or black on black. Button borders can vanish depending on what’s underneath. Simply adding a new app to your home screen could change the coloring of all the controls on a sheet that opens above it. The Music App’s toolbar might be visible on top of one album cover but not another.

This is terrible for usability and accessibility.

Before I go further, understand this: accessibility is usability. Usability means usable by everyone, those who think of themselves as disabled, and those who don’t.

Consider contrast and its role in readability. White text on a white background is unreadable to everyone, right? But what about light gray text on a white background? It turns out we can measure these things and follow widely accepted guidelines to ensure legibility.

The Web Content Accessibility Guidelines, the most authoritative standard for software, call for a color contrast ratio of at least 4.5:1 for text legibility. Contrast ratio measures the difference in perceived brightness (luminance) between text and its background. A ratio of 1:1 means no contrast at all (e.g., white on white) and 21:1 is maximal contrast (e.g., black on white). The 4.5:1 minimum ensures most people can read the text. The further below that threshold, the more people are excluded.
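For concreteness, the ratio the guidelines describe can be computed directly from WCAG's relative-luminance formula. A minimal sketch in Python (the sample colors are my own picks for illustration, not Apple's):

```python
def _linearize(channel: float) -> float:
    # Convert an sRGB channel (0-1) to linear light, per the WCAG 2.x definition.
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    # Perceived brightness of a color, weighted for the eye's sensitivity to green.
    r, g, b = (_linearize(c / 255) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(a: tuple[int, int, int], b: tuple[int, int, int]) -> float:
    # WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging 1.0 to 21.0.
    la, lb = relative_luminance(a), relative_luminance(b)
    lighter, darker = max(la, lb), min(la, lb)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((255, 255, 255), (0, 0, 0)))      # black on white: 21.0
print(contrast_ratio((255, 255, 255), (255, 255, 255)))  # white on white: 1.0
print(contrast_ratio((118, 118, 118), (255, 255, 255)))  # mid gray on white, just above 4.5:1
```

That mid gray (#767676) is roughly the lightest gray that still passes the 4.5:1 minimum against white; anything lighter starts excluding readers.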

Liquid Glass means buttons, controls, and text now have variable contrast ratios. White text within a translucent dialog might be very legible over a dark background. But on a light background, it could be harder to read, or even completely unreadable. Buttons suffer too. A button with little contrast against its background makes the tap target unclear. A highly contrasting border might help that button, but Liquid Glass uses translucent borders too, which can’t be relied on either.
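To see why translucency makes contrast unpredictable, you can composite a translucent surface over different wallpapers and measure the result. A toy sketch, where the 35%-opaque white "glass" and the wallpaper colors are made-up values for illustration, not Apple's actual material:

```python
def _lum(rgb):
    # WCAG relative luminance of an sRGB color.
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(a, b):
    # WCAG contrast ratio between two opaque colors.
    hi, lo = max(_lum(a), _lum(b)), min(_lum(a), _lum(b))
    return (hi + 0.05) / (lo + 0.05)

def composite(fg, alpha, bg):
    # Standard "over" alpha blending, per channel: result = alpha*fg + (1-alpha)*bg.
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

GLASS = (255, 255, 255)  # hypothetical translucent white material
ALPHA = 0.35             # made-up opacity, for illustration only
WHITE_TEXT = (255, 255, 255)

for name, wallpaper in [("dark wallpaper", (20, 20, 40)),
                        ("light wallpaper", (240, 240, 235))]:
    surface = composite(GLASS, ALPHA, wallpaper)
    print(f"{name}: white text at {contrast(WHITE_TEXT, surface):.1f}:1")
```

Same text, same button material: over the dark wallpaper the white label clears the 4.5:1 minimum comfortably, while over the light one it collapses toward 1:1, effectively invisible. The wallpaper, not the designer, decides legibility.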

Text, buttons, and controls in iOS 26 will literally disable millions, perhaps hundreds of millions, of users. Many people who could effectively use earlier versions of iOS will now have to dive into the “Accessibility” settings just to restore basic usability, or find another smartphone altogether. And this will affect some groups more than others. Older adults, for example, will struggle more than younger users. That’s because aging eyes experience thickening and yellowing of the lens, reducing light and contrast sensitivity.

And it’s not just a vision issue. People with memory challenges or other cognitive disabilities might struggle to find a button that was red yesterday and green today. Others may be distracted, overstimulated, or disoriented by the background “noise” bleeding through, and the animated reflections may worsen things, especially for people with motion sensitivities.

Apple has spent decades patting itself on the back for the accessibility of its software. Its quarterly shareholder reports routinely highlight its commitment to making inclusive products. And that praise was justified for many years. As an accessibility professional, I’ve long held Apple up as an example. Apple has done a good job on accessibility and it couldn’t have done so without teams of skilled usability and accessibility experts who had the power to insist on meeting basic standards.

That no longer seems to be the case.

The talent has left, or more likely, leadership has disempowered them, because this isn't good design. It's fashion over function, the opposite of what good design is supposed to be.


The Opening

After more consideration than I’d planned, I think this new anti-steering judgement against Apple, if it stands, will have considerable impact on Apple and the web.

The web has a large and competitive market for payment providers with just about every feature a developer or distributor could want, at low, low prices.

Apple has been cultivating its walled garden, farming developers with a 30% cut right off the top, for over 16 years. Apple was fully immune to competitive threats thanks to a policy that prevented App Store developers from linking users to web payment options, or even telling their users from within the app that payments could be made at the developer's web site.

Apple has done everything in its power to maintain that wall, to hold onto its outrageous 30% cut with a defense it thought impenetrable. All the while, outside that wall, on the world wide web, competition forged a broad and capable ecosystem around payment processing.

The wall is now broken. It has not been removed, and Apple's grip remains strong, but it is like the breach of the Deeping Wall at Helm's Deep: once thought impossible, it is the new reality.

Screen capture from the Lord of the Rings' Battle of Helm's Deep showing the Deeping Wall explosion and opening.

Through that opening, App Store developers will be one user click away from collecting payments on the web, freed from Apple’s extortion, regaining a larger portion of their customer payments.

Big players like Spotify that already maintain capable payment setups on the web will immediately update their apps to offer users a web payment option rather than using Apple's in-app payments (IAP). One link click and the developer and user both win, sometimes dramatically so.

Smaller apps will need to find a web payment provider and possibly other web providers to cover the parts of Apple’s offering they require. Once they’ve determined it’s a better deal and signed up for those services, developers can add a link and any kind of promotion they want to any part of their app experience. I expect new users will be shunted directly to the web payments while existing users may receive incentives to move payments out of the app and away from Apple’s huge cut.

If that’s where we land, Apple has two choices.

The first is to deploy a new App Store developer agreement that attempts to recoup the losses Apple is facing with new fees or increases to existing fees.

The second, and far better, option is for Apple to do the right thing and compete. Apple could immediately drop its rate from 30% to 10%, globally, and commit to making the developer and customer experience of the App Store the best in the world. Asserting its commitment to excellence, to being the best, Apple could seize this opportunity to invest in the App Store experience and take it from barely tolerable to awesome.

Apple absolutely can compete on a fair playing field, and this one is still tipped in their direction, so they have no excuses. Make the App Store great and win developer loyalty on the quality of the service, not the strength of the walls.

We’ll just have to wait to see which way Apple goes.


Technology Pricing

In an earlier post I wrote:

Both [Apple and Meta] are pursuing AR (augmented reality) spectacles and also pushing the false narrative that spectacles are the natural evolution of goggles, that goggles weren’t an outrageously expensive mistake and would evolve to something better, as almost all of our tech gets smaller and better over time.

The second lie instigated by these companies and now percolating around the online VR fan base, particularly Apple users who paid about $4K for their goggles, is that hardware prices necessarily come down over time, that a second version will be more affordable than the first, and the third version even more so because “that’s just how technology works, it gets smaller and cheaper with each new version.”

But device prices are based heavily on hardware costs and those costs usually only come down with scale. There are cost savings as the cutting edge components, last year’s flagship chips for example, migrate to the mid-range a year or two later, and not long after that are incorporated into budget models. But that only happens when the scale of the flagship component is large and the demand for more budget friendly options is high. For laptops and smartphones, which have massive scale, components come down in cost rather quickly and this means that last year’s top of the line parts can be resold in this year’s mid-range products at more affordable prices.

Unfortunately for VR fans, goggles rely on low-volume cutting edge technologies and those components, like the micro-OLED displays in the Vision Pro or the bespoke lenses in the Quest 3, are all goggle-specific, where volumes are low and new generations are slow to arrive.

Those outrageously expensive Vision Pro displays, made by one company in extremely low volume, aren’t going to magically become less expensive this year compared to last, and that’s the case with most of the components in VR goggles. Costs don’t fall dramatically for short run custom parts and so prices won’t fall without headset vendors taking a haircut.

If Apple is going to bring down prices for its head mounted computers, it's going to have to ramp production dramatically or lower the quality. Meta recognized a year or two ago, when sales of its Quest headset began to stall out at numbers that would embarrass the last-place console maker, that scale wasn't going to happen, so it built a budget model with cheaper components and worse visual fidelity and ergonomics, shaving about 20% off of the cost and price (Meta sells Quest at near cost while Apple sells Vision Pro at about a 250% markup). Unfortunately, the budget model hasn't sold very well, doing little or nothing to reverse the slide in Quest usage.

VR hardware companies are in a bind. The technology is expensive and demand is tepid. Without increased demand, or some technological breakthrough that’s unlikely to materialize, they can’t get the scale to bring their costs down which means high prices for consumers which further curbs demand.

So, what can they do? My advice would be to put all they can into finding the software killer app, the use case that will make these expensive products useful to a much larger audience. Meta pinned those hopes on gaming; Quest is a game console for your face. But a niche within a niche isn’t scale. I think Apple saw Meta struggling to break into the mainstream with a gaming device and so positioned Vision Pro as a productivity device. Unfortunately for Apple, most people do their productivity stuff at work and no one wants to wear goggles at work.

We're probably entering another VR winter while we wait for the next technology breakthrough or for an entirely new use case to emerge. Hopefully next time's the charm.


Bluesky is Different (for now)

I started using Facebook before “open registration” when it was still mostly exclusive to colleges and universities, people with .edu email addresses. I was working at Mozilla and Facebook was testing the waters around bringing more people to the social network. The first new category of users they let in after .edus was non-profits, and Mozilla was a friendly non-profit so mozilla.org emails were allowed to sign up.

Those were the days when you had a “Wall,” an extended profile page where you’d post messages that would also show up in the “News Feed,” a relatively new page then, one that aggregated the messages from all of your friends. This was still years before Farmville and other games hit the platform, the beginning of the end for me.

With open registration's massive growth in users and the proliferation of games and other spammy nonsense cluttering the News Feed, I could see the writing on the wall, as it were. Eventually the reverse chronological News Feed would be renamed to "Home" and your friends' messages would be interrupted with Facebook "featured posts," the beginning of recommended content. To put the last nail in the coffin for the News Feed, Facebook changed the sorting to some undecipherable order that made keeping up with friends even more difficult than the spam of the games era had.

I was also fairly early on Twitter. I signed up around the time of the SXSW award that propelled them into the limelight. That was shortly before a friend and colleague, Chris Messina, proposed the hashtag which soon after made following topics, not just people, pretty easy for users. That was 2007.

By about 2010 or 2011 Twitter was gaining traction with a considerably larger, non-tech user audience and the site was regularly folding under the weight of those numbers. That was the era of the "fail whale," a graphic that would show when the site was unavailable. It was also when fake followers, spam, and the slow decimation of third party clients began turning Twitter from an open playground into a manure-filled "walled garden."

Facebook and Twitter were the two places I could share and keep up with my friends and colleagues, but by 2012 and 2013 both had dramatically degraded the user experience, prioritizing ad revenue over usability. That meant an increasing volume of ads in the feeds and extreme user tracking. Oh, and the above-mentioned algorithmic feed manipulation.

Eventually both grew into the sewers they are today, no longer social networks but advertising networks called “social media.”

Today, those companies are owned and operated by billionaire assholes whose only interests are power and wealth. They manipulate users for profit and are no longer recognizable to the people who boosted them into the mainstream a decade and a half ago.

I've abandoned both platforms, as a result of the decline in usability and the moral bankruptcy of their owners, and I've settled on a new social app called Bluesky. Bluesky is superficially similar, in many ways, to those I've abandoned, but it has one feature that makes all the difference in the world to me: custom feeds.

At Bluesky, there’s no requirement that you eat what the company feeds you. You can build your own feed, one that follows the rules you create, or you can easily use one that someone else has created that serves your needs.

My primary feed at Bluesky is not the default “Following” feed that’s filled with shared links with clickbait and brand images leading to news sites covering the outrage of the moment and all the repeat re-posts that amplify that noise. Instead, I use a feed built by the Sky Feeds account called OnlyPosts. This feed does exactly what it says, showing only the posts from people you are following, nothing else. There’s no recommended content, no re-posts, no replies, just the content posted by the accounts you care about. And, if you want to see what’s happening with friends of friends, Sky Feeds has another one called Mutuals that you might find valuable. I keep that one around for when the OnlyPosts feed isn’t moving as fast as my desire for distraction :-)

Bluesky's default "Following" feed hardly does this, but the OnlyPosts feed makes Bluesky tolerable and meaningfully differentiates it from Twitter. That's because the default feeds for these sites, even Bluesky, prioritize engagement above all else. Engagement, the time spent interacting on the site, is best served not by a rich collection of messages from your friends, colleagues, or anyone else you follow, but by maximizing outrage and strife.

I don’t know about you, but I get enough of that already. Exiting Twitter and Facebook was about getting away from that noise and manipulation. Fortunately for me, Bluesky allows me to do just that with user-created feeds, including the amazing OnlyPosts feed.

If you're looking for a place where you can read what others are saying, not just a wall of clickbait that enriches some of the worst people on the planet, give Bluesky and custom feeds a chance. I'm loath to integrate a new social app that's quite likely to go downhill eventually, but for now custom feeds tip the balance in favor of that investment. Maybe I'll see you there.


Worse Is Better

Around 1726, Montesquieu wrote, "Le mieux est le mortel ennemi du bien." Translated to English it reads, "The best is the mortal enemy of the good."

Simplified to “Perfect is the enemy of good,” in modern times, it warns against chasing goals which are so lofty as to prevent more practical accomplishments that solve for “good enough.”

I really like Richard P. Gabriel’s variation on this for software development, “Worse is better,” which suggests that fewer capabilities usually means higher quality achieved at lower cost. This contrasted with the earlier Stanford/MIT software development approach, “The Right Thing,” that stressed the need to support all reasonable use cases in simple, correct, and consistent ways.

Today, the Chinese frontier large language models used in modern "AI" cost millions to tens of millions of dollars to develop, while the frontier models in the US cost hundreds of millions. This, of course, influences the prices these companies must (eventually) charge to support continued development and a return for investors.

According to the Institute of Electrical and Electronics Engineers (IEEE) the latest benchmarks comparing the leading Chinese models with the leading US models show the Chinese models lagging only 1.7% behind the US ones, the gap closing from 9.26% in the last year.

I believe that the fast followers with slightly worse models and far better prices are going to be the ultimate winners. During this bubble period, the leading US makers have all the cash they need to subsidize end user prices, but that cannot last. At some point, likely not far in the future, the bill will come due; if OpenAI, Meta, or Google have not found ways to lock in users, they will quickly be surpassed by more affordable model makers with “good enough” tech.

There is no "perfect" here to be had. The claims the big US model makers have been making about LLMs becoming self-aware and taking over all of humankind's work burdens were always bullshit.

Bullshit sells in Silicon Valley. Only about 1/3rd of 1% of startups here, including ones with millions or even billions of venture capital dollars invested, ever achieve significant success. Nevertheless, the money spigot is locked in the “on” position, because even just one of those startups taking off, achieving widespread success, can provide the VCs with a lucrative return. See Google and Facebook, Netflix, Uber, Spotify, etc., but consider that for each of those, there were hundreds of failures.

This generation of AI likely will produce significant value, as there are some very solid use cases emerging, such as language translation, code generation, image recognition, materials science, and sentiment analysis. But these practical use cases will soon not require frontier models to achieve their primary goal, user satisfaction. Language translation, for example, has been quite good for most popular languages for years, already working quite well for most users, and eking out a 1% improvement will not grow that audience meaningfully.

The fast follower models, the far cheaper, worse is better, good enough models whose makers deliver something that’s minimally sufficient to real customers, will surpass the ridiculously expensive frontier models from companies investing billions in pursuit of now diminishing returns, hoping to achieve a fantasy.

LLM AIs are not going to "wake up" and they're not going to displace the entire human work force. That's all nonsense. Further, the top models are no longer seeing drastic improvements with each newer and more costly release.

LLMs are reaching a plateau where ingesting even more data and buying even more GPUs to crunch that data is hardly moving the needle. In that light, the billions that continue to pour into these attempts is mostly wasted. Those who pivot early to building practical and user friendly features on top of far cheaper models will take the lead.

Microsoft has already signaled such an approach. They have been de-emphasizing the frontier models for a while, to focus on smaller, more targeted models that support the most practical use cases yet to emerge. They’re pulling back on new data centers and letting OpenAI and others chase perfection while they begin to tackle good enough. In this case, as with most popular software over the last 30 years or so, worse will be better.

Side by side photo of two toy robots. The photographer's description:
Bender: Scheming, cynical, and oddly charming bending unit who lives in New New York at the turn of the 31st century and who is fueled by alcohol.
Robby the Robot: Careful, strong, and loyal assistant who lives on the planet Altair IV in the early 23rd century and who can manufacture alcohol.
Bender vs. Robby the Robot (121/365) by JD Hancock and used under a Creative Commons license



What’s Up with VR and AR

In early 2014, Meta (then Facebook) acquired Oculus, a head-mounted display company that was generating buzz with a successful crowdfunding effort. Oculus was building the Rift, a screen worn on the face, tethered to a PC, that delivered “immersive” virtual reality content. Meta spent the next 5 years integrating the PC with the head-mounted display to eliminate the need to tether to a PC, and released that as the Oculus Quest, now called the Meta Quest and well into its third generation.

Meta made that purchase and invested tens of billions of dollars into the effort because Zuck had missed owning a meaningful piece of the mobile platform revolution and wanted to control (and monetize) the next generation of consumer computing device.

Photo of Mark Zuckerberg on a stage somewhere, from the waist up and featuring the Zuckster in his typical gray t-shirt with a big black head mounted display on his face with a strap over the top and another around the sides of his head holding the display in place. His arms are partially extended and holding a pair of controllers. He's got a ridiculous pursed lips look on the bottom half of his face, the part not obscured by the giant goggles protruding a few inches in front of it.
Mark Zuckerberg wearing a head mounted computer. Bloomberg photo.

Around the same time, as these things usually go because all these tech companies know most of what’s going on with the others, Apple began investing in its own head-mounted display. It took Apple almost 10 years to release theirs, the Vision Pro, and when it hit the scene about a year ago, it was indeed superior to Meta’s Quest in a number of ways, particularly in visual fidelity and style/fashion.

Tim Cook, from the centerfold spread in Vanity Fair magazine, wearing goggles with a big bulbous glass front, with his right hand pressing some buttons on the right side of the goggles or something like that. He stands in front of a desk in his office featuring a MacBook Pro and flatscreen monitor.
Tim Cook wearing a Vision Pro head mounted computer. Photo via Vanity Fair.

By the time Apple was nearing the finish line with the Vision Pro, it was increasingly clear that these face-mounted PCs weren't going to achieve mainstream success. Quest had been in the market for several years with a number of popular game titles and was nevertheless stalled out with fewer than 10 million active users (estimates are around 7 million today). That's a dismal result considering the tens of billions of dollars invested, probably the single biggest investment Meta has made; the company went so far as to change its name to reinforce its virtual reality "metaverse."

Apple’s takeaway from Quest’s failure was that Meta had chosen the wrong use cases, that games weren’t enough; for these facial computers to grow into a meaningful business, they needed to cover laptop and smartphone use cases, not video game console uses. So Apple focused its later Vision Pro efforts on turning theirs into a “productivity” device, something that was good for things like Excel and video conferencing.

Apple didn’t only push into productivity, they shunned the entire gaming proposition by excluding precision controllers needed for most games, instead pushing eye tracking and hand and finger gestures as a “more natural” experience, at least for things like email and PowerPoint. Apple’s entire sales pitch for the Vision Pro, delivered by Tim Cook from the cover and centerfold spread of Vanity Fair magazine, was that this was the next generation PC platform, the device that would replace your laptop and smartphone.

Apple made about 500,000 units of the Vision Pro, capped by the production of advanced displays from Sony. Their hope was to quickly sell those 500K units and ramp up production with their suppliers in 2025. That didn’t happen.

Apple’s planned production ramp didn’t happen because the Vision Pro sales nosedived after the fanboys got theirs. A few months after release, Apple had sold about half their inventory, but closing out their first year of sales, they still had as much as 1/3rd of those units gathering dust in warehouses and so there were no new contracts with Sony Display or any of the other Vision Pro component makers.

Many blamed that failure on the outrageous pricing, from $3,500 to $4,000 depending on specs. The Quest, they said, sold tens of millions of units because it was only about $500, and though the visual experience and hardware styling were inferior to Vision Pro, the price was right. They were wrong.

It wasn’t the price that doomed the Vision Pro. It was the form factor.

Even Quest, with 5 years in the market and pricing in line with game consoles and mid-range smartphones, stalled out in 2023 with sales declining and third-party developers bailing. It turns out, gaming is the only thing these face computers were good for. The Vision Pro had next to no games and Quest gaming had lost momentum.

The issue with these PC goggles isn’t price or use cases, it’s that they’re goggles. They are 1 to 1.5 pounds of PCs you smash into your face with straps around your head. Even discounting the double-digit percentage of users who get nauseated using them, the rest of the potential market had to deal with eye strain, head and neck pain, and red and swollen goggle prints on their faces, not to mention what it does to hair and makeup, something about half the population cares quite a bit about.

Goggles are a dead end. The only chance they had at mainstream success was as ultra-light displays for an external PC but Apple and Meta went the opposite direction, cramming ever more technology into the devices. There are display-only (well, they have some other sensors too) headsets that weigh only a quarter pound, and had Meta and Apple started there and spent their tens of billions in R&D cutting that weight in half, eliminating the head strap and making them look and feel more like fashion sunglasses with side blinders, they might have had more success.

Unfortunately for all of us, that's not the direction they took. Neither was interested in a PC accessory, a portable virtual screen for your face. They wanted to own the next generation computing platform, and that meant pushing the whole experience inside the device. Where a currently 4 oz device might have gotten down to 2 oz with their massive investments, instead the prototypes only got bigger and heavier, chasing use cases that would give their makers a platform (and app stores) to enhance their market power and wealth. In their hubris and greed, Cook and Zuck built the most expensive consumer tech products ever, and both failed to gain meaningful traction in the market because no one asked for, and no one wanted, strap-on facial PCs.

So where does that leave things? At Meta, the chief technology officer and dudebro in charge of this stuff has told his staff, in a now-leaked memo, that 2025 is the make or break year for VR, and at Apple, inside sources have told reporters that teams have already moved on from the goggles form factor. Both companies are pursuing AR (augmented reality) spectacles while pushing the false narrative that spectacles are the natural evolution of goggles, that goggles weren't an outrageously expensive mistake but a stepping stone that would evolve into something better, the way almost all of our tech gets smaller and better over time.

This is a lie. Spectacles have nothing in common with goggles. They will use entirely different tech stacks and provide a very different experience. Where goggles provide immersive environments with wide fields of view and infinite depth, creating virtual spaces that feel like real ones, spectacles will be more like your car's heads-up display: low resolution, narrow field of view, single fixed-distance overlays good for notifications, navigation, and a few other simple use cases. Think Apple Watch, or maybe CarPlay and Android Auto in your glasses, not 3D movies and realistic games that blow your mind with how real they feel.

Spectacles do have one major advantage over goggles, and it's critical. With enough investment, they have at least a small chance of mainstream success, because glasses are a form factor most of us could see ourselves using.

Goggles are approaching the end of their evolutionary path; that's what 10 years of Apple and Meta pouring in most of their effort gets you. They cannot get much better than what Apple and Meta have already built. The components, including lenses that cannot shrink much more and displays that will not improve meaningfully any time soon, will never fit in a spectacles form factor. Without the PC inside, they might get close, but never all the way there. AR spectacles, on the other hand, are at the beginning of their evolution, starting with a somewhat bulky form factor that could improve significantly over the next decade or two if companies like Apple and Meta invest the same kind of effort they did in their goggles.

Meta recently demonstrated some of these technologies in a prototype called Orion. Their form factor is similar to fashion sunglasses but about 3 times heavier. That weight can come down, and the manufacturing cost, currently about $10,000 each, can also come down, but reaching anything even remotely acceptable on those fronts is a decade or more away, according to Meta’s CTO.

Photo of the Meta Orion prototype spectacles sitting on a table next to what looks like a small remote control and a wrist strap. The glasses are very chunky with lenses that look about 4 times thicker than typical fashion sunglasses, thick frames and arms.
Meta Orion AR prototype with “puck” pocket computer and wrist band controller. Photo via Meta.

In the meantime, we'll get something closer to Google Glass, a heads-up display for your face that bombed in the market back in 2013, with users getting labeled as "glassholes."

This time around, the makers are partnering with popular consumer glasses brands like Ray-Ban and aping the style of popular models like the Wayfarer, so they won't look quite as silly as Google Glass, and they'll have cameras, microphones, internet connections, and some natural language commands, making them considerably more useful.

I have some hope that these augmented reality spectacles can achieve more success than the DOA goggles form factor, but the path there is going to be slow, unfolding over the next decade the way the goggles form factor developed between 2015 and 2025: incremental improvements to Meta's in-market products, and a secretive development process at Apple whose product pops out nearly complete but considerably later.

Leave a Comment

Apple Inc. in 2025

Thomas Hawk’s portrait of Robert Scoble wearing Apple’s Vision Pro

These are my quick and dirty, somewhat off the cuff thoughts on the state of Apple in 2025 and how it got to here.

When Steve Jobs installed his chief operating officer and supply chain expert, Tim Cook, as the new Apple CEO, he did so in part because Cook was his perfect complement. Where Jobs was a product visionary who often didn't care how a thing was made as much as what was made, Cook was the kind of guy who could put together the process to ensure it was made well. That meant dealing with the logistics, getting other businesses on board, sourcing the right components, and managing the production and business aspects of the product.

That was great for Jobs because it freed him from the nitty gritty details of bringing a (hopefully) profitable product to market and then maintaining that product and (hopefully) its lead with customers.

That was great for Apple when it had both Jobs and Cook. Without Jobs, however, Apple lost its product spark. Cook had a natural understanding of how supply chains work to increase efficiency, scale, and profitability, but none of Jobs's understanding of how humans work, so after Jobs, Apple stopped building for people and began building for the supply chain.

Jobs would start with a human problem or desire and design experiences to satisfy those humans. Cook starts with his global manufacturing and business partners and builds the best product he can within the scope of those relationships. Building for people and building for the supply chain result in very different products. Where Jobs gave us the iPhone, Cook gave us a dozen iterations on the iPhone, each more profitable than the last.

And this gets to the heart of Apple's biggest problem today. Cook took a product company selling the best possible technology products to all of us and financialized it. Where before we were the customer, today Apple stockholders are the customer. Where before Jobs gave us all great products, today Cook gives stockholders ever increasing wealth.

Apple is now the world’s most valuable company, thanks to Cook. But that value is not about us, it’s about Wall Street. Tim Cook, a glorified accountant, transformed Apple into a powerhouse but failed to put that power to use building anything new and useful for users.

About 10 years ago, Cook recognized that Apple was losing its innovation edge. Yet another iPhone version, accessory, or service wasn't going to keep Apple in front of the pack for innovation and customer excitement, so he launched two ambitious initiatives designed to change that. The first was the Apple Car, project Titan, an electric self-driving car that would revolutionize the auto industry the way the iPhone did communications. The second was the Vision Pro facial PC, back then called project N301.

The car went through several major changes in the years between its start in 2014 and its cancellation in 2024: from a consumer EV meant to compete with Tesla, to a shuttle or van, then to a set of hardware and software components other car makers might use, and finally back to a consumer vehicle. None of these approaches went very far, and after about 10 years and 10 billion dollars, Cook cancelled the project.

The Vision Pro began life before Cook's tenure, back in the mid-2000s. Jobs had patents for head mounted displays as early as 2007 and considered building a product for several years before his death. The reasons Jobs never built a head mounted display or head mounted full computer were many, but chief among them were technology limitations and form factor. Jobs understood that most people were never going to strap 1 lb of computing technology to their faces, and it wasn't possible then (and still isn't today) to build immersive experiences in a spectacles form factor.

Cook, in his absolute hubris, thought Apple’s size and influence could change that.

He’d seen Google flop with its Glass AR spectacles shortly after he became CEO, and he had his hands mostly full learning his new role and delivering iPhone updates and accessories, so it wasn’t until about 2015 that he began putting significant resources toward this new project.

As Apple started investing in augmented reality headsets around 2015, Cook understood that it would take years to bring the new computing platform to customers and that various parts of it could be prototyped in the existing product line, so the first bits were trialed on the iPhone. Those iPhone software frameworks, most importantly ARKit, were released in 2017.

ARKit was an augmented reality framework for the iPhone that supported applications like navigation, various retail experiences, and gaming, leveraging the iPhone’s existing cameras and motion sensors. Apps like Apple’s Measure, a virtual tape measure, demonstrated that cameras, without other specialized hardware, could provide real, if limited, value.

ARKit, for core functions, and RealityKit for the rendering layer, were precursors to project Borealis, which would become Vision OS, Apple’s operating system for this next generation of augmented reality computing. Combined with existing iPhone OS code and features like opaque AR overlays and spatial tracking from acquired companies like Vrvana, Apple built a powerful software platform for augmented reality devices.

By 2022, Cook had prototypes of his new facial PC and by 2023 the product was complete enough to show off at Apple’s annual developer conference. In early 2024, Apple shipped the Vision Pro and Vision OS software that powered it.

Vision Pro crammed a whole PC, plus a bunch of cameras and other sensors and high fidelity displays into a 1.5 pound device that you wore like goggles, strapped around your head and pressed into your face. This new experience was marketed by Apple as Spatial Computing.

But Apple was late to the game with Vision Pro.

In 2014, Facebook bought Oculus, a company with a head mounted display prototype that had gained significant attention through crowdfunding. That HMD evolved at Facebook from a simple display that plugged into your PC to a self-contained facial computer it called Quest.

Facebook had spent a decade focused on the web and so missed the mobile revolution and the chance to own a lucrative mobile OS and app store. Zuckerberg was intent on building and owning the next big platform, which he believed to be virtual reality headsets, so he reoriented the entire company to focus on this, even renaming Facebook to Meta. He borrowed the name from “metaverse,” coined by Neal Stephenson in his 1992 science fiction novel “Snow Crash” to describe a shared virtual world, and what Zuck hoped would be a social-first successor to the Internet, one that Facebook could own.

Quest had been in the market for about 5 years when Apple’s Vision Pro shipped. Despite having a solid collection of apps, including popular games like Beat Saber, Meta’s VR experience had pretty much topped out with fewer than 10 million active users. Cook watched that all unfold and determined that what kept Quest from the mainstream was its focus on games, that another game console, even one with exciting VR experiences, wasn’t big enough for Apple’s ambitions. With that in mind, Apple focused the Vision Pro experience on productivity, a more PC-like and less console-like experience.

Vision Pro arrived with one exciting feature Quest didn’t have (rather, didn’t have fully built out): the ability to overlay digital content on the real world. It did this by compositing live video taken by its outward facing cameras with virtual overlays, all displayed inside the goggles. Quest’s version of this feature was more about safety, with low resolution (at first black and white) pass-through that helped users not trip over their surroundings or smack other people in the face as they waved their hand held game controllers wildly. Apple’s mixed reality experience was high fidelity and gave the impression that you were looking at the real world, only with floating objects and windows at various positions and distances.

This mixed reality experience was ideally suited to productivity. Users could launch a giant screen, floating in front of them and mirroring their Mac applications while still feeling present in the real world thanks to the high fidelity pass through. Office workers, according to Apple, would abandon their laptops and work in their Excel spreadsheets, read and write emails, and video conference all inside the Vision Pro goggles. Cook even went so far as to pose on the cover of Vanity Fair magazine with a centerfold spread, interview and article that heralded spatial computing as the successor to PCs, laptops, and smartphones.

But things didn’t quite work out that way.

One year after the $3,500 Vision Pro’s release in early 2024, with Apple’s entire marketing budget and about half of its massive retail footprint and staff devoted to promotion, it had sold only a few hundred thousand units.

That’s hardly Apple scale. Realizing that such an expensive face-mounted PC was not competitive with Meta’s $500 Quest (and a recently released $300 model), Cook began work on a more affordable, perhaps $2,000 device. He cancelled plans for ramping production on the Vision Pro and shelved plans for a Vision Pro 2.

That’s an overly long accounting of why Apple missed the boat on AI. Having blown tens of billions of dollars and a decade of R&D on a failed electric car and a failing strap-on facial PC, Apple took a wait-and-see approach to the new LLM-based AI revolution. Once ChatGPT was released, Cook realized his mistake and pivoted much of Apple’s resources to integrating AI into the iPhone.

Apple would attempt to distinguish itself on privacy and user value. It moved some language models to the phone itself, and for others that could not fit on the phone, a secure remote solution was built. And rather than focusing on the chatbot experience that OpenAI, Google, and Microsoft used to take early leads in AI, Apple sought to integrate AI into existing iPhone features.

Those were welcome approaches given the emerging reputation generative AI was gaining for producing art slop and hallucinating facts.

But Apple couldn’t deliver. iPhone 16 was supposed to be the debut of Apple Intelligence, the branding they’d applied to Apple’s use of AI within iPhone. When that iPhone debuted last year, it was missing most of the promised AI features that Cook assured would arrive quickly with software updates.

Nearly a year after announcing Apple Intelligence, Cook has yet to deliver most of what was promised and the latest reporting is that those features will be delayed until no sooner than the next iPhone release. (There are consumer lawsuits pending over Apple’s misleading and outright false advertising for AI features in iPhone 16.)

Part of the problem Apple faces is that it was simply late to the party. Another part of the problem is that LLM-based AI isn’t really that good. The OpenAI, Google and Microsoft assurances that LLMs would lead to true artificial intelligence, something we should all be both excited for and afraid of (see “The Terminator” movie) have not panned out and the advancements in these language models have slowed significantly, already hitting a wall on increasing value with scaling alone.

It turns out that LLMs won’t keep getting better the more text you feed them. Obfuscating this failure, AI companies are increasingly turning to fine tuning the models, dramatically increasing the compute resources devoted to user requests, and other methods to maintain progress–and more importantly to keep investors happy and pouring ever more billions into the promised but never coming artificial super intelligence.

Despite all the effort being poured into generative AI, the machines are still fundamentally not intelligent. They are very good at producing content that seems accurate. Producing plausible-sounding content with no confidence, much less certainty, in its accuracy is called “bullshitting,” and Apple learned the hard way that a product that hallucinates facts, or as the experts say, “confabulates,” is a problem. Apple was forced to pull an iPhone AI feature that summarized news headlines after the BBC published an exposé demonstrating that the summaries were not simply incomplete but often told the opposite story from the original headlines.

And there’s the rub. Generative AI isn’t very good at most of what its proponents and expectant users want from it. Large (and smaller) language models are excellent at language tasks like translating content from English to Latvian, and that kind of thing. Training language models to be excellent at translation is not terribly difficult. I helped deliver Firefox’s web page translation capabilities a few years ago, and we built language models that were quite accurate without costing a fortune. Language models excel at language tasks, but they don’t actually know anything or have the ability to truly reason about things, so we get hallucinations and confabulations when we try to use them for things like fact retrieval and even summarization.

Apple missing the boat on AI was actually a good thing, IMO. They came late and tried to deploy simple but useful features, with careful consideration for privacy and performance, and they failed. That failure helps to demonstrate the limits of LLM-based AI and saved Apple the decade and tens of billions of dollars their competitors devoted to the technology and products built on that.

As Big Tech works to integrate LLM-based AI into every product they sell us, desperately chasing use cases that will help them recoup all those billions in R&D, Apple can slow roll AI features over the next couple of years, biding time until this AI bubble bursts and everyone moves on to the “next big thing.”

Unfortunately for Apple, strap-on facial PCs are not the next big thing. This should have been apparent from Microsoft abandoning its AR HoloLens product, Magic Leap failing to break into the consumer market, and most of all Quest’s failure to transform Facebook into more than a collection of mobile apps running on Apple and Google computers.

So where does that leave Apple?

I suspect Tim Cook’s tenure is mostly complete and he will retire before long without the product legacy he so desperately chased this last decade. He will leave Apple with the most profitable consumer technology products in the world and lots of very happy stockholders, but a reputation as a glorified accountant, not a product genius like Steve Jobs.

Apple will keep cranking out new iPhone models and adding additional services it can sell to iPhone users, but the age of Apple innovation has been over for a while and without innovation, Apple will eventually succumb to competitors that can produce new kinds of value for users. I suspect that’s decades away, given the size of Apple’s war chest, but it’s probably inevitable.

Apple had a good run, building some of the best computers, including the ones we carry in our pockets and purses, and it was first with a highly lucrative mobile app store, but attempts at innovation post iPhone and post Jobs, have mostly fallen flat. That’s too bad. The hundreds of billions of dollars Apple’s invested and will invest in tech products could have given us all something great, something post-smartphone and maybe even post-Internet. Instead we get Animojis, HomePod and a Siri that can’t even properly add calendar entries or set an alarm.

A final thought. Apple isn’t alone here. Over the last 15 years or so, Big Tech has promised the power of the blockchain and crypto currencies, the internet of things, NFTs, VR, and AI. None of this has improved our lives or computing experiences materially considering the years and hundreds of billions of dollars invested. This stagnation, to me, demonstrates two things. First, much of the tech revolution kicked off in the ’80s and ’90s has reached maturity and the companies that “won” those wars have grown large, fat and content. Second, the market dominance that Apple, Google, Microsoft, Meta, etc. have makes it extremely difficult for new competitors, even those with superior solutions, to get a foothold. The answer for both of those is anti-trust prosecution. These companies are big, powerful, dangerous, and boring. It’s time to break them all up.

Leave a Comment

Exiting WordPress.com

I am no longer willing to send my money to Automattic, the WordPress hosting company founded by Matt Mullenweg, creator of the open source WordPress web publishing platform.

WordPress is one of the best blogging platforms available and it checks most of the boxes I care about, but its founder and not-so-benevolent dictator for life, Matt Mullenweg, has turned into a complete ass.

I shared a stage with him in 2005, discussing the future of open source and participatory software development just as Firefox and WordPress were reaching critical mass, in large part because of the community-built plug-ins and extensions they offered. At the time, Matt seemed like a nice guy. He was sharp, friendly, and competent. But something has changed.

I don’t know if it was all the money he made from WordPress through Automattic, or the decades of being the WordPress top dog, or something else entirely, but he’s turned into a jerk, and a terrible lead for such important software. His threats and heavy-handedness with contributors and competitors have really turned me off, so I am no longer helping him grow that empire.

There are plenty of web hosts out there offering easy one-click WordPress installs, and they do it without cutting off access to important features like SSH, which WordPress.com reserves for its expensive enterprise tiers.

I’ve moved from Automattic’s WordPress.com hosting to an outfit called SiteGround. It costs about half as much as what I was sending to Matt’s company while also providing all the command line and other tools unavailable in the more affordable Automattic plans.

It’s unlikely that I’ll stick with SiteGround, but today I read some more things that finally got me fed up enough to make a hasty move away from WordPress.com.

For some months now, as Matt’s behavior has spiraled down hill, I’ve been thinking about moving. Today I pulled the trigger with the first acceptable solution I could find and that was SiteGround. They had a promotional offer, $36 for a full year, with a free domain and domain transfer, SSL, easy WordPress migration, CDN, and email, so I took it.

Sometime between now and when that SiteGround discount ends, I expect to get set up at my regular host, Pair Networks, and probably move off of WordPress, which I find less trustworthy considering Matt’s behavior of late. I’m looking into Ghost now, but might end up with a simpler static site generator like Hugo or Eleventy.

Once I’ve got that set up, my plan is to assemble all of the posts I’ve made at several earlier blogs going back to the early 2000s, as well as the stuff I’ve posted to Facebook and Twitter. If I get ambitious, I might pull in some forum and newsgroup posts from my pre-blogging days. We’ll see.

If you’ve got experience hosting something like Ghost or Hugo, I’d love to hear from you. The comments here are open so let me know how it’s going for you, or anything else you think I might find helpful.

Leave a Comment