I’m a few days late publishing this, but this October marks the tenth anniversary of my first day working at Mozilla. I’m on my third hardware refresh (a Dell XPS which I can’t recommend), still just my third CEO, and now 68 reorgs in.
For something as momentous as breaking into two-digit territory, there’s not really much that’s different from last year. I’m still trying to get Firefox Desktop to use Glean instead of Legacy Telemetry and I’m still not blogging nearly as much as I’d like. Though, I did get promoted earlier this year. I am now a Senior Staff Software Engineer, which means I’m continuing on the journey of doing fewer things myself and instead empowering other people to do things.
As for predictions, I was spot on about FOG Migration actually taking off a little — in fact, it took off quite a lot. All data collection in Firefox Desktop now either passes through Glean to get to Legacy Telemetry, has Glean mirroring alongside it, or has been removed. This is in large part thanks to Florian Quèze and his willingness to stop asking when we could start and to just migrate the codebase. Now we’re working on moving the business data calculations onto Glean-sent data, and getting individual teams to change over too. If you’re reading this and were looking for an excuse to remove Legacy Telemetry from your component, this is your excuse.
My prediction that there’d be an All Hands was wrong. Mozilla Leadership has decided that, in the current political climate, the US is neither a place they want to force people to travel to nor a place they want to force people to travel out of (and then need to attempt to return to). This means that business gatherings of any size are… complicated. Some teams have had simultaneous summits in cities both within and without the US. Some teams have had one side or the other call in virtually from their usual places of work. And our team… well, we’ve not gathered at all. Which is a bummer, since we’ve had a few shuffles in the ranks and it’d be good to get us all in one place. (I will be in Toronto with some fellow senior Data Engineering folks before the end of the year, but that’s the extent of work travel.) I’m broadly in favour of removing the requirement and expectation of travel over the US border — too many people have been disappeared in too many ways. We don’t want to make anyone feel as though they have to risk it. But it seems as though we’re also leaning away from allowing people to risk it if they want to, which is a level of paternalism that I didn’t want to see.
I did have one piece of “work” travel in that I attended CSV Conf in Bologna, Italy. Finally spent my Professional Development budget, and wow what a great investment. I learned so much and had a great time, and that was despite the heat and humidity (goodness, Italy. I was in your North (ish). In September. Why you gotta 30°C me like this?). I’m on the lookout for other great conferences to attend in 2026, so if you know any, get in touch.
My prediction that I’d still be three CEOs in because the search for a new one wouldn’t have completed by now: spot on. Ditto on executing my hardware refresh, though I’m still using a personal monitor at work. I should do something about that.
My prediction that we’d stop putting AI in everything has partially come true. There’s been a noticeable shift away from “Put genAI in it and find a problem for it to (maybe) solve” towards “If you find a problem that genAI can help with, give it a try.” You wouldn’t necessarily notice it looking at feature announcements for Firefox, as quite a lot of the integration infrastructure landed in the past couple of months, making headlines. My feelings on LLMs and genAI have gained layers and nuance since last year. They’re still plagiarism machines that are illegally built by the absolute worst people in ways that worsen the climate catastrophe and entrench existing inequalities. But now they’ve apparently become actually useful in some ways. I’ve read reports from very senior developers about use cases that LLMs have been able to assist with. They are narrow use cases — you must only use it to work on components you understand well, you must only use it on tasks you would do yourself if you had the time and energy — but they’re real. And that means my usual hard line of “And even if you ignore the moral, ethical, environmental, economic, and industry concerns about using LLMs: they don’t even work” no longer applies. And in situations like a for-profit corporation led by people from industry… ignoring the moral, ethical, environmental, economic, and industry concerns is de rigueur.
Add these to the sorta-kinda-okay things LLMs can do, like natural language processing and aiding in the training and refinement of machine translation models, and it looks as though we’re figuring out the “reheat the leftovers” and “melt butter and chocolate” use cases for these microwave ovens.
It still remains to be seen if, after the bubble pops, these nuclear-powered lake-draining art-stealing microwaves will find a home in many kitchens. I expect the fully-burdened cost will be awfully prohibitive for individuals who just want one to poorly regurgitate Wikipedia articles in a chat interface. It might even be too spicy for enterprises who think (likely erroneously) that LLMs confer some instantaneous and generous productivity multiplier. Who knows.
All I know is that I still don’t like it. But I’ll likely find myself using one before the end of the year. If so, I intend to write up the experience and hopefully address my blogging drought by publishing it here.
Another thing that happened this year that I alluded to in last year’s post was the Google v DOJ ruling in the US. Well, the first two rulings anyway. Still years of appeal to come, but even the existing level of court seemed to agree that the business model that allows Mozilla to receive a bucketload of dollabux from Google for search engine placement in Firefox (aka, the thing that supplies most of my paycheque) should not be illegal at this time. Which is a bit of a relief. One existential threat to the business down… for now.
But mostly? This year has been feeling a little like 2016 again. Instead of The Internet of Things (IoT, where the S stands for Security), it’s genAI. Instead of Mexico and Muslims it’s Antifa and Trans people. The Jays are in the postseason again. Shit’s fucked and getting worse. But in all that, someone still has to rake the leaves and wash the dishes. And if I don’t do it, it won’t get done.
With that bright spot highlighted, here are my predictions for the new year:
- I will requisition a second work monitor so I stop using personal hardware for work things.
- FOG Migration (aka the Instrumentation Consolidation Project) will not fully remove all of Legacy Telemetry by this time next year. There’s evidence of cold feet on the “change business metrics to Glean-sent data” front, and even if there weren’t, there’s such a long tail that there’s no doubt something load-bearing that’d delay things to Q4 2026. I _am_, however, predicting that FOG Migration will no longer be all-encompassing work — I will have a chance to do something else with my time.
- I predict that one of the things I will do with that extra time is push for a sensible user population measurement, since MoCo insists on having a user population measurement KPI. Measuring the size of the user population by counting distinct _profiles_ we’ve _received_ a data packet from on a day (not the day the data was collected)? We can do better. (There’s a little sketch of the difference after this list.)
- I don’t think there’s going to be an All Hands next year. If there is, I’d expect it to be Summit style: multiple cities simultaneously, with video links. Fingers crossed for Toronto finally getting its chance. Though I suppose if the people of the US rose up and took back their country, or if the current President should die, that could change the odds a little. Other US administrations, on both sides of the aisle, saw the benefit of freedom of movement.
- Maybe the genAI bubble will have burst? Timing these things is impossible at the best of times, and this is the first time in history that this much of the US’ (and the world’s) economy has been inflating a single bubble. The sooner it bursts, the better, as it’s only getting bigger. (I suppose an alternative would be for the next shiny thing to happen along and the interest in genAI to dwindle more slowly with no single burst, just a bunch of crashes. Like blockchain/web3/etc. In that case a slower diminishing would be better than a sooner burst.)
- I predict that a new MoCo CEO will have been found, but not yet sworn in by this time next year. I have no basis for this prediction: vibes only.
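About that user population bullet: here’s the tiny toy sketch I promised. The profile IDs, dates, and the shape of the data are entirely made up (this is not our real schema or pipeline), but it shows how counting profiles by the day we _received_ their data differs from counting them by the day the data was _collected_.

```python
# Toy data only: profile IDs, dates, and this "ping" shape are made up,
# not the real schema or pipeline.
from collections import defaultdict

pings = [
    # (profile_id, collected_on, received_on)
    ("profile-a", "2026-01-01", "2026-01-01"),
    ("profile-b", "2026-01-01", "2026-01-03"),  # client offline, upload delayed
    ("profile-c", "2026-01-02", "2026-01-03"),
]

by_received = defaultdict(set)   # roughly the measurement we have today
by_collected = defaultdict(set)  # roughly the measurement I'd rather we have
for profile_id, collected_on, received_on in pings:
    by_received[received_on].add(profile_id)
    by_collected[collected_on].add(profile_id)

print({day: len(profiles) for day, profiles in sorted(by_received.items())})
# {'2026-01-01': 1, '2026-01-03': 2} -- delayed uploads pile onto the wrong day
print({day: len(profiles) for day, profiles in sorted(by_collected.items())})
# {'2026-01-01': 2, '2026-01-02': 1} -- activity counted on the day it happened
```

Real clients delay uploads for all sorts of reasons (shutdowns, offline laptops, whole weekends away from the machine), which is exactly why the two counts drift apart.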
To another year of supporting the Mission!
:chutten









