aw, christine dug up this old hn comment on the scuba whitepaper. <3 yep, scuba was exactly this transformative for us at parse, and that experience led straight to us starting honeycomb.
fun historical note: scuba was actually hacked together in order to claw their way out of mysql hell. that's where it got so many of its unique qualities -- arbitrarily wide events, high cardinality dimensions, etc. all the shit you gotta have for debugging db queries.
it was only after that they were like ... hey, this kinda works for other stuff too huh... 🤜🤛
it's so fascinating to me how out of left field that origin story is. explains why it shares so little DNA with the rest of the monitoring ecosystem, doesn't it?
I devoured the recap @martinfowler posted from the deer valley summit. Loved it.
But the notes suggest we may be replicating a perennial blind spot in software engineering: treating code like the outcome, and production like an afterthought.
Formal methods and test suites are like flight simulators.
Flight simulators are genuinely impressive. Airlines use them, pilots log real hours, they catch real failure modes. Nobody's saying skip the simulator.
But a pilot who has only flown simulators is not a pilot.
I woke up this am, scanned Twitter from bed, and spent an hour debating whether I could muster the energy to respond to the latest breathless fatwa from Paul Graham.
I fell asleep again before deciding; just as well, because @clairevo said it all more nicely than I would have.
(Is that all I have to say? No, dammit, I guess it is not.)
This is so everything about PG in a nutshell, and why I find him so heartbreakingly frustrating.
The guy is brilliant, and a genius communicator. He's seen more and done more than I ever will, times a thousand.
And he is so, so, so consistently blinkered in certain predictable ways. As a former fundamentalist, my reference point for this sort of conduct is mostly religious.
And YC has always struck me as less an investment vehicle, much more a cult dedicated to founder worship.
Important context: that post was quote tweeting this one.
Because I have also seen designers come in saying lovely things about transformation and user centricity, and end up wasting unthinkable quantities of organizational energy and time.
If you're a manager, and you have a boot camp grad designer who comes in the door wanting to transform your org, and you let them, you are committing professional malpractice.
The way you earn the right to transform is by executing consistently, and transforming incrementally.
(by "futureproof" I mean "true 5 years from now whether AI is writing 0% or 100% of our lines of code")
And you know what's a great continuous e2e test of your team's prowess at learning and sensemaking?
1. regularly injecting fresh junior talent
2. composing teams from a range of levels
"Is it safe to ask questions" is a low fucking bar. Better: is it normal to ask questions, is it an expected contribution from every person at every level? Does everyone get a chance to explain and talk through their work?
The advance of LLMs and other AI tools is a rare opportunity to radically upend the way we talk and think about software development, and change our industry for the better.
The way we have traditionally talked about software centers on writing code, solving technical problems.
LLMs challenge this -- in a way that can feel scary and disorienting. If the robots are coming for our life's work, what crumbs will be left for you and me?
But I would argue that this has always been a misrepresentation of the work, one which confuses the trees for the forest.
Something I have been noodling on is, how to describe software development in a way that is both a) true today, and b) relatively futureproof, meaning still true 5 years from now if the optimists have won and most code is no longer written by humans.