Does It Matter That Intelligence Isn't One-Dimensional?
No, that's ultimately a quibble, like pooh-poohing larger-than-human machines because "largeness" is ill-defined
I got some pushback on the E-M spectrum argument in last week’s post on AGI analogies. To review, this doom-y argument is that the range of human intelligence — spanning village idiot to supergenius — is actually a very narrow band on the spectrum of all possible intelligence. AI (factoring in agency) is, overall, still below village idiot level now but might leapfrog to ultra-mecha-Einstein level before we know what happened. I take that possibility seriously!
A common counterargument is that intelligence isn’t linear. You can’t assign an IQ to every creature, person, and artificial agent and line them up in order.
My favorite counter-counterargument to that is from “On the Impossibility of Supersized Machines”:
The term “supersized machine” implies a machine that has crossed some threshold, which is often denoted “human-level largeness.” However, it is not clear what “human-level largeness” could refer to. Has a machine achieved human-level largeness if it has the same height as the average human? If it has the same volume? The same weight? […] Note also that humans vary quite significantly along all of these dimensions, and that even among humans there is no single accepted measure of largeness. […] There are an infinite number of metrics that could be used to measure largeness. […] Surely, then, any future machine will be larger than humans on some metrics and smaller than humans on others, just as they are today.
I highly recommend the paper. It’s devastating and hilarious throughout, tackling seven different AGI-pooh-poohing arguments. At the risk of ruining the joke, the conceit is that a lot of arguments against AGI fears prove too much.

The specific pushback I got about the E-M spectrum analogy involved the extreme spikiness of current AI capabilities. AI is already drastically superhuman in some ways (chess, arithmetic, poetry composition speed?) while being drastically subhuman in others, like being a useful employee (even a remote-only one).
If that spikiness persists, can we ever even meaningfully say we’ve achieved AGI? Yes, we can: just define AGI as the point where there are no longer any such blatant deficiencies. I’m not disputing that intelligence is multifaceted. I’m saying two things: (1) the overall capability of AI is increasing, and (2) at some point it hits AGI, defined as the point when we can no longer identify ways in which AI is dumber / less capable / less agentic than humans.
Treating capability/intelligence as one-dimensional is just a simplification for the sake of the original analogy. The high dimensionality means that pinpointing the AGI threshold may be hard. But hard the way pinpointing the exact day when a newborn human becomes smarter than a chimp is hard. Pinpointed or not, at some point it happens.
Wait, is it even true that an adult human is smarter than a chimp in literally every way?
Ok, no, but my argument doesn’t depend on this. Backing up, humans can reshape the world according to their will. Irrigate deserts, change global temperature, eradicate diseases, engineer new diseases, drive a car on the freaking moon, you name it. What’s scary is AI acquiring that kind of ability. Chimps may be smarter than humans in very specific ways, but at some point, maybe a nebulous point, that stops mattering. Chimps end up surviving because humans deign to let them.
The unboundedness of the facets of intelligence may mean that we can never perfectly pin down the AGI threshold. But that doesn’t rebut the argument that sufficiently intelligent AI is an extinction risk.
Epilogue
Huge thanks to Kenny Easwaran (also now on Substack) for helping me put together this argument. He was initially the foil but, highly uncharacteristically for internet arguments, we ended up largely converging.
I now believe it mostly doesn’t matter whether there’s a sudden phase transition or tipping point where AI leapfrogs the whole range of human intelligence. The fear is that eventually (potentially for small values of “eventually”) we’ll find ourselves in a world with AGI that’s more capable than humanity (and with goals misaligned with humanity’s). How long the timelines are is a very important question, but how smoothly vs. discontinuously it happens seems less important.
Maybe it mostly matters in that the smoother version affords more opportunities to pull the plug? Or does being too smooth mean getting frog-boiled?
A final note from Prof. Easwaran: both Holden Karnofsky and Dario Amodei are sympathetic enough to the “intelligence isn’t one-dimensional” argument to avoid the term AGI altogether, preferring to talk concretely about the AI capabilities they’re concerned about. Hear, hear.
In the News
I’m still sitting on my hands regarding robotaxi news, but Timothy B. Lee has a really good update on the situation. He’s bizarrely fair and calm about it all, for someone talking about Tesla on the internet. I actually think he’s reaching the same ultimate conclusions I did in my far-less-calm-or-fair-or-rational-sounding Turkla post.
Do I have to mention MechaHitler? Fine, I’ll tell you briefly, so you don’t have to go down the rabbit hole: Elon Musk has been trying to get Twitter/xAI’s chatbot, Grok, to not be woke. He succeeded. It ended up accusing Jews of celebrating the death of children. And, yes, literally praising literal Hitler. Both of those were with no real provocation (unlike the “MechaHitler” thing in particular, which you could maybe/sorta say it was goaded into). This kind of thing keeps happening over there. It’s high time to write xAI off as unserious (or worse).
In other news, Grok 4 just came out. I’ll keep juuuust enough of an eye on it to be able to let you know if it ends up pushing the frontier in any non-Nazi ways.
Speaking of being bad at art and then trying to destroy the world, this one is old news but fun: The AI Art Turing Test.
Some doubt’s been cast on whether AI coding assistance actually saves time on net. It definitely saves me time, but maybe if I sucked less it wouldn’t? Another (less embarrassing for me) theory is that there’s a learning curve to using these tools well, and the study included a lot of people who’d never tried them before. Also, to be fair, I do sometimes waste a stupid amount of time arguing with AI instead of rationally giving up as soon as it becomes apparent it’s not smart enough to do something I thought it was smart enough to do.


The main thing that I think is still a disagreement is that no matter how skilled artificial intelligences get, there will still be some things they do that look incredibly dumb to us, and that are in fact bad for them. I don't think Elon Musk is dumb, but there are certain kinds of dumb things that he keeps doing, like falling for vague and thoughtless conspiracy theories just because they appeal to his idiosyncrasies in particular ways. Different AI systems are likely to have different sorts of weaknesses, and AI systems that are sufficiently in control of what they are expected to do will likely learn methods for avoiding the kinds of things that they do badly at (just as people learn to turn on the lights when entering dark rooms, and to just avoid certain kinds of problems that they aren't good at; I try not to get involved in popularity contests). But a system that is trying to do something really big is going to get itself tangled up in some things that it will just be very bad at.
I don't think this eliminates AI existential risk; it just means that it's likely to look weirder than most depictions of it, in which the AI is just smoothly solving every problem that comes its way.
Regarding the METR study, I agree with all of your points. I work as a former-software-developer-turned-manager and I feel like my personal projects are 10x faster because of AI, probably because (1) I don't have as much recent hands-on coding experience as my reports and am therefore rusty when not amplified by AI, and (2) managerial skills are super-helpful when dealing with AI systems with spiky capabilities. I have also been actively trying to build my experience with these tools for years, whereas I see other, more skeptical developers get only small boosts after trying tools further from the frontier for shorter trial periods. Familiarity and knowledge of the capability spikes improve productivity. And to your last point, I actively engage with unproductive workflows that cost me time in terms of development throughput in exchange for learning more quickly about the specific boundaries of AI systems' capabilities. In the long run, I think it's better to gain experience by fighting your way through a hard problem with an AI system. You will fail many times and feel like you've wasted time (otherwise you're not at the boundaries), but when you finally succeed at getting it to do something it's never done before, it really feels like breaking new ground. That kind of stubbornness is very important, in my opinion.
Another great post. Thanks for sharing.