Discussion about this post

Kenny Easwaran

The main thing that I think is still a disagreement is that no matter how skilled artificial intelligences get, there will still be some things they do that look incredibly dumb to us and that are in fact bad for them. I don't think Elon Musk is dumb, but there are certain kinds of dumb things he keeps doing, like falling for vague and thoughtless conspiracy theories just because they appeal to his idiosyncrasies in particular ways. Different AI systems are likely to have different sorts of weaknesses, and AI systems that are sufficiently in control of what they are expected to do will likely learn methods for avoiding the kinds of things they do badly at (just as people learn to turn on the lights when entering dark rooms, and to simply avoid certain kinds of problems they aren't good at - I try not to get involved in popularity contests). But a system that is trying to do something really big is going to get itself tangled up in some things that it is just very bad at.

I don't think this eliminates AI existential risk - it just means that it's likely to look weirder than most depictions, in which the AI smoothly solves every problem that comes its way.

Matt Heard

Regarding the METR study, I agree with all of your points. I'm a former software developer turned manager, and I feel like my personal projects are 10x because of AI, probably because (1) I don't have as much recent hands-on coding experience as my reports and am therefore rusty when not amplified by AI, and (2) managerial skills are super helpful when dealing with AI systems with spiky capabilities. I have also been actively building my experience with these tools for years, whereas I see other, more skeptical developers get only small boosts after brief trials of tools further from the frontier. Familiarity with where the capability spikes are improves productivity.

And to your last point, I actively engage with unproductive workflows that cost me development throughput in exchange for learning more quickly about the specific boundaries of AI systems' capabilities. In the long run, I think it's better to gain experience by fighting your way through a hard problem with an AI system. You will fail many times and feel like you've wasted time (otherwise you're not at the boundaries), but when you finally get it to do something it's never done before, it really feels like breaking new ground. That kind of stubbornness is very important, in my opinion.

Another great post. Thanks for sharing.
