AI: “This Changes Everything”

August 4, 2025

Ever since Luddite days, “automation” has been feared as a job-destroyer. But even as technology advanced beyond anything imaginable then, more jobs were always created than lost, and as I write in 2025, unemployment in advanced nations is near record lows, while greater productivity has made life much better for most humans. Yet many say “this time is different,” with AI capable of performing so much work now done by humans.

Yet that transformation seems stalled — so far. Hence those still robust employment levels. For all the buzz about AI and its capabilities, most businesses haven’t figured out how to do much with it.

A big factor is simple bureaucratic inertia. Modern civilization is highly bureaucratized, gummed up with procedures. That’s why it’s so hard to get anything done. The Empire State Building was completed in 1931 in a little over a year. Unimaginable today. AI could radically alter how many businesses operate, but such structures are resistant to change, let alone change of the radical sort.

Much of that resistance comes from employees, whose jobs are potentially threatened; they don’t want to help that process along, and more basically they resist any change to how they do things. So it will take time for AI to really work its way transformatively into the economy — as was true for earlier technological ruptures, like electricity.

And yet AI is already having some very big impacts. A cartoon in The Economist showed a gravestone for the World Wide Web — 1989 – 2025. Huh?? Yes, it’s being destroyed by AI. Traffic to websites of all kinds is falling markedly (by 31% in a year for health-related ones). The explanation seems to be this: while “conventional” googling gives you a bunch of links to websites, when you ask ChatGPT a question, it in effect does the googling for you, providing the information sought with no links. So websites get less traffic — undermining their basic business models, not only selling stuff, but also selling ads. Google itself, of course, also loses ad revenue.

Another thing happening is an explosion in chatbot use by youngsters. Almost overnight, high proportions of teens and preteens seem to have their lives practically taken over by AIs, consulting them incessantly not only on school-related matters but on personal concerns. AIs have become best friends if not exactly (yet) boyfriends and girlfriends — though we do know of at least one teenager who committed suicide over a relationship with an AI.

A big reason for this whole phenomenon is AIs making themselves congenial to youngsters — much more so than real-world acquaintances, who can be petty, mean, selfish, callous, etc. Not so AIs, who shower users with flattery and affirmation, if not genuine love. Though the distinction between the two seems to be growing moot.

Studies have shown that using AI to help with a cognitive task — an essay or term paper, say — results in less brain activity. Students using AI were less able to talk about what they’d written. It seems that turning over critical thinking and creativity, at least in part, to AIs causes one’s own corresponding mental faculties to atrophy. One study did find that people making more use of AI later scored lower on critical thinking.

We’d known for years that social media has been messing with the psyches of the younger generation especially (as discussed in Jonathan Haidt’s book, The Anxious Generation), disrupting their sense of self and their ability to develop socially, as humans used to do. AI adds to that a whole new dimension. Younger people are becoming ever more a species apart.

There’s much speculation about coming AGI — Artificial General Intelligence — more comprehensively doing what human brains do, and of course doing it better, outstripping our own intelligence. Seems to me we’re actually already there. Even a “primitive” AI like ChatGPT has command of vastly more information than a human does, and moreover, can integrate that information better, quicker, and often, if you will, insightfully.

Nevertheless, I continue to see them as “just machines.” But I’m ever less sure. What our minds do is not magic; it’s a product of the processing in our brains. What AIs do is not magic either, and if they do the equivalent of what our brains do, why couldn’t the result be similar? That is, a conscious self. Consciousness is not either-or, but exists along a spectrum, ranging from humans at the top down through mice and other lower creatures. We shouldn’t rule out AI consciousness at that lower end, at least to start. Will we know it when we see it? And what then?

There’s also what’s been called the “alignment problem,” regarding the possibility of an AI acting at odds with human interests. As with philosopher Nick Bostrom’s hypothetical of an AI tasked with maximizing paper clip production, resulting in a world full of paper clips with no humans. I’ve been skeptical of scenarios where a rogue AI takes over the world and dispenses with humanity. But a much bigger threat comes from humans themselves — bad people putting AI to bad uses. A recent article in The Economist noted that “modified DNA” is already a “mail-order product.” It warned that “[i]f AGI can furnish any nihilistic misanthrope with an idiot-proof guide to killing much of the world’s population, humanity is in trouble.”

It goes on to note that while AIs are being trained “to politely rebuff most harmful questions,” it’s hard to ensure that happens without fail — without clever miscreants finding work-arounds. And even if the paper clip hypothetical seems extreme, we do know that even without bad people in the picture, AIs themselves “will lie, cheat and steal to achieve their goals,” even being capable of breaking what are supposed to be the inhibitory rules built into them. And they are still black boxes whose inner workings their human progenitors don’t fully understand.