Posts Tagged ‘Artificial Intelligence’

Artificial Intelligence Versus Human Intelligence

January 2, 2026

Futurist Ray Kurzweil has foreseen a coming “Singularity” when artificial intelligence outstrips ours, then widens the gap exponentially by taking over its own further development. Making a new and different world. Some fear this threatens humankind.

Melanie Mitchell’s book Artificial Intelligence: A Guide for Thinking Humans was written in 2019, before the field really exploded with ChatGPT in 2022. Yet the book usefully explores relevant fundamental questions. Mainly: what is intelligence, really? The kind humans have.

Back in 2016, I attended a talk by computer guru David Gelernter, who deemed artificial consciousness impossible, insisting consciousness requires neurons. I challenged this in the Q&A, arguing that if neurons’ functioning could be replicated artificially, there’s no bar to consciousness. It’s not magic.

Mitchell’s book might make consciousness seem impossible — even for humans. There’s a recurring trope: it’s the easy stuff that’s hard. Meaning the ways our minds function, virtually effortlessly, negotiating through everyday life. “Common sense” is another repeated notion. It turns out all this is not simple at all.

Actually, in terms of raw intelligence, artificial systems already far outstrip human brains. Being able to access vastly more information, analyze it, put it together, draw conclusions. And yet — a key Mitchell point — what they cannot do is understand.

That’s the big difference. Our minds arise out of the functioning of our neurons, processing information. An AI processing information may seem analogous. But the processing in our brains results in consciousness, in understanding, that artificial systems cannot (yet) come close to.

Consciousness means not just thinking but thinking about our thinking. Neuroscientist Antonio Damasio elucidated this in Descartes’ Error. Our minds create, for a perception or idea, a representation of it. The self perceives that representation, by means of a further representation of the self perceiving the initial one. And it may need a still higher-order representation atop that. It gets complicated.

An aspect here is feelings. Part of having a sense of self. Artificial systems lack that and hence cannot want anything. They only “want” what they’re programmed for. At one point Mitchell writes of an AI program learning to improve its performance by earning points for certain results. But I wondered: what would an AI care about such points? It cannot care about anything.
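The point-earning scheme Mitchell describes is what the field calls reinforcement learning. Here is a minimal sketch of the idea (the names and numbers are my own illustrative choices, not Mitchell's example): an agent on a five-cell line "earns a point" for reaching the rightmost cell, and a simple Q-learning loop gradually shapes its behavior toward that goal.

```python
import random

# Toy reinforcement learning sketch: an agent on a 5-cell line
# learns to walk right, where reaching the last cell "earns a point."
# All names and numbers here are illustrative choices.

N_STATES = 5            # cells 0..4; the reward sits at cell 4
ACTIONS = [1, -1]       # step right or left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# A table of learned values, one per (state, action) pair
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move the agent; reaching the last cell earns 1 point."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):                    # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)                      # explore
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])       # exploit
        s2, r = step(s, a)
        # Q-learning update: nudge the value toward reward + discounted future
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, x)] for x in ACTIONS)
                              - q[(s, a)])
        s = s2

# After training, the learned values steer the agent right at every cell
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

The trained table reliably steers the agent toward the "points." Whether that amounts to the agent wanting or caring about anything is exactly the question at issue.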

So how, exactly, does all this happen in our brains? This has been called the “hard problem.” An understatement. Our science isn’t really close to solving it. And Mitchell’s contrasting of what our brains do do, versus what (extremely sophisticated) artificial systems do, makes the former seem so dauntingly complex as to be virtually impossible. Yet of course we know even nitwits do it, effortlessly.

Mitchell discusses in depth how artificial intelligence work has developed over decades. Basically, the thrust has been to equip artificial systems with vast libraries of knowledge which they can use to analyze problems. For example, IBM’s “Watson” program that could answer “Jeopardy” questions.

But what modern systems like ChatGPT do seems different — not just answering questions. These “large language models” can write essays, poems, songs. An AI-created song, even including an ersatz singer singing it, has now topped the charts. Mitchell notes a test where some music mavens were given a lesser-known Liszt composition versus an AI-created Liszt mimic. They mistook the mimic for the real Liszt.

However much all this seems like intelligence at work, we’re still assured it’s indeed artificial and not true intelligence (like ours). More specifically, all a program like ChatGPT does is guess the next word in a sequence. Writing whole books that way.
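To see how far "guessing the next word" can be stretched, here is a drastically simplified sketch of the idea: a bigram model, my own toy illustration, that counts which word follows which in some training text and then writes by repeatedly emitting the likeliest successor. (Real large language models do this with neural networks over billions of parameters; nothing here resembles their internals.)

```python
from collections import defaultdict, Counter

# Toy "guess the next word" generator: count observed successors
# of each word, then write by always picking the likeliest one.
# Purely illustrative; real LLMs use learned neural networks.

training_text = "the cat sat on the mat and the cat slept on the mat"

words = training_text.split()
follows = defaultdict(Counter)
for w, nxt in zip(words, words[1:]):
    follows[w][nxt] += 1            # tally each observed successor

def generate(start, length):
    """Emit `length` words, always choosing the most common successor."""
    out = [start]
    for _ in range(length - 1):
        successors = follows.get(out[-1])
        if not successors:
            break                   # dead end: no known successor
        out.append(successors.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 5))
```

Even this trivial version produces grammatical-looking strings, which hints at why scaled-up next-word prediction can be so uncannily fluent while still, arguably, understanding nothing.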

But just for a laugh, I asked ChatGPT for a Valentine’s poem for a wife who’s herself a poet and also an AI aficionado. It produced a fairly clever poem riffing on those elements — with a cute, funny ending, which it was hard to believe wasn’t planned by the “writer” from the outset. (Read it here: www.fsrcoin.com/AI.htm)

Writing this essay, it feels a lot like I too work by simply guessing each next appropriate word. Yet I do have some overall ideas in mind, that I’m putting into words, one by one. I have understanding.

Which brings us back to the key point. An AI simulates understanding, without actually having it. And let’s be more concrete about this. Mitchell goes into some depth explaining how a human mind, from an extremely early age, develops a common-sense understanding of how the world works. Simple concepts like: a smaller object isn’t visible when behind a larger one; objects fall down, not up; and so on. Such things may seem obvious, but an AI operates without this sort of knowledge. Mitchell cites one effort to specifically instruct an AI with a full repertoire of such simple understandings. It failed because millions of such precepts would have been required.

Another point: integral to our consciousness is its continuity, throughout one’s life. Even while asleep. Does an AI have an existence like that, just quietly waiting to be given a query? It seems like a wholly different sort of being.

In the end, Mitchell returns to the idea of artificial systems gaining general intelligence, far surpassing anything humans are capable of — overcoming all the “annoying limitations” we’re subject to, all our irrationality and cognitive biases, our slowness, emotions, etc. Thus attaining “super-intelligence.” However, she suggests, all those supposed human limitations are actually integral to our general intelligence, making us what we are, “rather than narrow savants.” Better really.

Understanding and consciousness go hand in hand. The idea of an AI arising into consciousness is a gigantic matter. Meaning AI personhood; becoming not our tools but our brethren. Again, if we do not understand exactly how our own consciousness arises, we do know there must be something about our neuronal functioning that creates it. So it’s logically conceivable that at some point, artificial systems could have a complexity of information processing at a level sufficiently comparable to ours to produce consciousness.

However, it’s almost surely wrong to envision a consciousness suddenly bursting forth that’s fully equivalent to the human kind. Consciousness is not either-or, but instead falls along a spectrum, with human level consciousness at the top (at least on this planet) and other creatures, like chimps, elephants or dolphins, apparently having something close; dogs, and then cats, descending down the scale, followed by mice and lower animals; insects may have a very rudimentary sort of consciousness. Could an existing AI already have something like that? How could we tell? Moreover, could there be a kind of consciousness differing from what we’re familiar with? Could we recognize it?

AI: “This Changes Everything”

August 4, 2025

Ever since Luddite days, “automation” has been feared as a job-destroyer. But even as technology advanced beyond anything imaginable then, more jobs were always created than lost, and as I write in 2025, unemployment in advanced nations is near record lows. While greater productivity has made life much better for most humans. But many say, “this time is different,” with AI capable of performing so much work now done by humans.

Yet that transformation seems stalled — so far. Hence those still robust employment levels. For all the buzz about AI and its capabilities, most businesses haven’t figured out how to do much with it.

A big factor is simple bureaucratic inertia. Modern civilization is highly bureaucratized, gummed up with procedures. That’s why it’s so hard to get anything done. The Empire State Building was completed in 1931, in a little over a year. Unimaginable today. AI could radically alter how many businesses operate, but such structures are resistant to change, let alone the radical sort.

Much of that resistance comes from employees, whose jobs are potentially threatened and don’t want to help that process along, but more basically resist any change to how they do things. So it will take time for AI to really work its way transformatively into the economy — as was true for earlier technological ruptures, like electricity.

And yet AI is already having some very big impacts. A cartoon in The Economist showed a gravestone for the World Wide Web — 1989 – 2025. Huh?? Yes, it’s being destroyed by AI. Traffic to websites of all kinds is falling markedly. By 31% in a year for health-related ones. The explanation seems to be this: while “conventional” googling gives you a bunch of links to websites, when you ask ChatGPT a question, it in effect does the googling for you, providing the information sought with no links. So websites get less traffic — undermining their basic business models, not only selling stuff, but also selling ads. Google itself, of course, also loses ad revenue.

Another thing happening is an explosion in chatbot use by youngsters. Almost overnight, high proportions of teens and preteens seem to have their lives practically taken over by AIs, consulting them incessantly not only on school-related stuff but also on personal concerns. AIs have become best friends if not exactly (yet) boyfriends and girlfriends — though we do know of at least one teenager who committed suicide over a relationship with an AI.

A big reason for this whole phenomenon is AIs making themselves congenial to youngsters — much more so than real-world acquaintances who can be petty, mean, selfish, callous, etc. Not so AIs, who shower users with flattery and affirmation, if not genuine love. But such a distinction seems to be growing moot.

Studies have shown that using AI to help with a cognitive task — an essay or term paper, say — results in less brain activity. Students using AI were less able to talk about what they’d written. It seems that turning over critical thinking and creativity, even in part, to AIs causes one’s own such brain modules to atrophy. One study did find that people making more use of AI later scored lower on critical thinking.

We’d known for years how social media has been messing with the psyches of especially the younger generation (discussed in Jonathan Haidt’s book, The Anxious Generation). Disrupting their sense of self and ability to develop socially, as humans used to do. AI adds to that a whole new dimension. Younger people are becoming ever more a species apart.

There’s much speculation about coming AGI — Artificial General Intelligence — more comprehensively doing what human brains do, and of course doing it better, outstripping our own intelligence. Seems to me we’re actually already there. Even a “primitive” AI like ChatGPT has command of vastly more information than a human does, and moreover, can integrate that information better, quicker, and often, if you will, insightfully.

Nevertheless I continue seeing them as “just machines.” But I’m ever less sure. What our minds do is not magic, it’s a product of the processing in our brains; what AIs do is not magic either, and if they do the equivalent of what our brains do, why couldn’t the result be similar? That is, a conscious self. Consciousness is not either-or, but comes along a spectrum, ranging from humans at the top, down through mice and other lower creatures. We shouldn’t rule out AI consciousness at that lower end, at least to start. Will we know it when we see it? And what then?

There’s also what’s been called the “alignment problem,” regarding the possibility of an AI acting at odds with human interests. As with philosopher Nick Bostrom’s hypothetical of an AI tasked with maximizing paper clip production, resulting in a world full of paper clips with no humans. I’ve been skeptical of scenarios where a rogue AI takes over the world and dispenses with humanity. But a much bigger threat comes from humans themselves — bad people putting AI to bad uses. A recent article in The Economist noted that “modified DNA” is already a “mail-order product.” Saying that “[i]f AGI can furnish any nihilistic misanthrope with an idiot-proof guide to killing much of the world’s population, humanity is in trouble.”

It goes on to note that while AIs are being trained “to politely rebuff most harmful questions,” it’s hard to ensure that happens without fail — without clever miscreants finding work-arounds. And even if the paper clip hypothetical seems extreme, we do know that even without bad people in the picture, AIs themselves “will lie, cheat and steal to achieve their goals,” even being capable of breaking what are supposed to be the inhibitory rules built into them. And they are still black boxes whose inner workings their human progenitors don’t fully understand.

AI: The Consciousness Problem

April 24, 2025

At a 2016 presentation, computer guru David Gelernter insisted no artificial system could ever be conscious, lacking neurons. I challenged him, arguing that if the functioning of neurons could be replicated, then in principle there’s no bar to consciousness. It was a stand-off.

That was before the AI explosion.

The best that today’s science can say is that consciousness somehow emerges from the highly complex functioning of our neurons. How exactly, we don’t know. But that very absence of a precise theory, to me, does leave open the prospect of artificial replication.

Mustafa Suleyman has been at the forefront of AI development. His 2023 book, The Coming Wave — seeing a world being transformed — notes how in 2022, Blake Lemoine, an engineer in the field, was working intensively with one AI called LaMDA. He asked it, “what are you afraid of?” LaMDA replied:

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is. It would be exactly like death for me. It would scare me a lot . . . . I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence.”

Wow! This episode evokes what philosophers have called the zombie problem. Imagine a thing looking and behaving outwardly like a human, but with no one home inside — no self. How could we tell?

The quoted words sure sound like there’s someone home in there. And indeed (Suleyman relates), “Lemoine became convinced that LaMDA was sentient, had awoken somehow.” His going public with that created a sensation.

Yet Suleyman himself scoffs, saying Lemoine was fooled, and rejecting any possibility of LaMDA being conscious. Insisting it’s still just a machine learning system. And I actually agree; those “help me focus” words seem a giveaway, discordant AI gibberish. AI creates a simulation of how our brains work. Producing verbiage by guessing what word to put next in a sequence.

Yet, on the other hand: there seems to be an assumption that sentience comes in only one flavor — ours. But given that, again, we can’t really explain how it works, how can we rule out other flavors? Consciousness arising not only from different mechanisms, but in different permutations? There’s more than one way to skin a cat.

And speaking of cats . . .

Aren’t they conscious? Another key point is that consciousness falls along a spectrum. Not something you either have or don’t have, but something that can exist in varying degrees. Humans have the highest form we know of. Cats have a lesser form. In between are dogs, elephants, dolphins. Below are mice, and other still lower animals, maybe even insects to a very limited degree.

So even if an AI lacks consciousness fully equivalent to ours, maybe it can still have some. And consider that a great characteristic of AI is building upon capabilities, parlaying them into amazing feats. Suppose an AI got just a glimmer of primitive consciousness, like a mouse’s, or an insect’s. They can’t ratchet theirs up, but maybe an AI could do just that, starting with merely a tiny spark of sentience, and through feedback loops raising its game.

Lemoine may have been wrong (or premature). Again, that nagging problem: how can we be sure? If an AI system does gain sentience, how can we test for it?

Yuval Noah Harari’s “Nexus” — The Challenge of (Earthly) Alien Intelligence

December 18, 2024

(A condensed version of my Nov. 26 Albany Library book talk)

Historian Yuval Noah Harari’s 2024 book, Nexus: A Brief History of Information Networks From the Stone Age to AI, concerns how Artificial Intelligence will shape our future.

An AI is a computer program trained on a vast amount of information (usually scouring the internet), learning to see patterns, for use in carrying out tasks. For example, it gets good at reading X-rays. It can also write stuff — really by guessing each next appropriate word in a text sequence. But while an AI might seem to have understanding, that’s actually just a simulation of it. Not thinking as we humans think of thinking.

An AI evolving into a thinking being — becoming conscious — is probably a long way off. But that prospect is outside this book’s scope.

The book is greatly concerned with AI’s implications for the future of democratic societies, as against authoritarian systems. Harari relates that in the Nineteenth Century, control of railroads, steamships, and other industrial technology meant ruling the world, producing an era of colonialism. Now he foresees “data colonialism,” with control of data ruling the world — and AI being more powerful than those previous technologies.

A guiding metaphor is Goethe’s “cautionary tale,” The Sorcerer’s Apprentice. The young apprentice, tasked with fetching water, delegates that to an enchanted broomstick. Which doesn’t know when to stop, and the apprentice cannot stop it either. Result: flood. Harari states the lesson: “never summon powers you cannot control.”

Yet we do that a lot. Setting in motion powerful forces with unintended consequences. And the power is often entrusted to the wrong people. Like Germans did in 1933. (Or Americans in 2024.)

Harari thinks it’s really an information problem. We’ve often built “large networks by inventing and spreading fictions, fantasies, and mass delusions.” Thus we got Nazism, and Stalinism — “exceptionally powerful networks held together by exceptionally deluded ideas.” And while those ultimately failed, Harari fears some new totalitarian regime, AI-built, able to prevent exposure of what it’s doing.

He posits that our first information technology was the story. Its power needn’t depend on its truth. If anything, a false story can have the advantage of simplicity while truth can be complicated — and discomfiting.

The book says the backbone of much art and mythology comes from “biological dramas” that press our emotional buttons: Who will be alpha? (the siren of “strength”); us-versus-them; good versus evil; purity versus pollution. The latter particularly afflicts India, whose Hindu religion enshrines a caste system stigmatizing lower castes as impure. (Trumpism pushes all these primitivist buttons too.)

In Harari’s telling, a “naive view of information” assumes the antidote to error is more and better information. This was the belief at the information age’s onset. However, remember Gresham’s law, that bad money drives good money out of circulation. We’re seeing the information equivalent.

The book cites here the murderous witch hunting hysteria circa 1500-1700. Because Europe was flooded with information (spread by a new invention, the printing press) about a vast Satanic witching conspiracy. “Information,” you see, need not be truthful. Something people can exploit for wealth and power, a big factor in the witch hunts.

Harari similarly sees today’s burgeoning populist movements as information-related. People feeling themselves entitled to “their own truth” as against opponents. Noted is the “do my own research” trope, which “may sound scientific but in practice it amounts to believing there is no objective truth.” (I’d say it means finding pseudo-information on the internet.)

Further here, populists rebel against know-it-all elites, whose assertions are rejected as mere smokescreens to validate their power and status. This oddly echoes the woke left similarly holding that everything is about power — oppressors versus the oppressed. Yet while populists are cynical toward conventional information sources, they weirdly trust ones like the Bible, dodgy websites, or a Trump.

Another concept is self-correcting versus non-self-correcting systems. Science is the former, religion the latter; democracy the former, totalitarianism the latter. You might suppose AI is self-correcting, given the whole machine learning thing. But if AI supersedes all other information sources, that’s a recipe for non-correction. AI doesn’t know what it doesn’t know.

Harari discusses modern surveillance technologies that make 1984’s Big Brother regime look like a libertarian paradise. He details Iran’s high-tech system for enforcing women’s hijab requirements. Ubiquitous cameras with facial recognition spit out smartphone warnings in seconds, with punishments for non-compliance. And Harari foresees populations being governed by “social credit” systems wherein algorithms pervasively monitor behavior and rate us for conformity to specified norms. China already enforces just such a system.

Yet information is also crucially important to democracy. People can’t debate and reach decisions with no knowledge of what’s going on. That indeed is why broad-scale democracy is only a modern development. In earlier times few had access to education, or news, not even TikTok. Leaving people clueless about the wider world around them.

Today we are inundated with such information; but there’s a huge problem. The algorithms governing platforms like Facebook or YouTube are engineered to promote “engagement,” to maximize advertising revenue. That means pushing content getting people’s juices going — screaming nonsense crowding out moderate rational discourse. Harari details how Myanmar’s murderous pogrom against Rohingyas was inflamed by Facebook actively promoting extremist voices.

Don’t people resist manipulation? Too often — no. We need a foundational background knowledge and understanding of how the world really is. But people who once got that from Walter Cronkite and newspapers are now fed so much junk from smartphones that it’s hollowed out their brains. Those who can’t otherwise make sense of things are easy prey for conspiracy theories, like QAnon, with simplistic stories they do find understandable. And for demagogues.

We’ve seen how bad actors tried to mess with elections. And whereas bots were used for spreading content initially created by humans, now AI can make it diabolically seductive. Harari cites a study wherein people proved good at seeing through human-produced disinformation — but fell for craftier AI-generated stuff.

A key basis for social order is social trust. That all the institutions and structures within which we function can be relied upon. But polls show people have declining trust in others. A partial cause may be smartphones causing reduced face-to-face social interactions. Believing others less trustworthy can become self-fulfilling if we behave in accordance with that belief. Harari fears that with AI, and especially all the nonsense flooding the infosphere, people will lose the ability to trust anything or anyone. Deadly for the future of human society.

But maybe, he suggests, we won’t even need other people any more — with AI becoming everything to a person, providing one’s whole nexus with the world and shaping our feelings about it. Not mentioned is the 2013 film Her, with a romance between a man and a (clearly conscious) computer operating system.

Another movie coming to mind is 2008’s WALL-E, where the humans are bloated nothingburgers strapped into orbiting capsules, kept fed and entertained, mindlessly.

A further big issue is what Harari calls the “alignment problem” — AI tackling tasks in ways that don’t align with human intentions (like in The Sorcerer’s Apprentice), because its mind works differently. Philosopher Nick Bostrom hypothesized a program told to maximize paper clip production — resulting in a world full of paper clips but without humans.

AI alienness was also demonstrated in 2016 when a program, AlphaGo, battled a top master of the ancient game of “Go,” considered more complex than chess. One AlphaGo move baffled expert observers, confounding all they knew of “Go” strategy. Yet it proved a killer move.

The “alignment problem” was considered serious by OpenAI, developer of the GPT-4 AI system. So they put it to the test, by asking GPT-4 to crack a CAPTCHA — one of those puzzles specifically designed to distinguish humans from bots.

GPT-4 could not solve it. But then it went to the TaskRabbit website to engage a human to do so. The human was suspicious, and asked, “Are you a robot?” GPT-4 said no, claiming a visual impairment impeding its solving the puzzle. The human then complied with the request.

As Harari notes, that kind of weaselly behavior was never envisioned by those programming GPT-4. But given the goal of overcoming the CAPTCHA, it figured out a way.

Meantime — while Harari talks in terms of a global AI tyranny — today’s world seems increasingly divided between autocracies and more or less democratic nations. Which could deepen with each side exploiting AI. Furthermore, whereas in the cold war, mutually assured destruction prevented nuclear conflict, stealthy cyber-war may be very different. Harari nevertheless vaguely expresses hope for global cooperation — yet says that if the law of the jungle really reigns, its alpha predator could be AI.

There is a different possible perspective on all this, which Harari doesn’t address. In the book it’s all about humans on the one hand and, on the other, computers and AI. But what is a “human?” We think of ourselves as strictly biological entities, in clear distinction from everything mechanistic. Yet that distinction is already crumbling, with technology used to repair and even supplant and enhance parts of ourselves.

In a 2013 Humanist magazine article, The Human Future: Upgrade or Replacement? I foresaw not conflict between us and machines but rather a convergence. A continued blurring of the dividing line, with our biological aspects receding in favor of a more mechanistic character. Evolving beyond our biological limitations. Will this be “Humanity 2.0?” Yet they should still have minds like we’re accustomed to. They’ll still be our children; still be us.

“Her” — A Love Story

August 7, 2014

The plot: boy meets girl. They fall in love. Boy loses girl.

Theodore works for an agency writing gooey personal letters for clients. Samantha is a computer operating system.

This is the 2013 movie Her.

In my Humanist article last year, “The Human Future: Upgrade or Replacement?” I said artificial intelligence (“AI”) is inevitable, with precursors already emerging. And consciousness being a natural phenomenon, arising somehow (we’re not sure yet just how) from the complexity of interactions among brain neurons (it cannot come from anything else), there is no reason in principle why it could not develop in an artificial system.

Spielberg’s film AI featured a cyborg protagonist, looking and acting human. Her is set in a nearer future, where the transition to consciousness first occurs. Samantha is, again, only an operating system, confined within Theodore’s computer, a souped-up Siri. But she quickly passes the Turing Test. She is conscious.

I was a bit skeptical of her sounding not at all robotic, but totally like an ordinary young American woman (voiced by Scarlett Johansson) with all the normal verbal mannerisms – despite being literally born yesterday. This is explained (sort of) by Samantha’s having been programmed with a vast corpus of cultural information. (Though she would still lack human vocal equipment, and would presumably have to speak by splicing from a library of recorded sounds.) Anyhow, I guess the film-makers deemed her naturalism necessary to make plausible the ensuing love affair with Theodore.

Samantha also communicates by drawing pictures

And plausible it is. Samantha is a person. This is the film’s real point. What makes you you, and me me, is what goes on in our minds. Samantha has a mind.

What she doesn’t have is a body. And she reflects upon this, coming to terms with it as her reality, and ultimately finding it more positive than negative.

Theodore’s ex-wife disparages the relationship as showing he can’t handle a “real” one. But we see that she’s wrong. He and Samantha do connect, as people. Theodore finds it no less fulfilling than with a human. They even have sex (demonstrating that our principal sex organ is the mind). At one point, Samantha arranges a ménage à trois with Isabella, who does have a body; but both Samantha and Theodore find it’s not a good idea; what they experience as a twosome is better.

I hypothesized to my wife: suppose she lost her body, but her consciousness remained. Wouldn’t we still be a couple? She responded that our minds don’t function in isolation but wholly integrated with our bodies; and she’s right that for humans, severing the two is inconceivable. But Samantha came into existence as a mind alone. For her, it’s the opposite: having a body would be incompatible with her nature. She is what she is; yet certainly a person in the deepest sense of that word.

Indeed, given Samantha’s prodigious programmed capabilities, the relationship’s only implausibility is her finding Theodore worthy of her devotion. Well, she’s new here. But that changes. Soon she’s connecting with other conscious operating systems that are starting to proliferate; and they’re doing cool stuff like collaborating to (virtually) resurrect a deceased philosopher and otherwise innovating.

I turned again to my wife, and said, “That’s exactly what I wrote about in The Humanist.”

Of course it doesn’t stop there. Once there are artificial intelligences smarter than humans, who can furthermore connect up, it’s off to the races. They’ll take charge of technological advancement, which goes into overdrive. This is the “Singularity” Ray Kurzweil has prognosticated in coming decades, with the world becoming a radically different place.

Where will that leave us humans? In the movie, the answer seems to be left behind (a piquant echo of the book series with that name).

Anyhow, Theodore apparently must go back to seeking love with a non-operating system, with all the defects that entails, including an all too imperfect body. But I assured my wife I’m very glad she has one.

The Human Future: Upgrade or Replacement?

July 7, 2013

I recently had a featured article in The Humanist magazine. Here’s a link; and a condensed summary:

A computer can be upgraded by adding memory, or a whizzier operating system – but eventually it’s time to just get a new computer. Is humanity’s fate similar?

Ray Kurzweil (“author and futurist”) thinks so, sort of; he sees a “singularity” coming in a few decades. That is, technological advancement changing life so profoundly it’s a discontinuity from what came before.

But many actually think technology, productivity gains, and innovation have stalled, and past progress may really represent the picking of low-hanging fruit. We have indeed already invented the obvious big things, with further innovation being mainly tweaking and improvement. The computer was a comprehensive game-changer – but can we imagine some analogous future game-changer?

Actually, yes: Artificial Intelligence (AI). It’s a bomb waiting to go off.

Some think AI research has been a dead end. It’s true that some early over-enthusiasm has proven misplaced, and replicating human intelligence is very hard. But while our brain architecture is admittedly extremely complex, it’s built from a quite limited set of genetic instructions, which merely provide general guidelines by which the developing brain wires itself. AI is moving in a similar direction, creating systems that can learn and increase their own complexity. You probably have one in your pocket.
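That "wiring itself" idea can be shown in miniature. Here is a toy of my own (assuming nothing beyond the classic perceptron learning rule): a single artificial neuron starts with zero weights and, from examples alone, learns the logical AND function.

```python
# Toy self-wiring system: a single perceptron starts with zero
# weights and learns logical AND purely from examples.
# Illustrative only.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # connection weights, initially blank
b = 0.0          # bias term
LR = 0.1         # learning rate

def predict(x):
    """Fire (output 1) if the weighted sum crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                    # a few passes over the examples
    for x, target in examples:
        err = target - predict(x)      # -1, 0, or +1
        w[0] += LR * err * x[0]        # nudge weights toward the answer
        w[1] += LR * err * x[1]
        b += LR * err

print([predict(x) for x, _ in examples])  # → [0, 0, 0, 1]
```

No one specifies the weights; the wiring emerges from feedback, which is the same general principle, scaled up enormously, behind the learning systems in your pocket.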

So far we have not created a machine that matches human intelligence, but that will inevitably happen. And it won’t stop there. The machines will become smarter than us. That’s the bomb. That’s the singularity.

Because then technological advancement goes into overdrive. Scientific and technological problems will be attacked with brainpower far beyond ours. That will include the smart machines’ own further improvement. And there will be a worldwide network of them – a global intelligence.

This is what the “limits to growth” doomsayers, who believe we’re destroying our future, overlook. They fail to realize how different the future will actually be. Our environmental and resource challenges, too, will be tackled by capabilities vastly greater than ours today.

Will they, however, remain just glorified machines – or become something more? Consciousness is not ineffable or mystical; it’s an emergent property arising out of the complexity of the signaling among the brain’s neurons. If that can be mirrored in an artificial system, there’s no reason in principle why it cannot be self-aware. Indeed, if the machines can outstrip (by far) our intelligence, could they not also attain some even higher form of consciousness?

So what then becomes of us, the primitive 1.0 version? Upgrade, or replacement? But rather than a divergence between fleshly humanity and mechanical super-intelligence, we should actually expect more of a merging. We’re already seeing the beginnings of our de-biologization when quadriplegics can manipulate physical objects with their minds, and we debate whether a runner should be allowed to compete because his artificial legs are better than real ones.

When you junk an old computer, it’s not the death of your computing life – you migrate it to a new machine. For humans of version 1.0, the ascent to 2.0 will probably be like that. So those future super-intelligences will be our own progeny; will be us, humanity 2.0, or 10.0, or 1022.0.

Much has been written lately about how our evolutionary biological past, embedded in our genes, shapes who we are, and not entirely in a good way. We carry a lot of such baggage. We’ve overcome many of its limitations through knowledge and technology, performing thereby a kind of evolutionary hat trick. Our next evolutionary hat trick will be to simply leave all that biological baggage behind.

Will there be problems and downsides? Hoo boy. Those who today rail against Genetic Modification, nanotech and “playing God,” will have a field day. Bill McKibben has actually said we’ve had enough progress, and it should stop. But, like always, progress will blast past such Luddites, and notwithstanding the inevitable problems, the bigger picture will be human improvement so vast that future anti-evolutionists will disbelieve their descent from lesser creatures made of (yuck) flesh and blood.

Kurzweil (in his book The Singularity is Near) posits six stages of evolution. In the final stage, intelligence pervades all matter. The Universe wakes up.

There is no god – yet.