
Futurist Ray Kurzweil has foreseen a coming “Singularity” when artificial intelligence outstrips ours, then widens the gap exponentially by taking over its own further development. Making a new and different world. Some fear this threatens humankind.
Melanie Mitchell’s book Artificial Intelligence: A Guide for Thinking Humans was written in 2019, before the field really exploded with ChatGPT in 2022. Yet the book usefully explores relevant fundamental questions. Mainly: what is intelligence, really? Like humans have.

Back in 2016, I attended a talk by computer guru David Gelernter, who deemed artificial consciousness impossible, insisting consciousness requires neurons. I challenged this in the Q&A, arguing that if neurons’ functioning could be replicated artificially, there’s no bar to consciousness. It’s not magic.
Mitchell’s book might make consciousness seem impossible — even for humans. There’s a recurring trope: it’s the easy stuff that’s hard. Meaning the ways our minds function, virtually effortlessly, negotiating through everyday life. “Common sense” is another repeated notion. It turns out all this is not simple at all.
Actually, in terms of raw intelligence, artificial systems already far outstrip human brains. Being able to access vastly more information, analyze it, put it together, draw conclusions. And yet — a key Mitchell point — what they cannot do is understand.
That’s the big difference. Our minds arise out of the functioning of our neurons, processing information. An AI processing information may seem analogous. But the processing in our brains results in consciousness, in understanding, that artificial systems cannot (yet) come close to.

Consciousness means not just thinking but thinking about our thinking. Neuroscientist Antonio Damasio elucidated this in Descartes’ Error. Our minds create, for a perception or idea, a representation of it. The self perceives that representation, by means of a further representation: of the self perceiving the initial one. And it may require a still higher-order representation. It gets complicated.
An aspect here is feelings. Part of having a sense of self. Artificial systems lack that, and hence cannot want anything. They only “want” what they’re programmed for. At one point Mitchell writes of an AI program learning to improve its performance by earning points for certain results. But I wondered: why would an AI care about such points? It cannot care about anything.
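That point-earning scheme is what computer scientists call reinforcement learning: the program adjusts itself to maximize a numerical reward, with no caring involved anywhere. A minimal sketch (my own toy example, not Mitchell’s): an agent that learns, by trial and error, which of three actions earns the most points.

```python
import random

def train_bandit(rewards, episodes=1000, eps=0.1, seed=0):
    """Epsilon-greedy learning for a simple bandit problem.
    rewards[a] is the (deterministic) point payout for action a."""
    rng = random.Random(seed)
    values = [0.0] * len(rewards)   # estimated points per action
    counts = [0] * len(rewards)
    for _ in range(episodes):
        if rng.random() < eps:      # occasionally explore at random
            a = rng.randrange(len(rewards))
        else:                       # otherwise exploit the best estimate
            a = max(range(len(rewards)), key=lambda i: values[i])
        r = rewards[a]              # the "points" earned
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]   # running average
    return values

vals = train_bandit([0.0, 1.0, 0.2])  # action 1 pays best
```

The arithmetic converges on the highest-paying action; nothing in it resembles wanting the points. That is the whole mechanism.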

So how, exactly, does all this happen in our brains? This has been called the “hard problem.” An understatement. Our science isn’t really close to solving it. And Mitchell’s contrasting what our brains do do, versus what (extremely sophisticated) artificial systems do, makes the former seem so dauntingly complex as to be virtually impossible. Yet of course we know even nitwits do it, effortlessly.
Mitchell discusses in depth how artificial intelligence work has developed over decades. Basically, the thrust has been to equip artificial systems with vast libraries of knowledge which they can use to analyze problems. For example, IBM’s “Watson” program that could answer “Jeopardy” questions.

But what modern systems like ChatGPT do seems different — not just answering questions. These “large language models” can write essays, poems, songs. An AI-created song, even including an ersatz singer singing it, has now topped the charts. Mitchell notes a test where some music mavens were given a lesser-known Liszt composition versus an AI-created Liszt mimic. They mistook the mimic for the real Liszt.
However much all this seems like intelligence at work, we’re still assured it’s indeed artificial and not true intelligence (like ours). More specifically, all a program like ChatGPT does is simply to guess the next word in a sequence. Writing whole books that way.
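To make that “guess the next word” idea concrete, here is a toy sketch of my own (real systems like ChatGPT use vast neural networks over word fragments, but the generation loop has the same shape): count which word most often follows each word, then repeatedly emit the likeliest next one.

```python
from collections import Counter, defaultdict

def bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        follows[w][nxt] += 1
    return follows

def generate(follows, start, n=5):
    """Repeatedly 'guess the next word': pick the most frequent follower."""
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = bigram_model(corpus)
```

Here generate(model, "the", 1) yields "the cat", because “cat” follows “the” most often in the toy corpus. Scale the statistics up enormously and the guesses start looking like essays.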
But just for a laugh, I asked ChatGPT for a Valentine’s poem for a wife who’s herself a poet and also an AI aficionado. It produced a fairly clever poem riffing on those elements — with a cute funny ending, which it was hard to believe wasn’t planned by the “writer” from the outset. (Read it here: www.fsrcoin.com/AI.htm)
Writing this essay, it feels a lot like I too work by simply guessing each next appropriate word. Yet I do have some overall ideas in mind, that I’m putting into words, one by one. I have understanding.
Which brings us back to the key point. An AI simulates understanding, without actually having it. And let’s be more concrete about this. Mitchell goes into some depth explaining how a human mind, from an extremely early age, develops a common-sense understanding of how the world works. Simple concepts: that a smaller object isn’t visible behind a larger one; that objects fall down, not up; etc. Such things may seem obvious, but an AI operates without this sort of knowledge. Mitchell cites one effort to instruct an AI with a full repertoire of such simple understandings. It failed because millions of such precepts would have been required.
Another point: integral to our consciousness is its continuity, throughout one’s life. Even while asleep. Does an AI have an existence like that, just quietly waiting to be given a query? It seems like a wholly different sort of being.
In the end, Mitchell returns to the idea of artificial systems gaining general intelligence, far surpassing anything humans are capable of — overcoming all the “annoying limitations” we’re subject to, all our irrationality and cognitive biases, our slowness, emotions, etc. Thus attaining “super-intelligence.” However, she suggests, all those supposed human limitations are actually integral to our general intelligence, making us what we are, “rather than narrow savants.” Better really.
Understanding and consciousness go hand in hand. The idea of an AI arising into consciousness is a gigantic matter. Meaning AI personhood; becoming not our tools but our brethren. Again, if we do not understand exactly how our own consciousness arises, we do know there must be something about our neuronal functioning that creates it. So it’s logically conceivable that at some point, artificial systems could have a complexity of information processing at a level sufficiently comparable to ours to produce consciousness.

However, it’s almost surely wrong to envision a consciousness suddenly bursting forth that’s fully equivalent to the human kind. Consciousness is not either-or; it falls along a spectrum, with human-level consciousness at the top (at least on this planet). Other creatures, like chimps, elephants or dolphins, apparently have something close; dogs and then cats descend the scale, followed by mice and lower animals; insects may have a very rudimentary sort of consciousness. Could an existing AI already have something like that? How could we tell? Moreover, could there be a kind of consciousness differing from what we’re familiar with? Could we recognize it?