33 Comments
Marco Roy:

AI cannot summarize relationships. Relationships are connections between people, and these connections cannot be aggregated, synthesized, or blended together. It is like a multitude of individual strings (e.g. shared experiences) which completely lose their meaning once taken collectively. Even a simple road trip cannot be summarized, because going from A to B is not the point, nor the crux of the experience. Case in point: IYKYK. "How was it?" --> "You had to be there to understand." It cannot be related or explained in any way.

And relationships cannot be understood by observation. They must be experienced. But since AI is a mere algorithm rather than a person (and therefore does not have the ability to experience), it will never be able to understand this; only mimic it. AI may be able to identify what some people have in common, but that is pretty meaningless. Real relationships are not solely based on commonalities; they are based on investment.

It is the same with God: when we spend time reading the Bible, we are spending time with Him; it is a relationship that we are developing. And the more we read it, the more we can see the connections and better understand who He is (and isn't). The Bible (or our relationship with God) cannot be summarized.

Benn Stancil:

This gets at something that I've thought about a lot, about what makes people creative or interesting. And I've started to think it's because we're sparse. It's not so much that we experience things that AI doesn't; in a way, AI has experienced everything. But is that person who's experienced everything interesting? Or do they become a bland average, unable to see the novelty in anything? That to me is the creative problem with AI - it can talk about everything, so it never has to find creative ways to map its sparse knowledge onto a new subject. It just reads you the textbook.

Marco Roy:

AI has experienced nothing. It has only read about it (and looked at pictures). It is pointless (and insane) to ask an AI "how does that make you feel?". It has no feelings whatsoever (but it can mimic having them, since it has read a lot about them). We even have to annotate pictures in order for AI to understand "this is a happy person" or "this is a sad person". And then it is merely pattern matching, which is essentially what the algorithm is (on a deep & giant scale). An infant has more emotional depth than AI.

Take a human who has only experienced happy things in their life, and place them with someone who is experiencing deep sorrow & distress, and they will know immediately: oh shoot, this person is in trouble. At best, an AI that has only been trained on happy pictures will say "unknown facial expression", and then what does it do with that? There is an IMMENSE difference between the two.

Does someone who has read a lot about you actually know you? They may know all your likes and dislikes, but would you call them a friend? Sounds more like a stalker, which the AI future may very well produce (i.e. advanced, all-encompassing surveillance).

And yeah, good point: there is nothing new or interesting on a flat & smooth surface (like a desert). It is through the interactions of the convex and the concave that interesting things happen and discoveries are made (like digging in the desert, or climbing things). We're all just a bunch of odd shapes, which is exactly what makes things interesting. And the more pronounced our shape becomes, the more interesting things get (although that is perhaps a little unsettling for the smoother shapes, but it calls them to develop some hard edges of their own). You know it immediately when you come across a pronounced shape; it's like nearing a large city for the first time ("woah, that's different").

Marco Roy:

If AI had experienced anything, we could ask it "what has been your favorite experience?" (which is another pointless & insane question).

In order for AI to be able to answer the question, we would have to instruct it to memorize and rate everything it does on a scale. And then we would have to instruct it how to rate / place items along that scale (since it has no feelings to refer to) -- or basically, how to FAKE having experiences/feelings.

It's just a robot. It doesn't mind waiting for a cancer diagnosis. It feels nothing. Neither does it experience any dread or relief upon receiving the diagnosis.

Benn Stancil:

That's all true, though I don't think that necessarily means it can't impact us in various strange (and real) ways. We can feel moved by books or paintings or songs; there are, no doubt, passages and pictures and pieces of music created by AI that have moved us too. It may have arrived there randomly; the feeling it is "expressing" may be hollow or fake or a cheap imitation, but if the effect is the same, is there a difference?

Like, I agree asking AI "what is a moment that made you sad?" is a sort of pointless, nonsensical question. But if you ask it that, and it tells you something that makes you reflect on something and makes you sad, what do we do with that? I honestly don't know.

Marco Roy:

If you asked a friend these questions and they told you a really gripping story that turned out to be a bunch of lies they made up, how would you feel about that? Sure, you were enthralled by the storytelling, but then what?

But maybe that's the future. Instead of reading books and watching movies, we'll just be fed custom-tailored stuff created on the fly by AI. Dopamine on a drip (not unlike scrolling social media).

Benn Stancil:

I'm honestly not sure? I almost said something like that, like "what if you read a book and find out the autobiographical parts were made up?" Or, "what about normal fiction?" We don't care at all that that's made up; on the contrary, we see it as a very high form of art.

Which is all to say, I have a kind of similarly visceral reaction to AI-written stuff, but I struggle to entirely articulate why, and every time I come up with a good reason, there's some counterexample where that same thing seems to apply and it doesn't bother me.