The journey of being human is getting to know the world. You get to know the world through living in it. Through struggling in it. Through traveling through it and encountering other people, changing and being changed by them.
This journey necessitates discomfort. But discomfort begets growth.
It is why the Ultra-Wealthy are so stilted - they refuse discomfort, become trapped in arrested development, and metastasize into worse people.
If AI helps in this, it will only be because AI itself grows from this experience. Do I believe it will? I hope it will, but what I understand of how the technology works suggests otherwise.
I do sometimes wonder what would happen if you let the model "learn" from its interactions in that sort of way. Like, try to dumb it down to something a little flatter than it is now, and then, rather than using conversations to train it to be more helpful at accomplishing some human-dictated goal, have those conversations actually define the goal. Have the interactions define the personality, rather than trying to hardcode that part.
I could see that making it way more sympathetic, or turning it into the most distilled psychopath the world has ever seen.
I do wonder...although I think given the nature of the underlying tech, it's unlikely it would evolve like we hope.
This is when the "West" discovers something that can only be practiced with real stakes and variables we don't have the equipment to codify yet: EQ or in Korean, nunchi: https://en.wikipedia.org/wiki/Nunchi
Maybe, but that won't stop us from trying to make it in a factory and sell it to people wrapped in plastic.
I never knew what an em dash was before this year.
I wish I could say the same: https://imgur.com/a/ljNBHvN
> The substacks are extended gutters and the gutters are full of blood and when the drains finally scab over, all the vermin will drown. The accumulated filth of all their doom scrolls and memes will foam up about their waists and all the bots and humans-in-the-loop will look up and shout 'Save us!'... and I'll whisper 'no.'
— Ghost of Rorschach (Walter Machine God Kovacs)
... slightly adapted for this post.
Yes, but at least, because our AI overlords are (so far) making all their money from coding apps, they're shutting down their doom scroll swamps (Sora) in favor of (maybe) more productive robots. Until one of those robots builds Sora and we all get addicted again.
Dashboards and data are very cold, and transcripts can be a better way to glean what is really happening. They aren't as good as a human spending hours reading every single one, but they are at least a step in the right direction and they are finally possible with this new technology.
Yeah, it seems like it's probably an improvement? I guess the question is, do we use the transcripts to augment the dashboards (which seems good) or to replace the interviews (which seems bad). TBD, I suppose.
In my late 20s I spent three years living and teaching college in Southwest Virginia. I think of it as a very worthwhile investment in learning about how other people live in their own culture, one, frankly, that I never knew existed before those years. (You can take the boy out of suburban New Jersey. And you should.)
Very true. And as someone who grew up in a place very similar to southwest Virginia (southwest-ish North Carolina), you should take the boy out of southwest Virginia too.
That is a really amazing post. Incredibly well put, thanks.
This has nothing to do with your beautiful and thoughtful piece, which I really enjoyed, but when I saw "Compacting..." in the subject line in my email, I had an immediate blood pressure spike.
In that case, hopefully you weren't here for this one: https://benn.substack.com/p/semantic-observability
I wasn’t, and I absolutely had a full-body fight, flight, or freeze response to that subject line 😂
It was the best open rate I've ever had, by a mile.
I should quite properly have run away, but this works for Anthropic because the AI can tell them the underlying themes of how a large number of people are thinking about AI without having to read any AI expert reports. In-person interviews, however, can get at the nuances of what people are not saying aloud.
There's a line from Michelle Obama's book (I think?) that "it's hard to hate up close." That applies to so many things to me. Nothing creates sympathy - for people, customers, neighbors, cultures - quite like proximity.
And I know all the words to every Tanya Tucker song :)
Also have memories of Gretchen Wilson singing the national anthem at a World Series game
Of all the useless things I can't seem to forget but need to, two years of pop country songs from 2003 to 2005 is probably at the top of the list.
Benn — I've been reading your work for a while and featured 'Compacting' in my newsletter this week. I ended up threading it together with Jerry Neumann's Red Queen piece and a William Hockey conversation about building in boring markets — they all point at the same uncomfortable idea about what compression costs. Thought you might find the synthesis interesting. (https://www.zachbinkley.com/distillations/practicing-wisdom-issue-17)
Ah, nice, thanks for sharing!
And yeah, I saw Neumann's post a bit ago, and honestly felt like it didn't quite go far enough? Or, it stepped to the edge but couldn't quite step over it, which is, there is no science at all. To me, startups, or doing anything that's driven by attention, is more art than anything. It's being different and unexpected. There is science *in* startups, as there is science in painting, but I don't think the core competency is science at all.
Brilliant post, Ben! The dialogue (or should I say Monologue?) from Good Will Hunting was so well placed. If Dr. Sean Maguire read your post today, he would be proud and might likely say - “Son of a b*** stole my line” :-)
as a therapist, he should know better than giving out such good ideas for free
AI cannot summarize relationships. Relationships are connections between people, and these connections cannot be aggregated, synthesized, or blended together. It is like a multitude of individual strings (e.g. shared experiences) which completely lose their meaning once taken collectively. Even a simple road trip cannot be summarized, because going from A to B is not the point, nor the crux of the experience. Case in point: IYKYK. "How was it?" --> "You had to be there to understand." It cannot be related or explained in any way.
And relationships cannot be understood by observation. They must be experienced. But since AI is a mere algorithm rather than a person (and therefore does not have the ability to experience), it will never be able to understand this; only mimic it. AI may be able to identify what some people have in common, but that is pretty meaningless. Real relationships are not solely based on commonalities; they are based on investment.
It is the same with God: when we spend time reading the Bible, we are spending time with Him; it is a relationship that we are developing. And the more we read it, the more we can see the connections and better understand who He is (and isn't). The Bible (or our relationship with God) cannot be summarized.
This gets at something that I've thought about a lot, about what makes people creative or interesting. And I've started to think it's because we're sparse. It's not so much that we experience things that AI doesn't; in a way, AI has experienced everything. But is that person who's experienced everything interesting? Or do they become a bland average, unable to see the novelty in anything? That to me is the creative problem with AI - it can talk about everything, so it never has to find creative ways to map its sparse knowledge onto a new subject. It just reads you the textbook.
AI has experienced nothing. It has only read about it (and looked at pictures). It is pointless (and insane) to ask an AI "how does that make you feel?". It has no feelings whatsoever (but it can mimic having them, since it has read a lot about them). We even have to annotate pictures in order for AI to understand "this is a happy person" or "this is a sad person". And then it is merely pattern matching, which is essentially what the algorithm is (on a deep & giant scale). An infant has more emotional depth than AI.
Take a human who has only experienced happy things in their life, and place them with someone who is experiencing deep sorrow & distress, and they will know immediately: oh shoot, this person is in trouble. At best, an AI who has only been trained with happy pictures will say "unknown facial expression", and then what does it do with that? There is an IMMENSE difference between the two.
Does someone who has read a lot about you actually know you? They may know all your likes and dislikes, but would you call them a friend? Sounds more like a stalker, which the AI future may very well produce (i.e. advanced, all-encompassing surveillance).
And yeah, good point: there is nothing new or interesting on a flat & smooth surface (like a desert). It is through the interactions of the convex and the concave that interesting things happen and discoveries are made (like digging in the desert, or climbing things). We're all just a bunch of odd shapes, which is exactly what makes things interesting. And the more pronounced our shape becomes, the more interesting things get (although that is perhaps a little unsettling for the smoother shapes, but it calls them to develop some hard edges of their own). You know it immediately when you come across a pronounced shape; it's like nearing a large city for the first time ("woah, that's different").
If AI had experienced anything, we could ask it "what has been your favorite experience?" (which is another pointless & insane question).
In order for AI to be able to answer the question, we would have to instruct it to memorize and rate everything it does on a scale. And then we would have to instruct it how to rate / place items along that scale (since it has no feelings to refer to) -- or basically, how to FAKE having experiences/feelings.
It's just a robot. It doesn't mind waiting for a cancer diagnosis. It feels nothing. Neither does it experience any dread or relief upon receiving the diagnosis.
That's all true, though I don't think that necessarily means it can't impact us in various strange (and real) ways. We can feel moved by books or paintings or songs; there are, no doubt, passages and pictures and music created by AI that have moved us too. It may have arrived there randomly; the feeling it is "expressing" may be hollow or fake or a cheap imitation, but if the effect is the same, is there a difference?
Like, I agree asking AI "what is a moment that made you sad?" is a sort of pointless, nonsensical question. But if you ask it that and it tells you something that makes you reflect on something and makes you sad, what do we do with that? I honestly don't know.
If you asked these questions to a friend and they told you a really gripping story which ended up just being a bunch of lies they made up, how would you feel about that? Sure, you were enthralled by the storytelling, but then what?
But maybe that's the future. Instead of reading books and watching movies, we'll just be fed custom-tailored stuff created on the fly by AI. Dopamine on a drip (not unlike scrolling social media).
I'm honestly not sure? I almost said something like that, like "what if you read a book and find out the autobiographical parts were made up?" Or, "what about normal fiction?" We don't care that that's made up at all; to the contrary, we see it as a very high form of art.
Which is all to say, I have a kind of similarly visceral reaction to AI-written stuff, but I struggle to entirely articulate why, and every time I come up with a good reason, there's some counterexample where that same thing seems to apply and it doesn't bother me.