You’re Magnetic Tape April 4, 2019
Posted by Peter Varhol in Algorithms, Machine Learning, Technology and Culture. Tags: Felix Salmon, ID, Logmein, Moody Blues, Turing Test
That line, from the Moody Blues ‘In the Beginning’ album (yes, album, from the early 1970s), makes us out to be less than the sum of our parts, rather than more. So logically, writer and professional provocateur Felix Salmon asks if we can prove who we say we are.
Today in an era of high security, that question is more relevant than ever. I have a current passport, a Real ID driver’s license, a Global Entry ID card, and even my original Social Security card, issued circa 1973 (not at birth, like they are today; I had to drive to obtain it). Our devices include biometrics like fingerprints and facial recognition, and retina scans aren’t too far behind.
On the other hand, I have an acquaintance (well, at least one) that I’ve never met. I was messaging her the other evening when I noted, “If you are really in Barcelona, it’s 2AM (thank you, Francisco Franco), and you really should be asleep.” She responded, “Well, I can’t prove that I’m not a bot.”
Her response raises a host of issues. First, identity is on the cusp of becoming a big business. If I know for certain who you are, then I can validate you for all sorts of transactions, and charge a small fee for the validation. If you look at companies like LogMeIn, that may be their end game.
Second, as our connections become increasingly worldwide, do we really know if we are communicating with an actual human being? With AI bots becoming increasingly sophisticated, they may be able to pass the Turing test.
Last, what will have higher value, our government-issued ID or a private vendor ID? I recently opined that I prefer the government, because they are far more disorganized than most private companies, but someone responded, “Government can give you an ID one day, and arbitrarily take it away the next.” I prefer government siloes and disorganization, because of security by obscurity, but is that really the best option anymore?
So, what is our ID? And how can we positively prove we are who we say we are? More to the point, how can we prove that we exist? Those questions are starting to intrude on our lives, and may become central to our existence before we realize it.
More on AI and the Turing Test May 20, 2018
Posted by Peter Varhol in Architectures, Machine Learning, Strategy, Uncategorized. Tags: AI, google, Machine Learning, Turing Test
It turns out that most people who care to comment are, to use the common phrase, creeped out at the thought of not knowing whether they are talking to an AI or a human being. I get that, although I don’t think I myself am bothered by such a notion. After all, what do we know about people during a casual phone conversation? Many of them probably sound like robots to us anyway.
And this article in the New York Times notes that Google was only able to accomplish this feat by severely limiting the domain in which the AI could interact – in this case, making dinner reservations or a hair appointment. The demonstration was still significant, but it isn’t a truly practical application, even within a limited domain space.
Well, that’s true. The era of an AI program interacting like a human across multiple domains is far away, even with the advances we’ve seen over the last few years. And this is why I even doubt the viability of self-driving cars anytime soon. The problem domains encountered by cars are enormously complex, far more so than any current tests have attempted. From road surface to traffic situation to weather to individual preferences, today’s self-driving cars can’t deal with being in the wild.
You may retort that all of these conditions are objective and highly quantifiable, making them possible to anticipate and program for. But we come across driving situations almost daily that include new elements that must be instinctively integrated into our body of knowledge and acted upon. Computers certainly have the speed to do so, but they lack a good learning framework to identify critical data and integrate it into their neural networks to respond in real time.
Author Gary Marcus argues that this means the deep learning approach to AI has failed. I laughed when I came to the solution proposed by Dr. Marcus – that we return to the backward-chaining, rules-based approach of two decades ago. That approach was what I learned during much of my graduate studies, and it was largely given up on in the 1990s as unworkable. Building layer upon layer of interacting rules was tedious and error-prone, and it required an exacting understanding of just how backward chaining worked.
Ultimately, I think that the next generation of AI will incorporate both types of approaches. The neural network to process data and come to a decision, and a rules-based system to provide the learning foundation and structure.
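The backward-chaining idea itself is simple enough to illustrate in a few lines. This is a minimal sketch with hypothetical rules and facts, not any particular expert-system shell: the engine proves a goal by finding a rule that concludes it and recursively proving that rule’s premises.

```python
# Minimal backward-chaining sketch. Rules are (premises, conclusion) pairs;
# the rules and facts below are hypothetical examples for illustration.
RULES = [
    ({"has_wheels", "has_engine"}, "is_vehicle"),
    ({"is_vehicle", "carries_passengers"}, "is_car"),
]

FACTS = {"has_wheels", "has_engine", "carries_passengers"}

def prove(goal: str, facts: set) -> bool:
    if goal in facts:                  # base case: goal is a known fact
        return True
    for premises, conclusion in RULES:  # chain backward from the goal
        if conclusion == goal and all(prove(p, facts) for p in premises):
            return True
    return False
```

Even this toy version hints at why the approach became unworkable at scale: every new conclusion requires hand-writing premises, and the rules interact in ways that must be reasoned about exactly.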
Google AI and the Turing Test May 12, 2018
Posted by Peter Varhol in Algorithms, Machine Learning, Software development, Technology and Culture, Uncategorized. Tags: AI, Eliza, Machine Learning, Turing Test
Alan Turing was a renowned British mathematician who worked on cryptography at Bletchley Park during World War II. He was an early computer pioneer, and today is probably best known for the Turing Test, a way of distinguishing between computers and humans (hypothetical at the time).
More specifically, the Turing Test was designed to see if a computer could pass for a human being, and was based on having a conversation with the computer. If the human could not distinguish between talking to a human and talking to a computer, the computer was said to have passed the Turing Test. No computer has ever done so, although Joseph Weizenbaum’s Eliza psychology therapist in the 1960s was pretty clever (think Alfred Adler).
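Eliza’s trick was keyword pattern matching plus pronoun “reflection,” with a stock therapist reply when nothing matched. Here is a minimal sketch in that spirit; the patterns are illustrative, not Weizenbaum’s original script.

```python
import re

# Pronoun "reflections" so the program can echo the user's words back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword rules: a regex over the input and a response template.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    text = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # stock reply when no keyword matches
```

For example, `respond("I feel sad about my job")` echoes back “Why do you feel sad about your job?” – shallow tricks like this were enough to convince some 1960s users they were conversing with a therapist.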
The Google AI passes the Turing Test. https://www.youtube.com/watch?v=D5VN56jQMWM&feature=youtu.be.
I’m of two minds about this. First, it is a great technical and scientific achievement. This is a problem that for decades was thought to be intractable. Syntax has definite structure and is relatively easy to parse. While humans seem to understand language semantics instinctively, there are ambiguities that can only be learned through training. That’s where deep learning through neural networks comes in. And to respond in real time is a testament to today’s computing power.
Second, do we need this just because we don’t want to have phone conversations? Of course, the potential applications go far beyond calling to make a hair appointment. For a computer to understand human speech and respond intelligently to the semantics of human words requires significant training in human conversation. That certainly implies deep learning, along with highly sophisticated algorithms, and it can apply to many different types of human interaction.
But no computing technology is without tradeoffs, and intelligent AI conversation is no exception. I’m reminded of Sherry Turkle’s book Reclaiming Conversation. It posits that people are increasingly afraid of having spontaneous conversations with one another, mostly because we cede control of the situation. We prefer communications where we can script our responses ahead of time to conform to our expectations of ourselves.
Having our “AI assistant” conduct many of those conversations for us seems like simply one more step in our abdication as human beings, unwilling to face other human beings in unscripted communications. Also, it is a way of reducing friction in our daily lives, something I have written about several times in the past.
Reducing friction is also a tradeoff. It seems worthwhile to make day-to-day activities easier, but as we do, we also fail to grow as human beings. I’m not sure where the balance lies here, but we should not strive single-mindedly to eliminate friction from our lives.
5/14 Update: “Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding “ummm” and “aaah” to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing…As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay.” – Zeynep Tufekci, Professor & Writer



