Driverless Cars Are Decades Away, If At All May 31, 2021
Posted by Peter Varhol in aviation, Machine Learning, Technology and Culture.Tags: autonomous vehicles, driverless cars, Tesla
I am not a fan of Elon Musk. While at one level I appreciate his audaciousness, which seems to enable him to accomplish impossible goals through sheer force of will, the arrogance through which that force is delivered tends to cheapen it for me. It is perhaps fortunate that he cares not one whit about what I think.
Nevertheless, one area that we disagree on starkly is the self-driving car. Musk recently released a new version of Autopilot for Tesla, which he is referring to as Vision. He believes that Vision will enable Tesla to achieve full driverless experiences within two years.
Um, no. While Autopilot and Vision might seem a bit like magic, they have serious limitations. And the complexity inherent in fully self-driving cars is far greater than anything we have tackled to date. We tend to liken the problem to aircraft autopilots, which are charged mostly with maintaining straight and level flight on a given course. Modern autopilots can also successfully land a plane, but that is a well-understood and relatively simple maneuver.
Equating self-driving cars to an autopilot is a bad analogy. Aircraft travel in three dimensions, but through mostly empty sky; cars contend with unexpected obstacles, other drivers, and often poor weather at close quarters. Aircraft also have multiple pilots who can take over immediately in case of unexpected events, and those pilots are paying attention to the flight information rather than sleeping or playing a game.
Effectively, the only way to have one fully self-driving car is to make every car on the highway self-driving. You are not going to stop manual drivers from pulling out in front of you, or cutting you off, or driving more slowly than your self-driving car wants to go. So every single car has to be under positive control. And there may well need to be the equivalent of a staffed control tower to make sure traffic flows smoothly.
So Musk and Tesla will continue releasing incremental upgrades, always claiming that the ultimate breakthrough is only a couple of years away. In reality, it won’t happen during my lifetime.
Will We Have to File a Flight Plan? March 26, 2021
Posted by Peter Varhol in aviation, Machine Learning, Technology and Culture, travel.Tags: autonomous vehicles, Scale AI
I have been an airplane pilot, although I haven’t commanded an aircraft in years. Depending on where you were going, you could just hop in the plane and go. But if you were flying into controlled airspace, you generally had to file a flight plan, which defined your intentions. Am I flying through, or landing at a controlled airport? What am I proposing as an altitude and course? And, of course, things may adapt based on actual conditions through the controlled airspace.
I am currently watching the Scale AI Transform conference online. A speaker is talking about autonomous vehicles, and about how we (collectively) have spent billions of dollars without yet deploying those vehicles except in very limited tests.
It occurs to me that we may need to file the equivalent of a flight plan just to get into our car in the future, specifying our destination and the route we intend to travel. Filing an air flight plan is still a mostly manual process today, and I will not be surprised if we someday have to spend time on the computer just to drive to the supermarket.
Autonomous vehicles represent an exceedingly complex technical problem. You need many exacting sensors in the car, real-time processing and decision-making in the car, an unambiguous encoding of the rules of the road, extremely reliable communication between vehicles, and a broker, likely in the cloud, that can manage traffic flow and decisions in real time. We might also need the equivalent of staffed air traffic control to manage traffic.
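To make the flight-plan analogy concrete, here is a minimal sketch of what filing a "drive plan" with a cloud broker might look like. The names (`DrivePlan`, `TrafficBroker`, per-segment capacity) are my own invention for illustration, not any real system's API; a real broker would weigh timing, speed, and rerouting, not just a crude segment count.

```python
from dataclasses import dataclass

@dataclass
class DrivePlan:
    vehicle_id: str
    origin: str
    destination: str
    route: list[str]       # ordered road segments to traverse
    departure_minute: int  # minutes from midnight

class TrafficBroker:
    """Toy cloud broker: denies a plan if any requested segment is full."""
    def __init__(self, capacity_per_segment: int = 2):
        self.capacity = capacity_per_segment
        self.load: dict[str, int] = {}

    def file(self, plan: DrivePlan) -> bool:
        if any(self.load.get(seg, 0) >= self.capacity for seg in plan.route):
            return False  # plan denied; the driver must reroute or wait
        for seg in plan.route:
            self.load[seg] = self.load.get(seg, 0) + 1
        return True

broker = TrafficBroker(capacity_per_segment=1)
p1 = DrivePlan("car-1", "home", "market", ["elm-st", "route-9"], 540)
p2 = DrivePlan("car-2", "home", "office", ["route-9", "main-st"], 545)
print(broker.file(p1))  # True: both segments are free
print(broker.file(p2))  # False: route-9 is already at capacity
```

The point of the sketch is the workflow, not the algorithm: before the wheels turn, intent is declared, checked against everyone else's declared intent, and either approved or bounced back for renegotiation, exactly as with controlled airspace today.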
In most circumstances, it is a much more complex problem than flying an airplane, where the pilot is still ultimately in control and can interact with both human controllers and automated systems to make the best decisions.
When I first started traveling to California, I was nonplussed by the red lights on freeway on-ramps. I came to understand that they were about traffic flow: a very primitive method for spacing out cars slightly better. The potential advantage of autonomous vehicles is far greater, but the sensing, decision-making, and control they require extend far beyond anything like ramp metering.
So I think it’s going to be a while before we get fully autonomous vehicles. We have read stories about how people have accidents because they turn their fate over to self-driving systems. That’s stupid today, and it will likely be a bad choice for years to come.
Will We Have Completely Autonomous Airliners? January 2, 2020
Posted by Peter Varhol in aviation, Machine Learning, Technology and Culture.Tags: AI, automation, aviation, Boeing, Machine Learning
Increasing automation has been the long-term trend in aviation, and two recent stories have added to the debate. First, the new FAA appropriations bill includes a directive to study single-pilot airliners for cargo operations. Second is this story in the Wall Street Journal (paywall), discussing how the Boeing 737 MAX crashes have caused the company to advocate even more strongly for fully autonomous airliners.
I have issues with that. First, Boeing’s reasoning is fallacious. The 737 MAX crashes were not pilot error, but rather design and implementation errors, compounded by inadequate documentation and training. Boeing as a culture apparently still refuses to acknowledge that.
Second, as I have said many times before, automation is great when used in normal operations. When something goes wrong, automation more often than not does the opposite of the right thing, attempting to continue normal operations in an abnormal situation.
As for a single pilot: when things go wrong, a single pilot is likely to focus on the most immediate problem rather than dividing the labor. In an emergency, two experienced heads are better than one. And there are instances, albeit rare, where a pilot becomes incapacitated and a second person is needed.
Boeing is claiming that AI will provide the equivalent of a competent second pilot. That’s not what AI is all about. Despite the ability to learn, a machine learning system must have seen the circumstances of the failure before, and have a solution, or at least an approximation of one, as part of its training. This is not black magic, as Boeing seems to think. It is a straightforward process of data and training.
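The training-data limitation is easy to illustrate. The sketch below, with invented failure scenarios and simplified sensor signatures (not real avionics data), looks up the nearest trained case; a situation unlike anything in the training set yields no recommendation at all, which is exactly when you want a second human in the cockpit.

```python
# Each training example: simplified sensor signature -> trained response.
# Signatures are (airspeed_knots, pitch_degrees, engine_rpm_pct).
TRAINED_FAILURES = {
    (120, -5, 0):  "engine-out: establish best glide, find landing site",
    (95, 15, 100): "approach to stall: lower nose, add power",
    (250, 0, 100): "overspeed: reduce power, raise nose gently",
}

def recommend(signature, max_distance=30.0):
    """Return the trained response for the nearest known failure,
    or None if nothing in training is close enough."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(TRAINED_FAILURES, key=lambda s: dist(s, signature))
    if dist(best, signature) > max_distance:
        return None  # nothing like this was ever trained: no answer
    return TRAINED_FAILURES[best]

print(recommend((118, -4, 5)))   # close to the engine-out case
print(recommend((40, 80, 100)))  # novel situation: None
```

Real systems use far richer models than this nearest-neighbor toy, but the shape of the failure is the same: the system interpolates within its training distribution and has nothing principled to say outside it.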
AI does only what it is trained to do. Boeing says that pilot error is the leading cause of airliner incidents. That is correct, but it is not as simple as that. “Pilot error” is a catch-all term covering a number of different failures: wrong decisions, poor information, and inadequate training, among others. While these can easily be traced back to the pilot, they stem from several different underlying errors and omissions.
So I have my doubts as to whether full automation is possible or even desirable. And the same applies to a single pilot. Under normal operations, it might be a good approach. But life is full of unexpected surprises.