A few days ago I wrote about Elon Musk's thoughts on how close we are to real self-driving cars.
In a recent interview, Musk said that two years ago, based on the level of technology we had, he thought it would take roughly 10 years until we could reach full autonomy. One year later, when he seriously re-examined the question, he concluded that we could now make it in roughly 5 years. A few months ago, he pondered the same question again and concluded that the technology has advanced so rapidly that full autonomy would be doable in roughly 2 years (24-36 months).
For a long time I've been highly skeptical about these predictions.
It's not that I don't believe in fast technological progress. I do, and I hope it happens; I consider myself a transhumanist.
But often a new technological innovation is such a big step forward that people don't fully understand it. Only after it has been in use for a while do they start to realize it might have unforeseen risks and disadvantages.
The biggest problem for autonomous cars
Autonomous cars are really complex things. How do you prevent a really complex thing from being hacked? Well, it's quite hard.
Just recently it was reported that 100 million Volkswagens can be unlocked with a $40 Arduino device. Similar security holes are found regularly, and probably most of them are never even disclosed to the public.
In an autonomous car everything is controlled by a computer. Updates are automatically sent by the manufacturer to the cars. There is not much the user can do.
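One obvious defense against a hijacked update channel is for the car to refuse any update that isn't cryptographically signed by the manufacturer. The following is a minimal sketch of that idea, not any real manufacturer's system: the function names and the shared key are hypothetical, and a real deployment would use asymmetric signatures (e.g. Ed25519) so the car never holds a signing key.

```python
import hmac
import hashlib

# Hypothetical shared secret for illustration only; a real system
# would use public-key signatures so cars hold no signing secret.
MANUFACTURER_KEY = b"example-key-not-for-production"

def sign_update(firmware: bytes) -> bytes:
    """Manufacturer side: compute a MAC over the firmware image."""
    return hmac.new(MANUFACTURER_KEY, firmware, hashlib.sha256).digest()

def verify_and_install(firmware: bytes, signature: bytes) -> bool:
    """Car side: install the update only if the signature checks out."""
    expected = hmac.new(MANUFACTURER_KEY, firmware, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject a tampered or forged update
    # install_firmware(firmware) would run here (hypothetical helper)
    return True

firmware = b"engine-control v2.1"
sig = sign_update(firmware)
print(verify_and_install(firmware, sig))                    # genuine update
print(verify_and_install(firmware + b"\x00evil", sig))      # tampered update
```

Even this simple check means an attacker who merely breaks into the update server's network path cannot push arbitrary code; they would also need the signing key.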
So, what happens when hackers get in and the system is compromised? It's not just a computer that processes information. It's a computer that controls a really heavy physical object. If that object is used harmfully, the results can be horrible.
Just imagine if a hacker can send updates to thousands of cars. The cars lock their doors, accelerate as fast as they can, and drive straight for as long as they can. Some passengers might bump into a building in front of them and get a few bruises. But some will be on a highway and cause serious crashes.
Just imagine you are in a car that completely ignores your commands and accelerates as fast as it can. If you survive that, do you think you'll ever get into an autonomous car again?
The psychological impact will be huge. People will want to ban self-driving vehicles. Maybe there will even be terrorists who sabotage car factories and blow up autonomous cars because they don't want to see them on their roads.
My prediction
The technology of autonomous cars will advance too fast, and it will backfire. Something really bad will happen, probably due to a hack. After that, there will be many barriers preventing self-driving cars from becoming common. It will take several years to undo the harm this causes.
Solutions
There are a couple of ways to prevent this from happening.
- Take it easy. Getting the security right and identifying all the potential problems might take a while. If cars are manufactured and sold as soon as the technology merely does the job well enough, they will probably ship with serious security holes.
- Open up the source code. To me this has been obvious for a long time, but I haven't seen much discussion about it. Closed source code is always riskier than open. All software in an autonomous car should be required to be open. This is a question of public safety: badly behaving self-driving cars threaten not only the lives of their passengers but also the lives of everyone who happens to be near them.