Researchers were able to fool AI systems that recognize traffic signs through minor physical manipulations. By placing stickers on signs, they made subtle changes sufficient to cause autonomous systems to misclassify or misread the sign. Such modifications can be easily camouflaged or go entirely unnoticed by human observers. This type of academic study is called adversarial research, as it is intended to expose weaknesses that attackers might use to abuse or undermine the capabilities of AI systems.
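For a sense of how little manipulation is needed, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one common technique from this body of adversarial research (not necessarily the method used in the sticker study). It assumes PyTorch, a hypothetical pretrained classifier `model`, and a correctly classified sign image `image` with true label `label`.

```python
# A minimal FGSM sketch, assuming PyTorch and a hypothetical pretrained
# classifier `model`. `image` and `label` stand in for a correctly
# classified sign photo and its true class index.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` nudged to maximize the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep valid pixel range
```

The `epsilon` bound keeps every per-pixel change small, which is why such perturbations can be nearly invisible to a human driver while still flipping the model's prediction, much like a well-placed sticker.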
The full article can be found here: Researchers hack a self-driving car by putting stickers on street signs
Let’s not allow our imaginations to run wild prematurely. There are two important points to consider.
First, Artificial Intelligence (AI), as incredible as it is, is just a tool. It can be defeated, misused, and manipulated like every other tool in existence. Second, from a cybersecurity perspective, just because Deep Learning inputs can be manipulated with malicious intent does not mean such attacks will actually occur in the real world. Is this a vulnerability? Yes. But like most vulnerabilities, unless exploiting it advances the objectives of an attacker, it won't be widely abused. I think there is a greater chance that autonomous cars will be compromised with custom Ransomware, which would deliver direct financial benefits to the criminals, than by modified street signs.
That said, it is important to continue such adversarial research, to see how far this rabbit hole goes and whether there are use cases compelling enough for cyber-threats to pursue.
Image Source: https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/
Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit for insights into what is going on in cybersecurity.