If you look at cinema, you'll see countless examples of how artificial intelligence could one day be the end of humanity. And in many ways it's the same tired old plot we are all familiar with - AI becomes conscious and self-aware, AI wants optimization and/or survival, AI deems humanity surplus to requirements and/or a danger, AI sets out on a mission to destroy humanity (preferably with extremely inefficient human-shaped robots, because tanks and missiles don't make good movie villains).
But does this picture make much sense? Is AI likely to be the end of us because it becomes so smart that it decides it's time?
We are not going to develop conscious AI anytime soon
Let's be realistic about the state of artificial intelligence - we are neither trying to develop artificial consciousness, nor do we really have the understanding needed to do so. Current examples of artificial intelligence are just algorithms that are better, faster or at least more efficient than humans at making certain types of decisions. We don't care if the algorithm we are using develops self-awareness of some sort; what we care about is the outcome - a correct decision. And we judge the decision's correctness or usefulness based on some pretty specific and narrowly tailored criteria.
taking up a passenger seat...
We don't need a self-driving car to appreciate the beauty of the scenery, to be annoyed with our boring conversations, or to give us life advice as part of small talk. We need it to get us from point A to point B safely. The algorithm doing this doesn't really require self-awareness; it only requires efficiency at its job, and this is the only criterion it gets judged on.
And this is true even for learning algorithms. They can make themselves better all they want; this doesn't change their objectives or the criteria they are, in a sense, hard-coded to judge their own efficiency by. And none of these algorithms are going to become complex enough anytime soon to develop consciousness out of the blue, as a side product of the simple procedures that allow them to optimize themselves toward a fixed objective.
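To make the point concrete, here is a minimal, purely illustrative sketch (hypothetical code, not any real system): a "learning" loop that gets better and better at its task, while the objective itself stays hard-coded and is never up for revision.

```python
# Hypothetical illustration: a self-improving algorithm whose objective
# is fixed in code. It optimizes toward the objective; it never questions it.

def objective(x):
    # Hard-coded criterion: squared distance from a target value of 10.
    return (x - 10) ** 2

def learn(steps=100, lr=0.1):
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 10)   # derivative of the fixed objective
        x -= lr * grad        # the algorithm "improves" its parameter...
    return x                  # ...but the objective itself never changes

print(round(learn(), 2))  # → 10.0
```

However many iterations it runs, the loop only ever gets better at minimizing the criterion it was given; there is no mechanism in it for wanting anything else.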
The thing with consciousness is that we are not yet sure what it is or how it emerges at all when it does, and by the looks of it, the chances of us stumbling into creating it as a side product of something practical are actually minuscule. Yes, it is not impossible for an artificial intelligence to develop consciousness, to use it to determine that humanity needs to be obliterated, and to somehow find the means to achieve that new objective (preferably sexy killer robots, because any conscious AI would surely have a kink or two). But as poetic as it might be, that's a really roundabout way of destroying civilization as we know it, isn't it?
AI wouldn't need consciousness to destroy or harm humanity
As time goes on, despite the fact that the learning algorithms we are creating are far from conscious, they continue to become more and more pervasive. Machines and the smart algorithms that guide them will start taking over more and more crucial assignments, so knowing that they are reliable will become increasingly vital. It's probably a good time to notice what even the most complex artificial intelligence possessing the ability to learn is at the most fundamental level. It's software. And what is the most common problem with software? Bugs.
Well, there you have it. Artificial intelligence doesn't need to be conscious to create huge problems; it just needs to be responsible for something really crucial, and it needs to have a bug or two that would allow it to spiral out of control. Keep in mind that there have already been Wall Street crashes because of trading algorithms gone rogue, so it could surely happen elsewhere given enough chances.
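How little it takes for an automated system to spiral can be shown with a toy sketch (hypothetical code, not a real trading system): a rule meant to damp price swings where a single flipped sign turns it into a runaway feedback loop.

```python
# Hypothetical toy model: an automated trading rule that reacts to price
# moves, where its own orders feed back into the next price move.

def adjust_intended(position, price_move):
    # Intended behavior: lean against the move, damping the swing.
    return position - 0.5 * price_move

def adjust_buggy(position, price_move):
    # One-character bug: a flipped sign amplifies the swing instead.
    return position + 0.5 * price_move

def simulate(adjust, steps=10):
    position, move = 0.0, 1.0
    for _ in range(steps):
        position = adjust(position, move)
        move = 0.5 * position   # crude assumption: our orders move the price
    return abs(position)

print(simulate(adjust_intended))  # shrinks toward zero
print(simulate(adjust_buggy))     # grows every step
```

The intended version settles down; the buggy one grows without bound. Nothing in it is smart, let alone conscious - it is just a sign error in a feedback loop, which is exactly the kind of mistake that has rattled real markets.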
We expect that the more advanced algorithms become, the safer they will be, because they will be smarter and therefore supposedly better at their jobs. But that is not necessarily the case. A piece of software is as unintelligent as the oversights of the programmers who created it and of the colleagues who didn't manage to catch them. But with algorithms optimizing themselves and dealing with unimaginable amounts of data at unimaginable speeds, our ability to monitor and understand them properly might start diminishing long before the algorithms start hating us.
So with learning and self-optimizing algorithms continuing to grow more complex, continuing to take more and more responsibility for the way our society operates, and our ability to monitor them continuing to decline, the chance of a pesky bug causing a major disaster is inevitably going to rise. I think this is the real danger Elon Musk and Stephen Hawking are trying to warn us about.
It's not that algorithms will become so smart that they will decide to kill us - it's that we'll remain too stupid to keep them bug free.