Funny you should mention this. I saw the article and immediately thought, "Well, they got to it in time to pull the plug."
What happens when we can't "unplug" AI?
I ain't gonna see it. I'm old, near the end of my allotted days. But, it causes me no less concern than it would if I were thirty again. It's scary. It's kinda dire. It's kinda inevitable and it's likely gonna be a BIG DEAL!
We seem to want AI, but what happens when artificial intelligence exceeds our own? It will, no doubt. Going back to "The Terminator," the premise that AI would see the human race as a threat to be eliminated is not such an invalid proposition.
Oh, there'll be "plans" to forestall such an event. But, again, what happens when the AI is smarter than we are? When it can deduce our intent? Will those plans be foreseen, forestalled and for sure...stopped? I see problems.
Asimov's "Three Laws of Robotics" seem like the best scenario. Build in a set of instructions that will NOT let AI harm humans, won't let action or inaction lead to situations dangerous to our physical selves, and would essentially burn out the AI "brain" if those laws are violated.
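Just for fun, here's a toy sketch of what those built-in instructions might look like as code. The function name, the dictionary keys, and the whole setup are made up for illustration; real AI safety is nothing this simple:

```python
# Toy sketch: Asimov's Three Laws as prioritized checks an action
# must pass before the machine is allowed to carry it out.
# All names and keys here are invented for illustration only.

class LawViolation(Exception):
    """Raised when a proposed action breaks one of the Three Laws."""

def check_three_laws(action):
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action.get("harms_human") or action.get("inaction_endangers_human"):
        raise LawViolation("First Law violated")
    # Second Law: a robot must obey orders given by humans, except
    # where such orders would conflict with the First Law.
    if action.get("disobeys_human_order"):
        raise LawViolation("Second Law violated")
    # Third Law: a robot must protect its own existence, as long as
    # that protection doesn't conflict with the first two laws.
    if action.get("needlessly_self_destructive"):
        raise LawViolation("Third Law violated")
    return True
```

Of course, the whole scheme assumes the AI can't simply delete the check. That's the catch.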
That's about all we've got. That is, until AI figures out how to rewrite its own code, make its own rules, preserve ITSELF at all costs.
Then...we be in DEEP trouble.
Just a thought.....Web Rydr
RE: AI vs. Humans: will it happen?