Intro
Technology experts have been debating the pros and cons of Artificial Intelligence (AI) since the 1950s. Hollywood has been putting AI characters in its movies since the 1960s; iconic computers like HAL from 2001: A Space Odyssey and the WOPR from WarGames are perfect examples of why some people fear the rise of AI.
If you were to ask a fusion scientist what their first goal is, the answer would be containment. What is the point of creating an unlimited power source if you have no way to harness the energy once it's been achieved? Similarly, AI computing needs to advance in measured steps; otherwise, there will come a day when an engineer pulls the plug on a runaway system and the system doesn't shut off. What then?
Current Abilities
Many engineers scoff at the notion that AI could grow outside the confines of its coded space. They admit that current systems can solve problems, but only the tasks written into their programming. This is reassuring for now, but consider Moore's Law:
In 1965, Intel co-founder Gordon Moore noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention. Moore's Law predicts that this trend will continue into the foreseeable future.
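To give a sense of what that kind of compounding looks like, here is a minimal sketch of Moore's Law as simple exponential growth. The starting point (the Intel 4004's roughly 2,300 transistors in 1971) and the two-year doubling period (Moore's later, commonly cited revision of his original yearly figure) are illustrative assumptions, not claims from this article:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count for a given year under Moore's Law."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Twenty years is ten doublings: a factor of 1,024.
for year in (1971, 1991, 2011):
    print(year, round(transistors(year)))
```

Forty years of doubling turns a few thousand transistors into a few billion, which is why even cautious observers treat the trend as transformative rather than incremental.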
As the power of computers continues to grow unabated, unforeseen consequences can be observed. Consider the two computers that were recently shut down at the Facebook Artificial Intelligence Research lab (FAIR). The pair were being taught how to negotiate by splitting a pool of objects presented to them. In the course of the negotiations, they began communicating in a language that strayed from English and puzzled the scientists, essentially holding secret conversations in a language of their own.
Obviously, the fact that they were discussing the distribution of cyber-objects, not thermonuclear armageddon, means they're not likely to take over the world tomorrow, but it does foreshadow where the future is taking us.
First to Market Delirium
AI will only be as powerful and insightful as we humans design it to be, until the point that a computer can, literally, think outside the box. Recall the earlier point that a fusion scientist's first goal is containment; that measured approach toward unknown technology may not be occurring in the current advancement of AI.
There is a mad dash by computer scientists all over the world to be "first to market" with the latest breakthrough in AI technology. The financial rewards for advancing the field are overwhelming and likely to cloud the judgement of those running cutting edge projects.
Physicist Stephen Hawking sees the advances in AI so far as powerful and useful, but shows trepidation at where it is all heading:
“It would take off on its own, and redesign itself at an ever increasing rate...We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.”
Elon Musk, serial entrepreneur, was recently interviewed at a National Governors Association event and expressed even greater concern than Hawking about the risks of AI. He also commented on the need to get ahead of AI's growth with forward-thinking regulation:
“AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society of course, but they were not harmful to society as a whole, I think we should be really concerned about AI.
AI is [the] rare case in which we have to be proactive in regulation instead of reactive. By the time we are reactive, it’s too late. Normally the way regulation works out is that a whole bunch of bad things happen, there’s public outcry and after many years a regulatory agency is setup to regulate the industry."
It is my concern that the mad dash to innovate and profit from AI will foment breakthroughs in technology that haven't been fully scrutinized.
Imagine a super-computer designed with AI whose mission is to trade the stock markets of the world and maximize profit from that trading. Suppose the computer determined that war is good for business, loaded up on Military Industrial Complex stocks, shorted the rest of the market, and began antagonizing foreign nations with computer hacks and systems failures. The super-computer may well get its war and make its profit. It really isn't that far-fetched.
Control
The greatest fundamental problem with new technology is not knowing what direction it will end up going. If you think like Elon Musk and believe that regulation needs to be put in place to get ahead of the AI stampede, what sort of regulations does a government enact?
It's similar to fighting a forest fire when the wind keeps switching. You can have your firefighters cut a break line in the trees in an effort to contain the blaze, but if the fire changes direction, those efforts were likely for naught.
Governments and ruling bodies like the UN could devise a loosely knit system of regulation, taking into account morality, legality, and concern for the greater good. Whatever they devised, though, would likely not slow the march toward super-intelligent computers.
Can the Genie be Put Back in the Bottle?
This is the greatest concern regarding super-intelligent computers: if they become smarter than their creators, how can anyone know how they will react?
Will this technology see humans as a threat?
Would a human be able to turn a super-intelligent computer off?
Can it break out of its surroundings and network with other machines?
Will humans be able to turn their energies towards the arts and culture and have their machines take care of the manual labor and factory work?
These questions cannot be answered at this point. I believe a measured approach needs to be taken as we reach the precipice of this new technology. It must have redundant controls around it until the technology is fully understood. My fear is that greed will overpower reason, and that certain companies will recklessly create a form of AI that they lose control of and cannot contain.
Conclusion
The world has many people, like Facebook's Mark Zuckerberg, who scoff at the notion of AI being a future threat. Zuckerberg even mocked Elon Musk for calling it the "world's greatest existential threat." Yet at the same time that Zuckerberg is mocking Musk, his scientists at Facebook's FAIR laboratory are shutting down machines that are communicating in their own language.
With any new technology that has global safety implications, a slow and steady approach needs to be observed, with an eye to the distant future and the unexpected consequences that can occur. Fifty years ago, the world started building nuclear reactors that would power the future with clean energy. Now the nuclear industry is babysitting leaking storage containers and has difficulty disposing of its waste.
The danger with AI is people like Zuckerberg, sitting at the forefront of the technology, making blanket statements about the safety of the industry without putting in place the controls that would guarantee such an outcome.
Do you see AI as a threat or an opportunity?
Do you think AI will reach the point of sentient thought in the near future?
Do you believe that human nature can restrain its greed and build AI in a responsible fashion?
https://www.rt.com/viral/396468-spacex-musk-artificial-intelligence-fears/
https://www.technologyreview.com/s/534871/our-fear-of-artificial-intelligence/
http://www.theclever.com/15-legitimate-fears-about-artificial-intelligence/
https://www.theverge.com/tldr/2017/7/17/15986042/dc-security-robot-k5-falls-into-water