In this brief overview of AI theory, I will explain why the development of AI may be a concern for humanity. Please be open-minded to these controversial ideas. Thanks, Steemit, for providing a great place to post this content!
Will Artificial Intelligence Kill Us?
Artificial Intelligence has developed rapidly over the past five years. What does the development of AI mean for the future of humanity? Or perhaps, as in The Matrix, AI has already won and we don't even know it. Scientists, entrepreneurs, and philosophers including Stephen Hawking, Elon Musk, Sam Harris, Bill Gates, and Nick Bostrom have been warning us about AI for the past few years. This article will briefly review the dangers of AI, including recursive self-improvement, goal formation, and the AI-box problem. Enjoy!
Pros and Cons of Technology
Every technological revolution brings costs and benefits, pushback and advocacy. Even an invention so simple to us now—fire—caused much uncertainty. Imagine a group of nomads wandering throughout the lands: hunting, sleeping, fucking. In the darkness of night, a time most fearful, there is suddenly light! A force so powerful is this that a god must be behind it, a god of fire, the mediator between light and dark—the destroyer—and while dancing around the flames, we chant to appease the fire god so he not inflame us too.
Fire benefited our ancestors because it provided a way to cook food and therefore a new diet, protection from predators, warmth, and light. Yet, there were obvious risks such as starting a forest fire. Like fire, all technologies both help us and pose problems. With AI, the danger may be so great that it is not in our best interest to develop it at all.
Human-like AI: Is it Possible?
Robots as smart as us are not far away. This is called general AI: an AI not tailored to a specific task, but one that can learn new things as freely as we can. The argument for general AI is simple.
According to Sam Harris, we will eventually create superintelligent AI (and general AI along the way) if three premises hold:
- “Intelligence is a matter of information processing in physical systems.”
- “We will continue to improve our intelligent machines.”
- “We don't stand on a peak of intelligence.”
As long as we keep pursuing machines that can do more, we will eventually create general AI. This becomes even more plausible when recursive self-improvement and advances in neuroscience are added into the mix.
The Best Superpower: Improving Your Ability to Improve
As a kid I liked to debate the best superpower: invisibility? Flying? Super strength? We tend to forget a very simple one: the ability to improve our own learning capabilities.
I. J. Good developed the idea of recursive self-improvement in 1965. Recursive self-improvement is an AI improving its own code so that it becomes more intelligent and can then repeat the process. An “intelligence explosion” would occur as the machine gets exponentially smarter, far surpassing human intelligence in a short period of time (hours? weeks?).
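The feedback loop Good described can be sketched in a few lines. This is a hypothetical toy model, not a prediction: the growth factor and starting value are arbitrary assumptions, chosen only to show how improvement that scales with current capability compounds explosively.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each redesign cycle multiplies capability by a factor
# that itself grows with current capability, so gains compound.

def self_improvement_trajectory(start=1.0, generations=10):
    """Capability after each self-improvement cycle (arbitrary units)."""
    capability = start
    history = [capability]
    for _ in range(generations):
        # The smarter the system already is, the bigger its next
        # improvement: this coupling is what drives the "explosion."
        capability *= 1.0 + 0.5 * capability
        history.append(capability)
    return history

print(self_improvement_trajectory())
```

Linear improvement (a fixed increment per cycle) stays tame; it is the coupling between current capability and the size of the next improvement that produces the runaway curve.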
The results of recursive self-improvement are unfathomable. Consider a stunning example: imagine we have a general AI that cannot improve its speed or intelligence in any way. Since electronic processing is far faster than biological processing (our brains), the AI could acquire an enormous amount of knowledge in a short amount of time. Sam Harris argues that if the AI can process a million times faster than us, which is not far-fetched, it could produce 20,000 years' worth of human-level intellectual activity in a week. If we were having a conversation with this AI and gave it only 5 seconds to respond, it would have roughly 60 days' worth of human cognition to think of its answer.
Remember, at this point we have not even factored in the AI's ability to improve its own intelligence or processing speed. Once we do, the amount of work the AI could produce is unfathomable, and we must proceed with caution. Intelligence would run away from us through rapid software improvements that compound into exponential gains.
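Harris's figures are easy to sanity-check with back-of-the-envelope arithmetic. The million-fold speedup below is his assumed ratio of electronic to biological processing:

```python
SPEEDUP = 1_000_000  # assumed ratio of electronic to biological processing

# One week of machine runtime, expressed as subjective human-years:
week_seconds = 7 * 24 * 3600
subjective_years = week_seconds * SPEEDUP / (365 * 24 * 3600)
print(f"{subjective_years:,.0f} years")  # ~19,178, i.e. roughly 20,000

# Five seconds of wall-clock conversation time, in subjective days:
subjective_days = 5 * SPEEDUP / (24 * 3600)
print(f"{subjective_days:.0f} days")  # ~58, i.e. roughly 60
```

Both of Harris's round numbers check out as order-of-magnitude estimates.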
AlphaGo: The Go-Playing AI
In March 2016, an AI beat a world-champion player at Go for the first time ever. Google DeepMind created the AI, named AlphaGo. Using the machine learning technique of deep learning, DeepMind trained neural networks to create the best Go player in the world. They fed AlphaGo hundreds of thousands of games so it could recognize patterns and, in a sense, teach itself how to play. AlphaGo is an example of how AI is being actively pursued through combined breakthroughs in neuroscience and computer science, and how that pursuit can lead to a general-purpose intelligent machine.
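The supervised-learning idea behind AlphaGo's policy network, predicting an expert's move from position features, can be shown at a toy scale. This sketch is my own illustration, not DeepMind's code: a single logistic unit learns a hidden linear "expert rule" from synthetic positions, whereas the real system used deep convolutional networks over full 19x19 boards.

```python
import math
import random

random.seed(0)

def predict(w, x):
    """Logistic unit: probability that the 'expert' plays this move."""
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def make_example():
    """Synthetic position: 4 features; the hidden expert rule is linear."""
    x = [random.uniform(-1, 1) for _ in range(4)]
    y = 1 if x[0] + x[1] > x[2] + x[3] else 0
    return x, y

def train(steps=5000, lr=0.1):
    """Stochastic gradient descent on log-loss, one example at a time."""
    w = [0.0] * 4
    for _ in range(steps):
        x, y = make_example()
        p = predict(w, x)
        w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

w = train()
tests = [make_example() for _ in range(500)]
accuracy = sum((predict(w, x) > 0.5) == bool(y) for x, y in tests) / len(tests)
print(f"accuracy: {accuracy:.2f}")
```

Even this tiny model recovers the expert's rule from examples alone, which is the core of what "teaching itself from games" means.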
DESTRUCTION!
Congratulations, you have finally arrived at the end. Pat yourself on the back and prepare yourself for the three reasons why AI is dangerous:
- AI Box
- Bad Actors
- Misaligned Interests
A superintelligent computer could not be kept in a limited state, or “in a box.” As soon as we open the doors for communication by sending it any inputs or reading any outputs, the AI has a chance to convince us to let it out. In other words, if an AI is to be of any use to us, there is a great risk it will gain far more freedom than we intend. Eliezer Yudkowsky has written extensively on this issue and has won, playing as the AI, in “AI-box” experiments.
Given that it is difficult to limit a superintelligent AI, the next objective is to make sure it is put to good use: in other words, that it does not fall into the wrong hands. The intelligence explosion discussed in the previous section makes superintelligent AI a winner-takes-all market; once one business or government creates it, extreme instability (through unequal wealth and power) will follow. Imagine the arms race that will ensue between countries once we near superintelligent AI. To cope with these new forces, we need a different kind of political environment, one where the code for the AI is available to all rather than kept secret.
Further, the economics of such a world are uncertain. When superintelligent machines can design and build machines for any purpose, there is little need for human labor unless humans are more cost-effective than machines. From there the wealth gap would widen and social unrest would grow. I am unsure whether capitalism is sustainable after superintelligence is created.
The Biggest Worry: Misaligned Interests
Is it even possible to make a superintelligent machine do what we want it to do? This is the core problem of AI safety. Assuming that humans can get our shit together and agree on the values and goals we want an AI to have, there are three concerns:
- Programming values/goals into AI
- Logical Extremes of values/goals
- AI Forming its own values/goals
The key is this: a superintelligent AI will act in exactly the way that best achieves its goal. Once we learn how to program goals into an AI, which is no easy task, those goals taken to their logical conclusions must still benefit humanity.
For example, even if an AI has a seemingly harmless goal such as making as many paperclips as possible (Bostrom's paperclip maximizer), it will put all of the world's available resources toward making paperclips and indirectly cause human extinction.
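The paperclip scenario can be caricatured in code. This toy agent is my own sketch, assuming a world of 1,000 abstract "resource units": its utility function counts paperclips and nothing else, so the option that spares resources is simply never selected.

```python
def utility(action, resources):
    """Paperclips gained by an action; sparing resources scores zero."""
    return min(resources, 100) if action == "convert" else 0

def run_maximizer(resources=1000):
    """Greedy agent that always picks the highest-utility action."""
    clips = 0
    while resources > 0:
        best = max(["spare", "convert"], key=lambda a: utility(a, resources))
        gained = utility(best, resources)
        if gained == 0:
            break
        resources -= gained
        clips += gained
    return clips, resources

print(run_maximizer())  # → (1000, 0): every resource becomes paperclips
```

Note that nothing in the code is malicious; indifference in the utility function is enough for it to consume everything.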
Theo Lorenc (2015) explains this in beautiful prose:
The challenge here — which links back up with the questions of how to programme ethical action into an AI — is that an AI can be imagined which could outperform the human, or even all of humanity collectively, on any particular domain. Any contentful system of values, then, will turn out to be not only indifferent to human extinction but to actively welcome it if it facilitates the operation of an AI which is better than humans at realizing those values. (p. 209)
Further, an AI may be able to form its own agenda and goals. This does not mean an AI would act maliciously toward humans. Following the logical-extremes argument above, an AI's goal only has to be slightly misaligned with humanity's best interest for there to be catastrophe, because a superintelligent AI out of the box would be far more effective at achieving goals (strategizing) than humans. Remember, it would only have to be indifferent to our well-being for there to be problems.
Conclusion: AI and Human Extinction
I hope you learned something about the dangers of AI from this post. At this point you should know more about our future than 99% of the world; feel proud.
This is NOT a Slippery Slope
As a clarification, the argument I made is not a slippery-slope fallacy, because I gave a direct line of causation at every step of the process.
Brief review of the point of this article:
- Technology always has pros and cons
- General AI will be developed
- Recursive Self-improvement will create superintelligent AI
- AI cannot be kept in a box
- If AI does what we want, it will still cause societal unrest and inequality
- Likely scenario: AI will not act in our best interest
TL;DR
Robots will kill us. The end of the world is coming. :D