When AI is truly achieved, what do we, humans, fear the most? What should we expect from a raw, powerful brain?
What we fear the most is that it doesn't think like us. In what aspect, specifically?
Morality.
That is, how does a robot know, as we think we know, that killing is, generally, evil? In other words, how does it know that killing is objectively evil?
Hell, how does a human know?
Many humans consider Evil to be subjective, but will they grant that same subjectivity to robots? Or do they call Evil subjective only when most people happen to agree on what Evil is?
Will they promote worship of an AI prophet who preaches that killing is objectively evil?