An article asks whether AI can be trained to detect morality. Specifically, the article is titled: "We can train AI to identify good and evil, and then use it to teach us morality". The immediate problem with this article is apparent in the title: the concepts of good and evil are subjective, yet the article talks about morality as if there were some objective morality which everyone would agree on and which an AI could somehow be trained to discover.
Can AI make the world more moral?
When it comes to tackling the complex questions of humanity and morality, can AI make the world more moral?
This question, I think, is more appropriate than the one in the title. I absolutely think AI can make the world more moral. In fact, I would go so far as to say the world cannot be moral, or even approach being moral, without AI (machine learning). The questions are: what kind of AI are we talking about, and who will control it? The problem is we simply do not have an AI which can do this at, say, the level of Google. I do think we can develop a "moral search engine", and in fact I have an idea on how to do just that which I'll reveal in future blog postings.
The article highlights the main problem with current technocratic approaches to AI morality:
There are many conversations around the importance of making AI moral or programming morality into AI. For example, how should a self-driving car handle the terrible choice between hitting two different people on the road? These are interesting questions, but they presuppose that we’ve agreed on a clear moral framework.
We simply do not have a universal framework for morality. My opinion on the self-driving car question is that we should let the owner of the car decide whether to prioritize the car's occupants or to make the utilitarian choice of sacrificing one to save many. This would put the moral question where it belongs: with the owner of the car rather than the manufacturer. To have car manufacturers decide would be to put the responsibility on the makers of the software, who, for better or worse, are no more enlightened about morality than anyone else.
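To make the design choice concrete, here is a minimal sketch of what an owner-selectable collision policy might look like. Everything here is hypothetical: the policy names, the actions, and the risk numbers are invented for illustration, not taken from any real vehicle software.

```python
from enum import Enum

class CollisionPolicy(Enum):
    PROTECT_OCCUPANTS = "protect_occupants"   # prioritize the people in the car
    UTILITARIAN = "minimize_total_harm"       # sacrifice one to save many

def choose_action(policy: CollisionPolicy, risks: dict) -> str:
    """Pick the action matching the owner's chosen policy.

    risks maps action name -> (risk to occupants, total expected harm).
    """
    if policy is CollisionPolicy.PROTECT_OCCUPANTS:
        # Owner chose to prioritize the occupants: minimize their risk.
        return min(risks, key=lambda a: risks[a][0])
    # Utilitarian owner: minimize total expected harm across everyone.
    return min(risks, key=lambda a: risks[a][1])

# Hypothetical scenario with two possible actions.
actions = {
    "swerve": (0.4, 0.5),  # riskier for occupants, less total harm
    "brake":  (0.1, 0.9),  # safer for occupants, more total harm
}
print(choose_action(CollisionPolicy.PROTECT_OCCUPANTS, actions))  # brake
print(choose_action(CollisionPolicy.UTILITARIAN, actions))        # swerve
```

The point of the sketch is simply that the policy is a parameter set by the owner, not a constant hard-coded by the manufacturer.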
Where do I finally reach a point of disagreement with the article writer?
Though some universal maxims exist in most modern cultures (don’t murder, don’t steal, don’t lie), there is no single “perfect” system of morality with which everyone agrees.
But AI could help us create one.
The article writer assumes there is an "us", a "we", without defining who these people are. Do we all believe murder, stealing, etc. are wrong? Apparently not, because war happens, and in war murder and theft are common. In addition, circumstances shape right and wrong: if you're a mother and your children are starving, will you go and steal food? Or do you do "what is right" and starve so as not to violate the moral absolute against stealing?
It's simple: there are no moral absolutes in nature. So having an AI try to create absolute fixed rules is a very naive approach which, in my opinion, is guaranteed to fail. I do think AI can help a person find the solution which is simultaneously best for their self-interest while minimizing harm to others, and that is why I call my approach to this problem a "moral search engine" rather than simply giving the AI examples and having it use some kind of neural net to create solutions. I just don't think that kind of approach will work unless the AI can predict how humans will react to its solutions (public sentiment).
Morality has a public sentiment component
While personal decision making does not have to be concerned with the moral outrage of people around the world, because the decisions are small, bigger decisions do require you to be concerned with how people around the world will react. Human beings are notoriously bad at predicting the reactions or moral outrage of other human beings because our brains can only manage around 150 relationships. This hard limit, known as Dunbar's number, suggests the human brain does not scale, and it is because of this limit (and others) that I make statements such as: human beings can never truly be moral. To put it in short, without AI none of us have a hope in the world of being moral in a hyper connected world.
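The scaling problem is easy to see with a little arithmetic: the number of distinct pairwise relationships among n people is n(n-1)/2, which blows past anything a Dunbar-limited brain can track even for modest n. A quick illustration:

```python
def pairwise_links(n: int) -> int:
    # Number of distinct one-to-one relationships among n people: n choose 2.
    return n * (n - 1) // 2

DUNBAR = 150  # rough cognitive limit on stable human relationships

# Even at the Dunbar limit itself there are already over 11,000 pairwise
# links in the group; at social-network scale the number is astronomical.
for n in (150, 5_000, 1_000_000):
    print(f"{n:>9} people -> {pairwise_links(n):,} pairwise links")
```

A group of just 150 people already contains 11,175 pairwise relationships; 5,000 "friends" implies over 12 million.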
What does a hyper connected world mean?
A hyper connected world is a world where you have to manage potentially thousands of relationships (beyond Dunbar's number). Facebook creates an illusion that allows people to believe they have thousands of "friends"; Twitter creates a similar illusion. The trend toward increasing transparency cannot produce more morality, because even if every person has 5,000 stakeholders watching their decisions, it is not possible for the person being watched to adapt to the opinions, feelings, morals, and norms of 5,000 people from all around the world who may have very different notions of right and wrong. To put it simply, the neocortex cannot handle the moral load which hyper connectivity with transparency inevitably brings.
The article connects morality and law
The article makes another mistake, in my opinion, by trying to connect morality and law. In my opinion, law is amoral: what is or isn't a law has nothing to do with morality. It has nothing to do with current moral sentiment, as there are laws on the books which most people today view as immoral. And it has nothing to do with producing positive consequences for society, because there are laws which produce negative consequences for society (such as mass incarceration, which led to fatherless households, which led to a poverty cycle).
Inherent in this theory is the presumption that the law is an extension of consistent moral principles, especially justice and fairness. By extension, Judge Hercules has the ability to apply a consistent morality of justice and fairness to any question before him. In other words, Hercules has the perfect moral compass.
While I agree with the idea put forth that AI can be part of creating a perfect moral compass, I do not think AI alone can do it. Nor do I think any moral compass can ever be considered perfect or "optimal". It can, however, produce a better moral compass for the vast majority of people on earth. To achieve this, in my opinion the question asker must be capable of asking moral questions of (querying) both the machines and the people. In other words, to build a true moral search engine, the question must be asked of "the global mind", which is like a supercomputer that includes both machine computation and human computation.
What if we could collect data on what each and every person thinks is the right thing to do? And what if we could track those opinions as they evolve over time and from generation to generation? What if we could collect data on what goes into moral decisions and their outcomes? With enough inputs, we could utilize AI to analyze these massive data sets—a monumental, if not Herculean, task—and drive ourselves toward a better system of morality.
On this part I agree. The data analytics approach is, in my opinion, the correct approach to morality. It's a matter of having access to both human computation and machine computation. It is a matter of knowing public sentiment on any particular moral question at any point in time. It's about using AI to process this sentiment and even to use it for predictive analytics. This, in my opinion, is a viable approach for a moral search engine.
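As a rough illustration of the data analytics idea, here is a minimal sketch of tracking how sentiment on one moral question shifts over time. The records, question IDs, and numbers are all invented; a real system would pull from live human responses at massive scale.

```python
from collections import defaultdict
from statistics import mean

# Each record: (question_id, year, verdict), verdict 1 = acceptable, 0 = not.
# Hypothetical responses to one moral question, collected a decade apart.
records = [
    ("q1", 2010, 1), ("q1", 2010, 0), ("q1", 2010, 0),
    ("q1", 2020, 1), ("q1", 2020, 1), ("q1", 2020, 0),
]

def sentiment_over_time(records, question_id):
    """Approval share per year for one moral question."""
    by_year = defaultdict(list)
    for qid, year, verdict in records:
        if qid == question_id:
            by_year[year].append(verdict)
    return {year: mean(vs) for year, vs in sorted(by_year.items())}

print(sentiment_over_time(records, "q1"))  # approval rising from 2010 to 2020
```

Even this toy version captures the two things the article asks for: a snapshot of current sentiment, and its evolution over time, which is the input a predictive model would train on.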
But I do not think this will lead to a unified "system of morality". What is best for me is not going to be what is best for you. What is right for me to do based on my stakeholders, or my crowd, is not going to be what is right for you to do based on your crowd. If we both ask our crowd, depending on who is in our crowd we could get completely different results to the same question.
Conclusion and final thoughts
- In my opinion, there is no objective morality; there is not enough evidence that it exists in nature.
- AI will not be able to find objective morality unless it exists in nature.
- Current moral sentiment is not the same as objective morality. It is, at best, a mere approximation of what will upset most people (or what will upset the fewest).
- A moral search engine requires the ability to query the full global or universal mind, which means human and machine computation (and non-human animal computation, should the technology evolve to permit their participation).
- A moral search engine, in my opinion, is a must-have because evidence suggests the neocortex does not scale. Making the world hyper connected and transparent may work when it's only 100 or so people (a small town), but it does not appear to scale up to millions or billions of people, all of whom have their own opinions on right and wrong.