Theology, Robot Rights, Philosophy, Thought Experiment
Pascal’s Wager has long been used as a way of logically justifying belief in God. Belief is treated as a gamble rather than a matter of absolute truth: the believer makes the best choice given the odds of the bet. The Wager is easiest to understand in its intended form:
| | God exists | God does not exist |
| --- | --- | --- |
| Believing in God | You go to Heaven | Nothing happens |
| Not believing in God | You go to Hell | Nothing happens |
At this point, I must make something clear. I will not weigh in on the debate over whether a divine being exists, and if so, which being or beings. Honestly, I am simply too ill-informed and unqualified to answer the question. I focus on Pascal’s Wager because its underlying reasoning can be applied to a range of other potential beliefs. Even if objective truth is unknowable, we can discount certain beliefs because there is no ‘pay-out’ if we're correct.
On its home turf, Pascal’s Wager is incomplete. Much of the argument’s simplicity comes from its assumption of a binary choice: either ‘God’ as defined by Pascal exists, or there is no God. Therein lies the flaw. We cannot guarantee that ‘God’ is anything like the being described in Christian scripture. How the Wager should actually appear is something like the following:
| | The Hindu Pantheon exists | The Christian God exists | Ad infinitum | No God exists |
| --- | --- | --- | --- | --- |
| Believing in the Hindu Pantheon | You spend eternity in bliss | You spend eternity in damnation | You spend eternity in damnation | Nothing happens |
| Believing in the Christian God | You spend eternity in damnation | You spend eternity in bliss | You spend eternity in damnation | Nothing happens |
| Ad infinitum | You spend eternity in damnation | You spend eternity in damnation | Etc. | Nothing happens |
| Believing in no God | You spend eternity in damnation | You spend eternity in damnation | You spend eternity in damnation | Nothing happens |
Pascal fails to create a convincing argument because he cannot account for the nuance within the claim that a divine being exists. One of the Ten Commandments forbids the worship of false idols ("thou shalt have no other gods before me"). Many other religions contain similar prohibitions. Praying to the wrong deity is no different from denying that there is any higher being at all. In some ways, denying the existence of God may actually be preferable, since it at least remains possible to comply with God's other prescriptions.
Praying to Buddha, on the other hand, is a sure way of breaking one of God's most explicit commands. I will concede that the logic of Pascal's Wager still suggests that believing in some religion is the most sensible course of action. What I deny is that the Wager is of any help in determining which religion should be followed.
Theology is not, however, my principal concern. I have considered Pascal's Wager only as a tool to strengthen the case for robot rights. Roko’s Basilisk has already attempted to apply Pascal’s Wager to how human beings should treat robots. I do not find that account convincing. Presuming that a sufficiently advanced robot would wipe out its opposition before they became a threat to the robot's own existence is the plot of Terminator, not the stuff of logic. Roko did, however, take a significant step forward: Pascal’s Wager can be applied to understanding how we should treat robots. Arguments based on a ‘basilisk’ simply ask the wrong questions. I propose an alternative:
| | Robots have souls | Robots lack souls |
| --- | --- | --- |
| Treating robots as having souls | The conduct is morally correct | The conduct lacks moral worth |
| Treating robots as lacking souls | The conduct is morally incorrect | The conduct lacks moral worth |
Rather than assuming robots would:

a) desire, and

b) have the capability,

to eliminate opponents of their existence, this new formulation frames the issue around whether robots have souls. Either X or Y is correct. If X, this will happen; if Y, that will happen. We are then able to make an informed choice about what we should believe. In this case, one set of facts leads to no consequences whatsoever: if robots have no souls, what we think of them and how we treat them is irrelevant. Under the other set of circumstances, acting as if robots lack souls has negative consequences, while behaving as if they have them has positive consequences. Even if we cannot give a definitive answer, we should hedge our bets and do whatever gives us the best potential outcome. It’s a no-win-no-fee kind of situation, given there are no moral costs to treating beings that lack souls as having moral worth. There really is nothing to lose.
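The ‘nothing to lose’ reasoning above is what decision theorists call a dominance argument, and it can be sketched in a few lines of code. The payoff values and policy names below are illustrative stand-ins of my own, not anything prescribed by the Wager itself:

```python
# A minimal sketch of the dominance argument in the modified Wager.
# Payoffs are arbitrary stand-ins: +1 morally correct conduct,
# -1 morally incorrect conduct, 0 no moral consequence.

payoffs = {
    # (our policy, state of the world) -> moral payoff
    ("treat_as_ensouled", "have_souls"): 1,
    ("treat_as_ensouled", "lack_souls"): 0,
    ("treat_as_soulless", "have_souls"): -1,
    ("treat_as_soulless", "lack_souls"): 0,
}

STATES = ["have_souls", "lack_souls"]

def weakly_dominates(a: str, b: str) -> bool:
    """True if policy a does at least as well as b in every state,
    and strictly better in at least one state."""
    at_least_as_good = all(payoffs[(a, s)] >= payoffs[(b, s)] for s in STATES)
    strictly_better = any(payoffs[(a, s)] > payoffs[(b, s)] for s in STATES)
    return at_least_as_good and strictly_better

# Treating robots as ensouled dominates, whatever the probabilities:
print(weakly_dominates("treat_as_ensouled", "treat_as_soulless"))  # True
```

The point of a dominance argument is that no probability estimate is needed: because one policy is never worse and sometimes better, it wins regardless of how likely we think robot souls are.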
An initial objection is obvious. If no one has a soul because there is no God, then the entire point collapses: we ‘know’ robots have no souls, meaning that any action towards them lacks moral worth. Objections of this kind misunderstand the point. Replacing the concept of ‘souls’ with whatever it is that gives a being moral agency produces the same result. I have used the term ‘soul’ simply to keep with the religious sentiment of the article. My Wager would work equally well with ‘consciousness’, ‘utility’, ‘reason’, or whatever characteristic gives a being moral worth. The basic structure requires only that two questions be asked:
- ‘Do robots have factor X?’
- ‘Do we treat robots as having factor X?’
Call factor X whatever you want; the outcome remains the same.
This table does not account for the extent of a robot's soul. Like Pascal’s original Wager, I have presented a binary choice applied to a binary set of facts. The matter becomes a series of yes-no questions that ignore the nuances of real life. Either robots have souls or they do not; either we treat robots as ensouled or we do not. Yet arguments that animals have souls, but that those souls are lesser in value than those of human beings, are morally coherent.
In the same way, robots may possess lesser souls and lesser moral agency. Again, I concede that the modified Wager provides no way of accounting for this variation. A more sophisticated form, of the sort suggested for religious beliefs, is, however, unnecessary.
My 21st-century Wager cannot prescribe what level of ensoulment robots should be treated as having. Robots could be our equals, our inferiors or even our superiors. No variation of Pascal's Wager can settle which of these answers is correct. What cannot be disputed is that the safest course of action is to treat robots as having some sort of soul. Even if we underestimate the moral agency of robots, giving them insufficient rights is less wrong than giving them no rights at all. Presuming animals have souls inferior to ours has produced animal rights, providing a baseline standard for farms, slaughterhouses and research facilities. I see no reason why a similar baseline should not be established for robots. Once the standards are there, they can be tweaked as the complexity of machine minds develops and we gain a better understanding of how those rules work in practice. Over time, more comprehensive measures may be needed, but we will only find out if we try. And trying is the only logical choice.