My wife did her master's thesis on artificial intelligence usage in business environments, which is pretty cool and highly surprising if you knew her, considering her near-complete disinterest in technology.
I, however, find AI an interesting topic, and I think it is going to play a vital role in how we come together as a global community, as it means we are likely going to have to start programming human moral codes into these systems.
This means we are going to have to continually find ways to consolidate our positions and agree upon some fundamental moral rights that the AI must adhere to for everyone, as it will likely not be limited by country borders. Often, the differences between us create large divides in judgement, as the heuristics we build within our cultures are very lazy.
A long time ago, I had a client whose husband was strongly against gay marriage, until he gave it some thought. Years earlier he had gone through a bitter divorce, and even though the mother was at the time unfit to care for their daughter, she was given custody, as is the common practice in Finland. There is a reason for this on average, but it is insensitive to the positions of individuals. What he realized was that once there is gay marriage, there is also going to be gay divorce, and instead of "the mother takes custody", the courts are going to have to find a way to evaluate the abilities and fitness of the parents. He voted for gay marriage.
When it comes to coding for group and public interaction, it is essentially a situation-based heuristic: if this happens, do that. Even though AI will eventually work much more autonomously, having a core foundation of rules to begin with is likely vital for an effective result across populations. Getting us to agree to anything of the sort isn't likely to be an easy process, however, for as we can see here, people can't agree on even basic trials that are reversible in nature.
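As a toy illustration of that "if this happens, do that" core (every situation and action name here is hypothetical, not a real proposal), such a foundation could start as nothing more than an explicit rule table consulted before any learned, autonomous judgement:

```python
# A toy sketch of a situation-based moral heuristic: an explicit
# "if this happens, do that" rule table checked before any
# autonomous decision. All situations and actions are invented.

CORE_RULES = {
    "human_in_danger": "prioritize_human_safety",
    "conflicting_group_interests": "escalate_to_human_review",
    "irreversible_outcome": "require_consensus",
}

def decide(situation: str) -> str:
    """Return the mandated action, or defer when no core rule applies."""
    return CORE_RULES.get(situation, "defer_to_learned_policy")

print(decide("human_in_danger"))   # prioritize_human_safety
print(decide("novel_situation"))   # defer_to_learned_policy
```

The hard part, of course, is not the lookup but agreeing on what goes into the table in the first place.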
One day, though, the AIs are going to be effective enough that they will likely be able to create their own heuristics objectively, finding a line of best fit that maximizes experience for as many as possible. But what are the chances of it helping all? What happens if you are in the group that doesn't benefit from a decision that is much better on average for a wider selection of people?
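To make that tension concrete, here is a small numerical sketch (the group sizes and utility figures are invented purely for illustration): a policy can be clearly better on average across a population while still leaving one group worse off than the alternative.

```python
# Invented group sizes (percent of population) and utility scores
# for three groups under two hypothetical policies.
populations = {"a": 70, "b": 25, "c": 5}
utility = {
    "policy_A": {"a": 5, "b": 5, "c": 5},
    "policy_B": {"a": 8, "b": 7, "c": 2},
}

def mean_utility(policy: str) -> float:
    """Population-weighted average utility of a policy."""
    return sum(populations[g] * utility[policy][g] for g in populations) / 100

print(mean_utility("policy_A"))  # 5.0
print(mean_utility("policy_B"))  # 7.45
# policy_B wins on average, yet group "c" drops from 5 to 2.
```

A heuristic that only looks at the average would pick policy_B every time, and group "c" would never be heard.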
It is an interesting problem, I think, and one that is hard to approach from our own experience, as we are already pre-selected by chance to favor one position over another, at no choice or fault of our own.
We can't choose where, when, or to whom we are born, nor how we look; we can't choose our height or intelligence, nor the food we are fed as toddlers, but they all have an effect on our future. If none of us were born yet and we were tasked with creating a moral world that would best suit us, but had no idea of what we would be born as later, what would we decide upon?
If we also had a view of the current world and the problems faced across race, sex, economic difference, intelligence, educational opportunity and so on, it would be a very complex problem to solve, considering we would still have no choice in which position we would be born into. We would obviously still want the best chance of maximizing our position, which would mean increasing the chances of every position we could end up in.
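This unborn-chooser setup can itself be sketched as a tiny decision rule (the birth probabilities and outcome numbers are again invented): if your position is assigned at random, one strategy picks the world with the best expected outcome across all positions you could land in, while a more cautious one picks the world with the best worst position.

```python
# Hypothetical probabilities of being born into each group, and the
# outcome each group gets under two invented world designs.
birth_prob = {"a": 0.7, "b": 0.25, "c": 0.05}
worlds = {
    "world_X": {"a": 8, "b": 7, "c": 2},   # great for most, bad for "c"
    "world_Y": {"a": 6, "b": 6, "c": 6},   # equal for everyone
}

def expected(world: str) -> float:
    """Average outcome weighted by the chance of each birth position."""
    return sum(birth_prob[g] * worlds[world][g] for g in birth_prob)

def worst_case(world: str) -> int:
    """Outcome of the worst-off position in this world."""
    return min(worlds[world].values())

best_expected = max(worlds, key=expected)    # world_X: 7.45 vs 6.0
best_maximin = max(worlds, key=worst_case)   # world_Y: 6 vs 2
print(best_expected, best_maximin)
```

The gap between those two answers is essentially the argument of this whole post: whether we would gamble on the average or insure the floor, when we don't know where we will stand.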
We of course don't have this ability, but the exercise does give an understanding of why arguing from where we currently stand, given our life position and history, is never going to end up being what is best for all, as we will argue for what benefits where we ourselves stand.
If, hypothetically, after making the heuristics we were guaranteed to lose our current self and be randomized into another, would our decision-making and stances change? Would we try to maximize the position of our race if tomorrow we were likely to wake up as another?
It might be an interesting thought experiment to consider what could happen if, after government elections, the same randomization took place and we were reborn into a different skin with different experience, still knowing that the decisions made would govern the next period of time. Would more people vote? Would they vote for parties that maximized some over others, or would they look to support a central position that tries to improve all positions?
I don't believe in reincarnation, as it doesn't come with the learning of the past life itself, so it is useless. If we truly knew how we ended up here, then it might be of interest, but karma is unhelpful if we don't know the length and details of the pipeline. Instant karma is useful, though, as the cause and effect is clear.
I think that as we go forward and deeper into the world of artificial intelligence, we are going to have to pull back somewhat from pure individualism and embrace individualism with civic consciousness, including toward those we do not know and will never meet, as all decisions made will affect everyone else in the system much more directly. We will have to question our cultures and beliefs and work out some line of best moral fit to build a framework that maximizes the empowerment of individuals and minimizes suffering.
I don't expect it will be an easy or smooth pathway, but it would be a valuable one as it improves over time. It is possible that in some future the AI itself is so advanced and sensitive to our needs that it could create heuristics that maximize flourishing without us even knowing it is doing so. Would it tell us? Of course, it could also choose to do something altogether different and slowly mold us into a form that maximizes itself.
Either way, something to think about.
Taraz
[ Gen1: Hive ]