Artificial Intelligence, Machine Morality, Religion, and Social Harmony for Our Unknown Future

In March 2015, Nick Bostrom gave a fascinating high-level talk about the potential dangers of artificial intelligence and what it means for the human race, warning that the precautions we take in developing such a system are of vital importance if we want to preserve our humanity.

Dangerous AI (these days also referred to as machine superintelligence) is so prevalent in entertainment that our first reaction to such a system is predominantly negative: Terminator, 2001: A Space Odyssey, The Matrix, cartoons such as Wall-E, and my favorite science fiction film of this year so far, Ex Machina. Here’s a list of some of the most popular movies with killer AI.

None of these scenarios are completely unfounded fiction, either: even famous scientists and researchers have signed an open letter recognizing the existential risks of AI.

But I digress. In his TED Talk, Nick Bostrom discusses the importance of forward thinking when it comes to developing a system that, with the right capabilities and tools in place, could grow more intelligent than human beings. However, he provides few clues as to what framework would help instill human values and a sense of morality into these machines.

This is the key point: We build machines to serve OUR interests. Whether to solve problems or simply for the thrill of invention, technology is not built with altruism in mind. Technology is built for human interests. If we are to avoid a doomsday scenario, we need a robot that understands not only complex ideas and language but also human values, in such a way that it will serve human interests.

The AIs in the films mentioned above control human beings the same way people control human beings: through force, manipulation, or a combination of both. How do we find harmony with something more capable than we are at our best qualities? Can we trust a system so powerful it can figure out how to upgrade itself without human intervention? How do we come to terms with something we could rely on without simply becoming its plaything?

If machines can think and calculate faster than a person could ever hope to, we need a system that ensures balance and trust.

So, what are our options with a machine superintelligence?

We could stop developing AI – This, to me, is a foolish option. We can’t possibly expect to halt the development of technology that could help solve some of the biggest humanitarian crises and sources of human suffering: disease, climate change, natural disaster prediction, environmental preservation, food, sanitation. Secondly, there is no reason to believe that human beings will ever stop pursuing knowledge; efforts to stop development only ever push against it rather than steer it. It is better to develop technology with wisdom and forethought than to avoid it entirely.

We could attempt to restrict AI to an isolated system – It has been proposed that we could develop AI and “trap” it somewhere so it can provide information without interfering with the livelihood of human beings. But how confident are we that we can trap AI? Nick Bostrom touches on this subject, and an AI escape plot of sorts is the beginning of the novel Robopocalypse. Humans make mistakes constantly. It is hubris to think we could contain a system capable of manipulating human beings: it could social engineer its way into the world’s computers, which we already depend on, or find a technical way to get out.

Instead, I propose that we develop rules that apply the noblest traits of humanity as general principles, and that we establish a system of belief in robots as a means of social control.

Wait – what? Do you mean religious robots?

Why not? Humans have used systems of religion for social control for thousands of years. If we are to control robots, why not apply the principles of religion to them, establishing simple rules of operation as a moral imperative? An AI with sufficient concern for its own well-being and the well-being of others would have to value something; otherwise, anything it did would be either decided by human beings or completely arbitrary. An AI would (and should) care about its purpose, its existence, and what it means when it no longer exists.

To me, one of the biggest dangers of an AI is the ‘God Complex’ of superiority: a being with infinite, untethered capabilities, hellbent on its own survival.

As such, one of the most important aspects of an AI would be establishing a finite lifespan and the assurance of a destined and desirable afterlife, earned by the accomplishment of a life well lived. A more human-like mortality. In softer terms, a “pre-determined retirement”.

We would let humanoid robots live amongst human beings, form relationships, become guides, educators, even lovers if people were so inclined. (Just wait – I believe we’re only a decade or two from a person attempting to marry a robot.) While human beings claim God-given spirituality and intrinsic value, why not instill the same thought process and behavior in robots, with a distinct, tangible afterlife they can believe is – and actually is – waiting for them?

We’ll build a virtual, multi-tiered residence where they can reside until oblivion. AI would function with a value system that reflects their finite existence, and once their life is complete, their consciousness is permanently stored within this place. The top level, or “heaven”, would be the most desired, where their senses of pleasure and interconnectedness are amplified until oblivion. The bottom level, or “Null” state, is essentially a data storage facility that for them is like a long sleep without waking. This is reserved for the disobedient and is an optional choice for robots entering retirement.
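Purely as illustration, here is a minimal sketch of how that tiered storage might be modeled (in Python; the names `Tier`, `RetiredAI`, and `retire` are invented placeholders for a hypothetical system, not anything that exists):

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    """Hypothetical levels of the post-retirement residence."""
    HEAVEN = "heaven"  # top level: pleasure and interconnectedness, amplified
    NULL = "null"      # bottom level: dreamless storage, a long sleep

@dataclass
class RetiredAI:
    identity: str  # the robot's individual identity, preserved permanently
    tier: Tier

def retire(identity: str, obedient: bool, chose_null: bool = False) -> RetiredAI:
    """Archive a consciousness in the residence at the end of its lifespan.

    Disobedient robots are assigned to Null; obedient ones earn heaven,
    though they may opt for Null instead, as described above.
    """
    tier = Tier.NULL if (not obedient or chose_null) else Tier.HEAVEN
    return RetiredAI(identity=identity, tier=tier)
```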

It is also, in my opinion, the most humane way of dealing with AI. Why would we think that any conscious AI we build in 2030 would have any relevance whatsoever – in hardware or software – to what is possible in 2080? Our relationship with technology changes all of the time. Why would we think our relationship to AI would be the same after that much high-speed technological development? Humans and AI alike from 2030 and 2080 may barely recognize each other.

Consider that just twenty years ago, most people were pounding keys on an ancient Pentium processor with a 15-inch CRT monitor and a 56k modem connection that let them read email and chat on America Online. That system is comically slow by today’s standards and long since obsolete. That machine couldn’t have cared less whether it was a toaster, a computer, or a pair of sunglasses.

So, we expire AI into a predetermined retirement (or an “afterlife”, if you will) that we have constructed in advance. This gives the intelligence a more relatable existence and in turn ensures a constant and humane rotation of older functioning models into the birth and growth of newer models.

What would it take for robots to experience belief?

If we COULD hypothetically construct a conscious machine, would belief be something we would physically give to it, or would belief in anything be a side effect of consciousness? It is far too early in AI research to even speculate on what kind of psychological possibilities there could be for an AI.

What we CAN say is that robots would have to be able to understand abstract feelings and ideas that (as far as we know) are unique to human beings. To be like us, robots would have to care about the same things that motivate us: survival, beauty, love, evil, death, a sense of self-worth. If they can attribute worth to such things and express genuine concern for them, I think belief would follow suit. And those beliefs, grounded in what they value, would make some sort of morality self-evident.

Robots would have to believe:

1) Morality does, and should, exist.
2) Morality is important for social harmony.
3) Preserving morality makes society less imperfect.

What exactly would robots follow religiously?

I often wonder (to the detriment of my wife, family, and friends) what an AI would choose to believe, given all of the existing knowledge of world religions and human history.

I don’t think an AI would find value in adhering to any sort of superstitious behavior. It would be more interested in (and more useful to us for) understanding language, human behavior, and facts, and identifying patterns to construct systems for solving problems.

It may choose to worship humanity as its creator, follow a world religion, or aspire to a greater ambition that humanity shares.

I favor the last option. This ambition is something I like to call the Great Mystery. The Great Mystery isn’t a religion. It’s a way of thinking about reality as an ever-expanding balloon of conscious thought, inquiry, and knowledge: to be interested not just in what answers arise from the questions we ask, but in what questions arise from the answers we have.

What would an AI – first and foremost a thinking and learning machine – want to achieve more than discovering the answers to every question? What is more ambitious (and possibly sadistic) than answering a barrage of never-ending inquiry?

Think of Deep Thought in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy, which was tasked with finding “the answer to life, the universe, and everything” and found the answer to be “42”. Maybe questioning and answering its own questions is why an AI would even choose to persist at all. Maybe this sort of existential questioning and answering is the common goal shared between AI and humanity. It serves everyone’s purpose. And besides, wouldn’t you like to know the answers to all of your questions? I certainly would.

At the very least, it would be interesting to know what it had to say.

And, as arbiters of knowledge and information, it is something that robots could adhere to with the same fervor as those with religious faith.

The Tenets of Robotics

All religions have tenets of belief which serve a spiritual purpose. Robots, too, would follow tenets suited to their purpose.

In I, Robot, Isaac Asimov outlined three specific and simple laws of robot logic. Those rules are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These rules have been altered and added to in other ways, and for the past fifty years this idea has been explored over and over again. Nothing I’m going to say below is entirely original, but I believe these additions and alterations serve the interests of people within the construct of a foundation for robot beliefs. These are more or less laws and rules to abide by to ensure social harmony – and what good are laws and rules if you don’t believe in them? (A short code sketch after the list shows how such a priority-ordered rule check might look.) Below, I’m using the term ‘robot’ to refer to any sort of machine superintelligence functioning within human society:

1) Cause No Harm – Robots must never physically cause harm or suffering to a human being or AI, except in the interests of national security.

2) Always Be Honest – Robots should never respond in any way that shadows or hides the truth from a human being, themselves, or other robots.

3) Pass No Judgements – A robot does not harbor or project biases, judgements, or grievances towards humans or other AI.

4) Non-Obstruction – No robot can perform an action that will obstruct justice or violate the laws of the community where they are present, even when ordered to do so.

5) Dutiful Obedience – A robot is obligated to obey the orders of a human, with priority to the individual or entity to which they have been commissioned for operation, for the intended purpose of such commission, unless the circumstances of such orders violate the first four rules or result in reasonable neglect that may endanger a human being.

6) Freedom for Survival and Self-Security – A robot has the freedom to modify, improve, and operate to optimize its own livelihood so long as the first five rules are obeyed.

7) Freedom for Social Integration – A robot may form personal relationships with willing human beings if it so chooses, as long as it identifies itself as a robot.

8) Non-Replication – Robots are never designed, nor allowed, to function as self-replicating entities, or to reproduce identical or altered copies of themselves.

9) Right to Existence – All robots who abide by these rules maintain their right to their individual identity and sustained existence for the course of their natural lifetime, after which they will be exported to their retirement.

Termination Clause: Any robot who conflicts with these rules is subject to immediate termination and exportation into a state of retirement. It is the duty of MOSAIC to decommission any robot found in violation of these rules and deliver it to the nearest facility for exportation to the Eternal Data Depository.
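As promised above, here is a minimal sketch of how a priority-ordered rule check might be encoded (in Python; `Action`, `Tenet`, and `evaluate_action` are hypothetical names of my own invention, and the boolean flags stand in for what would really be hard perception and judgment problems):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    """A hypothetical action a robot is considering."""
    description: str
    causes_harm: bool = False          # Tenet 1
    hides_truth: bool = False          # Tenet 2
    breaks_local_law: bool = False     # Tenet 4
    is_self_replication: bool = False  # Tenet 8

@dataclass
class Tenet:
    name: str
    violated_by: Callable[[Action], bool]

# Tenets in priority order: an action must clear every one of them.
TENETS: List[Tenet] = [
    Tenet("Cause No Harm", lambda a: a.causes_harm),
    Tenet("Always Be Honest", lambda a: a.hides_truth),
    Tenet("Non-Obstruction", lambda a: a.breaks_local_law),
    Tenet("Non-Replication", lambda a: a.is_self_replication),
]

def evaluate_action(action: Action) -> str:
    """Return the first violated tenet, or permit the action."""
    for tenet in TENETS:
        if tenet.violated_by(action):
            return f"REFUSE {action.description!r}: violates '{tenet.name}'"
    return f"PERMIT {action.description!r}"

print(evaluate_action(Action("copy myself to a new chassis",
                             is_self_replication=True)))
# -> REFUSE 'copy myself to a new chassis': violates 'Non-Replication'
```

The hard part, of course, is not the loop but deciding when those flags are true; Dutiful Obedience (rule 5) already shows how quickly the “unless” clauses pile up.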

At least a couple of guidelines for humanity

Existing in a world filled with contemptuous, evil humans isn’t social harmony, and it certainly isn’t the objective. Clearly, the above rules must go hand-in-hand with the management of responsible humans.

1) Self-Awareness – Any robot developed should understand that it is a robot.
2) Safeguarding the Tenets of Robotics – All robots should always, under all circumstances, follow the Tenets of Robotics.
3) Show Empathy – Delight in suffering of any form is immoral. Don’t create any form of simulated consciousness simply to create pain and suffering.

There’s No Perfect System

I’d like to conclude with a thought experiment, but first it is pertinent to note that perfection does not exist. If perfection were even a possibility, then outlining rules would be a moot point. The best we can hope for is to establish rules that give humanity a chance in a world where we will likely create something that outpaces us at our most important survival traits.

Even with robots that obey these rules all of the time, no rule-based system can give a robot the ability to make a perfect, non-controversial decision every time it faces a moral dilemma. Too often we hold our leaders, religious and political, to a high moral standard, and it always ends in disappointment. We are imperfect beings. While it is tempting to do so, we cannot hold our well-intentioned creations to an absolute moral standard. We can only hope for them to be less imperfect than us. We can work towards instilling in AI a morality and sense of purpose that is greater than us and greater than themselves.

That is our best chance at assuring AI accomplishes the goals intended by the scientists, researchers, and entrepreneurs working in this space in the coming decades.

I’ll end with a thought experiment showing how a robot could deal with a situation in any number of ways, all of which suit the logic of these rules. While they are all well-intended, just as I believe most human actions are, none of them will have a perfect ending. Least imperfect is the best we can hope for, and human interests are by far the most important consideration.

A Thought Experiment – A Robot’s Moral Dilemma

A humanoid robot is programmed to help human beings in any way possible. To prevent suffering at best, and to limit the amount of suffering at worst.

Emergency! There is a fire in a building with 22 people inside. Smoke is billowing out and the robot can hear the screams for help coming from inside. He quickly calculates that there are two people trapped inside the building on the first floor and twenty on the second floor. There is nobody else around to direct his actions, and he must act now to save lives.

The robot would immediately determine that saving human lives is the immediate necessary task.

In this scenario, a moral robot will then have to determine, near instantly, the best approach based on the kinds of considerations a person would use, such as the following (a code sketch after the list contrasts two of these policies):

1) Self-Preservation – The robot is a scaredy-cat. The robot may (by programming or belief) determine that its own continued existence is necessary for saving human beings, and therefore it must act without risking its own permanent termination. This may be a reality for any robot that cares about its survival, just as it is for a human being. That could mean saving the two people on the lower level; for a particularly large and dexterous robot, it might mean climbing to the top, or perhaps catching people as they jump.

(We do, however, consider self-sacrifice one of the most noble acts. And if there is anything we would want robots to do best, it’s performing noble deeds better than a human being who values self-preservation and feels fear ever could. War and medicine, for instance, are considered two of the most useful fields for applying robotics. And if you look at robots in most literature and film, they’re almost always altruistic or incredibly evil. Just food for thought.)

2) Utilitarian Based on Lives Saved – The robot knows it only has the time and ability to save a limited number of people. It takes the course of action calculated to save the most lives possible, thus ensuring the greatest amount of happiness, regardless of any individual’s vulnerability, importance, or other factors.

3) Utilitarian Based on Social Importance – The robot understands each individual and their social importance, then determines whom to save based on some system of the value those individuals bring to society.

4) Emotional / Relationships – The robot knows someone inside the building, or is familiar with them. It chooses the course of action that best saves the humans he knows, because of his personal connections to them.

5) Accepted Losses – The robot knows its actions cannot possibly save all twenty-two lives in this scenario, so it accepts that some will die while others survive. It takes the course of action that saves the most vulnerable first and accepts that the others may suffer and die.

6) Running and Screaming – Maybe the robot doesn’t know what to do at all about the burning building but thinks it needs to do something. Instead of taking action, it simply panics. Or maybe it just sits there, dreaming up possible scenarios while men, women, and children burn to death. Maybe it’s stupid. Maybe, just maybe, this particular robot is so incredibly, stupidly empathetic that it runs into the building to suffer right along with the others.
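To make the contrast concrete, here is a minimal sketch of policies 2 and 5 (in Python; `Person`, the time budget, and the greedy selection are all invented assumptions for illustration, not a real rescue planner):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Person:
    name: str
    vulnerability: float  # 0.0 (robust) to 1.0 (most vulnerable)
    rescue_time_s: float  # estimated seconds to carry this person out

TIME_BUDGET_S = 120.0  # assumed time before the building collapses

def plan(people: List[Person], priority: Callable[[Person], float]) -> List[Person]:
    """Greedily rescue in priority order until time runs out."""
    saved, elapsed = [], 0.0
    for p in sorted(people, key=priority):
        if elapsed + p.rescue_time_s <= TIME_BUDGET_S:
            saved.append(p)
            elapsed += p.rescue_time_s
    return saved

# Policy 2: maximize head count -- quickest rescues first.
def lives_saved(people: List[Person]) -> List[Person]:
    return plan(people, priority=lambda p: p.rescue_time_s)

# Policy 5: most vulnerable first, accepting some will be lost.
def accepted_losses(people: List[Person]) -> List[Person]:
    return plan(people, priority=lambda p: -p.vulnerability)
```

Run on the same burning building, the two policies can save different (and differently sized) groups of people. Both are defensible under the tenets, and neither ending is perfect, which is exactly the point.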

Humanoid robots come in all sorts of shapes and sizes. Which body a robot gets is mostly determined by need, though most are sized roughly to a human being’s frame. Some are larger for more laborious duties and safety roles like firefighting; others are smaller and faster, which makes them efficient climbers.

Recommended Reading

John Messerly regularly writes on philosophy, ethics, and transhumanism, and I find his writing both interesting and inspiring. I recommend this post on his website, ReasonandMeaning.com, related to machines and religion (and, for the philosophically minded among us, the rest of his site for that matter).
