Can Computers Be Moral?
Should a driverless vehicle, an iCar by Apple for example, be programmed to make moral decisions? That’s the question Robert Newman sets out to answer in his 2015 article ‘Can Robots Be Ethical?’ in Philosophy Now magazine. Could an iCar decide, if only two options were available, between slamming into a broken-down minivan filled with a family of six, or swerving to avoid them and consequently running over a lone pedestrian?
The article is a response to claims made by some computer scientists that morality can be reduced to coding. This reminds me of the “Three Laws of Robotics” developed by science fiction author Isaac Asimov, who felt that most stories about breakthroughs in artificial intelligence ended either in the demise of humanity or the destruction of the machines. It’s a dead end one way or another. His three laws were meant to be a path toward avoiding such an apocalyptic fate.
If you’re not familiar with the three laws, they are: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
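Stated this way, the laws read like a priority-ordered decision procedure, and it is easy to see why programmers find them tempting. Here is a toy Python sketch (the `Action` fields and the checks are my own illustrative assumptions, not anything Asimov specified) showing how such a filter might be coded, and how thin it is:

```python
from dataclasses import dataclass

# Toy model: an action is summarized by three yes/no flags.
# These fields are illustrative assumptions, not Asimov's text.
@dataclass
class Action:
    harms_human: bool       # would the action injure a human?
    ordered_by_human: bool  # was the action commanded by a human?
    self_destructive: bool  # would the action destroy the robot?

def permitted(a: Action) -> bool:
    """Apply the Three Laws as a strict priority filter."""
    if a.harms_human:        # First Law: never harm a human
        return False
    if a.ordered_by_human:   # Second Law: obey (First Law already cleared)
        return True
    if a.self_destructive:   # Third Law: otherwise, preserve yourself
        return False
    return True

# A commanded, harmless action passes; a harmful one never does.
print(permitted(Action(False, True, False)))  # True
print(permitted(Action(True, True, False)))   # False
```

The trouble, of course, is that no real decision reduces to three booleans: whether an action “harms a human” is precisely the hard moral question the code takes as already answered.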
There are lots of problems with these laws, as many have pointed out, mainly that ethical decisions are far more complex and nuanced than a few basic parameters allow for. Consider an example: what if injuring a human would allow a robot to save the lives of countless other humans? Though it would violate the First Law, would it not make sense for a law-enforcement robot (think RoboCop) to take out a gunman attempting mass murder? But if we allow the robot to violate the First Law, then what standards could be put in place to keep the machine from eliminating any person whose death would somehow benefit civilization?
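The loophole can be made concrete. Suppose we patch the First Law with a simple utilitarian override, letting the robot harm one person whenever doing so saves more. The very same rule that justifies stopping the gunman also licenses eliminating anyone the machine calculates to be a net cost to others. (This sketch, and its numbers, are purely hypothetical illustrations, not anything from Newman’s article.)

```python
def may_harm(lives_saved: int, lives_lost: int) -> bool:
    """A naive utilitarian patch to the First Law:
    harming is permitted whenever the arithmetic favors it."""
    return lives_saved > lives_lost

# Stopping a gunman about to kill twelve: the patch permits it.
print(may_harm(lives_saved=12, lives_lost=1))  # True

# But the identical rule also 'permits' killing any one person
# whose death the machine scores as a net benefit to civilization:
print(may_harm(lives_saved=2, lives_lost=1))   # True
```

Nothing inside the rule distinguishes the two cases; the boundary we want has to come from somewhere outside the arithmetic.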
In a comical piece about Asimov’s three laws, author David Langford suggests that a government is more likely to program the following operating rules into its machinery: “(1.) A robot will not harm authorised Government personnel but will terminate intruders with extreme prejudice. (2.) A robot will obey the orders of authorised personnel except where such orders conflict with the Third Law. (3.) A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is [expletive] expensive.”
From a Christian perspective, morality cannot be reduced to an algorithm. We don’t believe there will ever be an iChristian, for example. That’s because Scripture teaches that humans are uniquely moral in that we are created in the image of God (Genesis 1:27), we are witnesses of his eternal power and divine nature in creation (Romans 1), we have his moral law written on our hearts (Romans 2), and we are recipients of his revelation that instructs us in all issues related to life and godliness (2 Peter 1:3).
Robert Newman’s article ends by concluding that the phrase ‘ethical robots’ is a contradiction in terms. It’s like calling something a round square. While Apple may have introduced us to Siri, I don’t think anyone would trust her to determine whom they should marry, or any number of other significant decisions that involve moral reasoning, for that matter. Morality is central to what it means to be human. And to understand that we look, not to Apple, but to our moral source, to God himself.