Isaac Asimov famously codified his robot ethics in his 1942 short story “Runaround”. These rules have become known as the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Unfortunately, as computing power and robotics grow ever more sophisticated and dominant in our lives, these rules become clouded by the illogical, immoral, and inconsistent ways humans implement them.
Moreover, a major goal of robotics is to endow machines with independent thought and creativity, which could lead to other problems. Just ask Dave in the 1968 film 2001: A Space Odyssey, itself based on Arthur C. Clarke’s 1948 short story “The Sentinel”.
What is amazing is that these two stories identified the fundamental problems of robot morality and ethics roughly 70 years ago.
A more current example, with driverless vehicles hitting the road “soon”, is discussed in a Wired article examining the legal and moral consequences of giving these robots “programmable morality”. Should a robot be programmed to intentionally drive into a solitary pedestrian, killing him or her, in order to avoid likely killing five drivers in a multicar pile-up that would otherwise occur without any intervention?
Let’s hope our robot future turns out better than the one envisioned in the Pixar movie WALL-E.