
Why Asimov's Laws of Robotics should be updated for the 21st century

Many EU-funded projects are working to advance robotics that can help address societal challenges, such as caring for the elderly or providing disaster relief. An academic who worked on one such project now argues that author Isaac Asimov’s Laws of Robotics are not the moral guidelines they appear to be and should be updated.

Isaac Asimov is one of the most celebrated science-fiction writers, and arguably his most famous creation is the ‘Three Laws of Robotics’. In a nutshell, these are: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Prof Tom Sorell of the University of Warwick, UK, has recently argued that Asimov’s three laws seem a natural response to the idea that robots will one day be commonplace and will need internal programming to prevent them from harming people. However, he contends that although the laws are organised around the moral value of preventing harm to humans, they are not as easy to interpret and apply as they might seem. Prof Sorell, an expert on robot ethics, worked on the EU-funded ACCOMPANY project, which developed a robotic companion to help elderly people live independent lives.
