Many EU-funded projects are working to advance robotics that can help people overcome societal challenges, such as caring for the elderly or providing disaster relief. An academic who worked on one such project has now argued that author Isaac Asimov’s Laws of Robotics are not the moral guidelines they appear to be and should be updated.

Isaac Asimov is one of the most celebrated science-fiction writers, and arguably his most famous creation is the ‘Three Laws of Robotics’. In a nutshell, these are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Prof Tom Sorell of the University of Warwick, UK, has recently argued that Asimov’s three laws seem a natural response to the idea that robots will one day be commonplace and will need internal programming to prevent them from harming people. However, he argues that although the laws are organised around the moral value of preventing harm to humans, they are not as easy to interpret as they might appear.

Prof Sorell, an expert on robot ethics, worked on the EU-funded ACCOMPANY project, which developed a robotic companion to help elderly people live independent lives.