Teaching robots to learn about the world through touch

Slowly but surely, the Baxter robot is learning. It starts as a series of random grasps: the big red robot pokes and prods clumsily at objects on the table in front of it. The process is excruciatingly slow by human standards, with around 50,000 grasps unfolding over a month of eight-hour days. The robot is learning through tactile feedback and trial and error, or, as the Carnegie Mellon computer science team behind the project puts it, learning about the world around it the way a baby does. In a paper titled “The Curious Robot: Learning Visual Representations via Physical Interactions,” the team demonstrates how an artificial intelligence can be trained to learn about objects by repeatedly interacting with them. “For example,” the CMU researchers write, “babies push objects, poke them, put them in their mouth and throw them to learn representations. Towards this goal, we build one of the first systems on a Baxter platform that pushes, pokes, grasps and observes objects in a tabletop environment.”

Link to article