A Massive New Library of 3-D Images Could Help Your Robot Butler Get Around Your House

For a robot to be of any real help around the home, it will need to be able to tell the difference between a coffee table and a child's crib, a simple task that most robots can't do today. A huge new data set of 3-D images captured by researchers from Stanford, Princeton, and the Technical University of Munich might help. The data set, known as ScanNet, includes thousands of scenes with millions of annotated objects such as coffee tables, couches, lamps, and TVs.

Computer vision has improved dramatically in the past five years, thanks in part to the release of ImageNet, a much simpler 2-D data set of labeled images generated by another research group at Stanford. ScanNet could give 3-D scene understanding a similar boost. "ImageNet had a critical amount of annotated data, and that sparked the AI revolution," says Matthias Niessner, a professor at the Technical University of Munich and one of the researchers behind the data set.

The hope is that ScanNet will give machines a deeper understanding of the physical world, and that this could have practical applications. "The obvious scenario is a robot in your home," Niessner says. "If you have a robot, it needs to figure out what's going on around it."

Link to article