For a robot to be of any real help around the home, it will need to be able to tell the difference between a coffee table and a child's crib, a simple task that most robots today cannot manage. A huge new data set of 3-D images captured by researchers from Stanford, Princeton, and the Technical University of Munich might help. The data set, known as ScanNet, includes thousands of scenes containing millions of annotated objects such as coffee tables, couches, lamps, and TVs.

Computer vision has improved dramatically over the past five years, thanks in part to the release of ImageNet, a much simpler data set of labeled 2-D images generated by another research group at Stanford. ScanNet aims to provide a comparable resource for 3-D scene understanding. "ImageNet had a critical amount of annotated data, and that sparked the AI revolution," says Matthias Nießner, a professor at the Technical University of Munich and one of the researchers behind the data set.

The hope is that ScanNet will give machines a deeper understanding of the physical world, with practical applications to follow. "The obvious scenario is a robot in your home," Nießner says. "If you have a robot, it needs to figure out what's going on around it."