The research hardware in your video-game system

Motion sensors don’t just drive gameplay. With the right software, they can scan dinosaur skulls, monitor glaciers and help robots to see.

A man with a black rectangular bar strapped to his chest walks a careful circuit around the skull of a Tyrannosaurus rex. It’s not performance art. The black rectangle is a motion sensor called Kinect, and its wearer is using it at the Field Museum in Chicago, Illinois, to digitally capture the precise 3D shape of the dinosaur’s skull.

That’s a far cry from its developer’s intended application. Microsoft designed it for use in video games, enabling Xbox users to control their characters using movements and gestures rather than a handheld controller. But from the moment it was released, scientists and clinicians have been adapting the device, and other sensors including the Nintendo Wii Remote, PlayStation EyeToy and Leap Motion, to aid research in areas from robotics to glaciology to health care. They were quick to realize that the data the devices gather can be used for studies that involve measuring body movements, manipulating 3D objects or observing or building models of 3D spaces.

The sensors come with a number of perks for scientists: they are affordable (most cost US$80–100), portable and compatible with free and easy-to-learn software. That makes them a nimble choice for many projects.

But they do have significant limitations. Their specifications, such as resolution, pale in comparison with industrial hardware, and the systems work better in living rooms than in the field. And their usefulness depends heavily on the type of research being performed.

Dino dentistry

Denise Murmann’s experience with Kinect as a research tool began in 2016, when she visited the Field Museum with her family. While scrutinizing SUE, one of the world’s most complete T. rex skeletons, her nephew noticed an exhibit explaining that the dinosaur’s skull was riddled with tiny holes of unknown origin. Were they bite marks? The vestiges of an infection? Murmann thought it would be fun to examine the skull the way she investigates forensic bite-mark cases in her work as a forensic dentist.

But her usual tools just weren’t up to the job. SUE’s skull is about 1.5 metres long and weighs 272 kilograms — far too large for highly accurate 3D dentistry scanners. So Murmann turned to the Camera Culture group at the Massachusetts Institute of Technology’s Media Lab in Cambridge, where imaging researcher Anshuman Das suggested using a Kinect connected to a laptop. The resolution would be about ten times lower than that of an industrial scanner, Das says, but the Kinect could handle the specimen’s dimensions.

So Das strapped the Kinect to his chest and walked slowly around the skull. The 3D scan revealed that not all the holes entered the skull at the same angle, so they probably weren’t from a single bite. But they also tapered inwards, suggesting they were not the result of infection. The team published its findings in July (A. J. Das et al. PLoS ONE 12, e0179264; 2017). Murmann’s project was not the first time that SUE’s skull had been scanned, but the previous effort involved 500 hours in a computed-tomography scanner normally used to inspect space-shuttle components. The Kinect scan took a matter of minutes in the museum itself.

Glaciers, gaits and robots

Palaeontology is not the only field to benefit from game controllers. Ken Mankoff, a glaciologist with the Geological Survey of Denmark and Greenland, has used the Kinect to model glacier beds and the meltwater channels underneath them at 1-millimetre resolution. Such data can help glaciologists better understand how glacial melt influences sea levels. Usually, the data are collected using a LiDAR (light detection and ranging) system, Mankoff says, which can cost upwards of $10,000.

Off-the-shelf video-game motion sensors also make convenient vision systems for robots. Robotics researchers Ashutosh Saxena of Stanford University in California and Chenxia Wu, then at Cornell University in Ithaca, New York, turned to the Kinect to design a robot that could learn a task just from ‘watching’ people. Their WatchBot comprises a computer and a laser pointer with a Kinect mounted on a tripod as its ‘eyes’. WatchBot was able to learn what steps constituted a task, such as fetching food from an oven, well enough to identify a missed step 60% of the time — sufficiently accurate to give it potential applications in manufacturing and safety monitoring.

Other video-game sensors have proved useful in research as well. The controller made by Leap Motion in San Francisco, California, is designed to track fine hand and finger movements, and virtual-reality headsets such as the Daydream (by Google in Mountain View, California; about $80) and Rift (by Oculus VR in Menlo Park, California; $400–500) provide more immersive experiences. Hydrologist Willem Luxemburg at Delft University of Technology in the Netherlands used the Wii Remote to measure reservoir evaporation rates to better than millimetre accuracy. (The Wii is no longer in production, but used systems are available online, as is the case for the Kinect, which Microsoft stopped manufacturing in October. Microsoft’s newer HoloLens, augmented-reality glasses that are in limited production as their development continues, uses the same core sensor that powered Kinect.)

Video-game sensors are also increasingly used in health care. Marjorie Skubic, an engineer at the University of Missouri in Columbia, began using the Kinect as soon as it was released in 2010 as a way to monitor seniors’ gait and predict their risk of falling. “It was right before Christmas,” she recalls. “We went around town and bought them all up. I’m afraid we might have broken some kids’ hearts.” The Kinect was a major improvement on her team’s previous monitoring system: a webcam and a large desktop computer, she says. The computer hogged space and generated so much heat that it required noisy fans, which felt intrusive. The Kinect eliminated both these issues, requiring a much smaller computer while accurately capturing seniors’ silhouettes as they moved.

Kinect the dots

To capture objects in 3D, the Kinect takes a digital image just as an ordinary digital camera does, but also measures depth using infrared light. It then combines these two data sets to create a ‘depth image’, in which each pixel of the image is mapped relative to its distance from the sensor. From there, the system can create a 3D model or reconstruct a skeletal representation.
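
To make that mapping concrete, here is a minimal sketch (not from the article) of how a depth image is back-projected into a point cloud using the standard pinhole camera model. The intrinsics below are rough, assumed values for the first-generation Kinect’s 640 × 480 depth camera; a real pipeline would use calibrated ones:

```python
import numpy as np

# Rough intrinsics for the original Kinect's 640x480 depth camera.
# Ballpark figures for illustration; real values come from calibration.
FX, FY = 580.0, 580.0   # focal lengths, in pixels
CX, CY = 320.0, 240.0   # principal point, in pixels

def depth_to_point_cloud(depth_mm):
    """Back-project a depth image (in millimetres) into an N x 3 array
    of (x, y, z) points in metres, using the pinhole camera model."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0         # mm -> m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop zero-depth pixels

# Example: a synthetic 'wall' two metres from the sensor.
cloud = depth_to_point_cloud(np.full((480, 640), 2000, dtype=np.uint16))
print(cloud.shape)   # (307200, 3)
```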

Little expertise or equipment is required to exploit those data. All that’s needed is an adapter (available online for about $50) that links the Kinect to a laptop, plus a good graphics processing unit to handle the Kinect’s real-time 3D constructions, Das says. “Some of these gaming laptops are perfect.”

For those interested in playing with the platform, a large hacker community is ready to help. Microsoft also makes a software development kit that can be used to build custom applications that use Kinect data, and 3D Scan, a software package for object scanning, can be downloaded from the Microsoft app store. Skubic’s team started using the Kinect before either of these were available, so the researchers used an open-source programming library called libfreenect from the OpenKinect project.
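
As a taste of how little code is involved, the sketch below grabs a single depth frame through libfreenect’s Python bindings. It assumes the bindings are installed and a first-generation Kinect is plugged in; the metric conversion is a widely circulated OpenKinect community approximation, not an exact calibration:

```python
# Minimal sketch: read one depth frame via the Python bindings for
# libfreenect (the OpenKinect library mentioned above).
import freenect
import numpy as np

depth, _timestamp = freenect.sync_get_depth()  # 480x640 array of raw depth values
print(depth.shape)

# The raw values are not millimetres; one common community
# approximation converts them to metres:
depth_m = 0.1236 * np.tan(depth / 2842.5 + 1.1863)
```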

Tiffany Tang, a researcher at Wenzhou-Kean University in China, developed a Kinect-based system to help people to read the emotions of children with autism. She has found the software — in her team’s case, Microsoft’s Kinect software development kit and Visual Studio — easy to get to grips with. “My student just learned this on his own in a week,” she says.

That ease of adoption can come in handy, because researchers may need to change platforms to keep up with developments in the fast-paced gaming industry. At Ulster University near Belfast, UK, rehabilitation researcher Suzanne McDonough and computer scientist Darryl Charles pair video-game sensors with custom software to monitor patients’ physical-therapy exercises at home and assign new ones as they progress. Over the years, McDonough and Charles have migrated from the EyeToy and Wii to webcams built for virtual-reality games, then through two versions of the Kinect to track arm and hand movements, and finally to virtual-reality headsets from Oculus and Google to provide a more immersive experience. They also use the Leap Motion sensor. “It’s very good at being able to recognize gestures and natural movements of the hand,” says Charles.

These tools do have substantial limitations, however. One issue with the Kinect is distance: because it was designed for living rooms, it can measure only a few metres from the sensor, Mankoff says. New algorithms, including Kintinuous and ElasticFusion, allow researchers to ‘stitch’ data together and overcome that limitation, but other hurdles remain, especially when it comes to fieldwork. “Anything wet is a problem. Direct sunlight is a problem,” Mankoff says. “Fortunately my work is in caves, but if it weren’t I would have to work at night or on very cloudy days.” Other issues include battery life and difficulty tracking people with unusual postures or loose clothing.
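
Kintinuous and ElasticFusion are full reconstruction pipelines, but the heart of ‘stitching’ is registration: estimating the rigid transform that aligns two overlapping scans. Below is a minimal sketch of that step with point-to-point ICP, using the open-source Open3D library (chosen here for illustration; it is not named in the article, and the file names are hypothetical):

```python
import numpy as np
import open3d as o3d

# Load two overlapping scan fragments (hypothetical file names).
source = o3d.io.read_point_cloud("fragment_a.ply")
target = o3d.io.read_point_cloud("fragment_b.ply")

# Point-to-point ICP: iteratively match nearest neighbours within
# 2 cm and solve for the rigid transform that best aligns them.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.02,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

source.transform(result.transformation)  # bring source into target's frame
print(result.fitness, result.inlier_rmse)
```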

And yet, scientists continue to find creative uses for the sensors. Since Das published the T. rex results, he has received multiple requests from the museum and palaeontology communities to use or adapt his scanner to analyse other fossils, art and artefacts. The tool is so simple that he has used it for a face-scanning exercise at a primary school in New Hampshire, where he volunteers. “You’re not going to be matching an industrial scanner, but since it’s so cheap and it’s easy to share data, it will encourage collaboration,” Das says.
