
Deep reinforcement learning makes basketball video games look more realistic

Basketball players need lots of practice before they master the dribble, and it turns out that’s true for computer-animated players as well. By using deep reinforcement learning, players in basketball video games can glean insights from motion capture data to sharpen their dribbling skills.

Researchers at Carnegie Mellon University and DeepMotion Inc., a California company that develops smart avatars, have for the first time developed a physics-based, real-time method for controlling animated characters that can learn dribbling skills from experience. In this case, the system learns from motion capture of the movements performed by people dribbling basketballs.

This trial-and-error learning process is time-consuming, requiring millions of trials, but the results are arm movements that are closely coordinated with physically plausible ball movement. Players learn to dribble between their legs, dribble behind their backs and do crossover moves, as well as how to transition from one skill to another.

“Once the skills are learned, new motions can be simulated much faster than real-time,” said Jessica Hodgins, Carnegie Mellon professor of computer science and robotics.

Hodgins and Libin Liu, chief scientist at DeepMotion, will present the method at SIGGRAPH 2018, the Conference on Computer Graphics and Interactive Techniques, Aug. 12-16, in Vancouver.

“This research opens the door to simulating sports with skilled virtual avatars,” said Liu, the paper’s first author. “The technology can be applied beyond sport simulation to create more interactive characters for gaming, animation, motion analysis, and in the future, robotics.”

Motion capture data already add realism to state-of-the-art video games. But these games also include disconcerting artifacts, Liu noted, such as balls that follow impossible trajectories or that seem to stick to a player’s hand.

A physics-based method has the potential to create more realistic games, but getting the subtle details right is difficult. That’s especially so for dribbling a basketball because player contact with the ball is brief and finger position is critical. Some details, such as the way a ball may continue spinning briefly when it makes light contact with the player’s hands, are tough to reproduce. And once the ball is released, the player has to anticipate when and where the ball will return.

Liu and Hodgins opted to use deep reinforcement learning to enable the model to pick up these important details. Artificial intelligence programs have used this form of deep learning to learn to play a variety of video games, and the AlphaGo program famously employed it to master the board game Go.
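The paper’s actual controller and reward are far more elaborate, but the trial-and-error loop at the heart of deep reinforcement learning can be pictured with a toy policy-gradient sketch. Everything below is a stand-in rather than the authors’ system: a one-dimensional “ball,” a linear policy, and a REINFORCE-style update, chosen only to show how repeated trials gradually reshape a policy.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA, DT = 0.1, 0.02      # exploration noise and simulation step (toy values)

def rollout(weights, steps=50):
    """Toy stand-in task: keep a 1-D 'ball' near height 1.0 by choosing a push
    force each step; this replaces the full articulated character and ball."""
    height, velocity = 1.0, 0.0
    states, actions, rewards = [], [], []
    for _ in range(steps):
        state = np.array([height, velocity, 1.0])          # last entry is a bias
        force = state @ weights + rng.normal(0.0, SIGMA)   # noisy linear policy
        velocity += (force - 9.8) * DT                     # crude physics step
        height += velocity * DT
        states.append(state)
        actions.append(force)
        rewards.append(-abs(height - 1.0))                 # stay near the reference
    return np.array(states), np.array(actions), np.array(rewards)

# REINFORCE: run many trials and nudge the policy toward actions that preceded
# higher returns. The need for huge numbers of such trials is what makes this
# kind of training slow.
weights = np.zeros(3)
for episode in range(2000):
    states, actions, rewards = rollout(weights)
    returns = np.cumsum(rewards[::-1])[::-1]               # reward-to-go
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Gradient of log N(action | state @ weights, SIGMA) with respect to weights.
    grads = ((actions - states @ weights) / SIGMA**2)[:, None] * states
    step = (advantages[:, None] * grads).mean(axis=0)
    weights += 1e-3 * step / (np.linalg.norm(step) + 1e-8)  # normalized step for stability
```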

The motion capture data used as input showed people rotating the ball around the waist, dribbling while running, and dribbling in place, both with the right hand and while switching hands. This capture data did not include the ball movement, which Liu explained is difficult to record accurately. Instead, they used trajectory optimization to calculate the ball’s most likely paths for a given hand motion.
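As a rough illustration of what such a trajectory optimization might look like, the sketch below fits a short ball path between two hypothetical hand contacts by penalizing departures from free-flight physics and from the contact positions. The contact positions, frame counts, and weights are invented, SciPy is assumed to be available, and the floor bounce of a real dribble is ignored; the article does not show the authors’ actual formulation.

```python
import numpy as np
from scipy.optimize import minimize   # assumes SciPy is available

dt = 1.0 / 60.0
gravity = np.array([0.0, -9.8])

# Hypothetical hand positions (x, y in meters) taken from motion capture at a
# release and at the next catch, 30 frames apart.
release_pos = np.array([0.0, 1.0])
catch_pos = np.array([0.3, 1.0])
n_frames = 30

def cost(flat_traj):
    traj = flat_traj.reshape(n_frames, 2)
    # Physics residual: between contacts the ball should be in free flight,
    # so its discrete second difference should equal gravity * dt^2.
    accel = traj[2:] - 2.0 * traj[1:-1] + traj[:-2]
    physics_err = np.sum((accel - gravity * dt**2) ** 2)
    # Contact residual: the trajectory must start and end in the hand.
    contact_err = (np.sum((traj[0] - release_pos) ** 2)
                   + np.sum((traj[-1] - catch_pos) ** 2))
    return physics_err + 100.0 * contact_err

# Start from a straight line between the contacts and let the optimizer bend
# it into a physically plausible arc.
init = np.linspace(release_pos, catch_pos, n_frames).ravel()
ball_traj = minimize(cost, init, method="L-BFGS-B").x.reshape(n_frames, 2)
```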

The program learned the skills in two stages — first it mastered locomotion and then learned how to control the arms and hands and, through them, the motion of the ball. This decoupled approach is sufficient for actions such as dribbling or perhaps juggling, where the interaction between the character and the object doesn’t affect the character’s balance. Further work is required to address sports, such as soccer, where balance is tightly coupled with game maneuvers, Liu said.
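A schematic of that decoupling, using invented names and one-dimensional stand-ins for the body and ball, might look like the following: the locomotion stage never sees the ball, while the arm stage is layered on top of the frozen locomotion output. This is only an architectural sketch, not the paper’s implementation.

```python
import numpy as np

# Toy 1-D stand-ins: the "body" drifts forward as captured, and the "ball"
# reference bounces sinusoidally. Scalars replace full joint and ball states.
body_frames = np.linspace(0.0, 1.0, 60)                    # stage-1 locomotion output
ball_reference = 1.0 + 0.2 * np.sin(np.linspace(0.0, 2 * np.pi, 60))

def locomotion_policy(frame):
    """Stage 1 (hypothetical): trained first and then frozen; it ignores the
    ball entirely and only reproduces the captured body motion."""
    return body_frames[frame % len(body_frames)]

def arm_policy(body, ball_target):
    """Stage 2 (hypothetical): trained afterward on top of the frozen
    locomotion stage; it sees both the body pose and the ball, but only
    adjusts the arm."""
    return ball_target - body                               # reach offset toward the ball

# Per-frame pipeline: locomotion is computed first (ball-independent),
# then the arm controller reacts to the ball.
for frame in range(60):
    body = locomotion_policy(frame)
    arm_offset = arm_policy(body, ball_reference[frame])
```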
