Applying Machine Learning to the Universe's Mysteries

Computers can beat chess champions, simulate star explosions, and forecast global climate. We are even teaching them to be expert problem-solvers and fast learners.

And now, physicists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and their collaborators have demonstrated that computers are ready to tackle the universe’s greatest mysteries. The team fed thousands of images from simulated high-energy particle collisions to train computer networks to identify important features.

The researchers programmed powerful arrays known as neural networks to serve as a sort of hivelike digital brain in analyzing and interpreting the images of the simulated particle debris left over from the collisions. During this test run the researchers found that the neural networks had up to a 95 percent success rate in recognizing important features in a sampling of about 18,000 images.

The study was published Jan. 15 in the journal Nature Communications.

The next step will be to apply the same machine learning process to actual experimental data.

Powerful machine learning algorithms allow these networks to improve in their analysis as they process more images. The underlying technology is used in facial recognition and other types of image-based object recognition applications.

The images used in this study – relevant to particle-collider nuclear physics experiments at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider and CERN’s Large Hadron Collider – recreate the conditions of a subatomic particle “soup”: a superhot fluid state known as the quark-gluon plasma, believed to have existed just millionths of a second after the birth of the universe. Berkeley Lab physicists participate in experiments at both of these sites.

“We are trying to learn about the most important properties of the quark-gluon plasma,” said Xin-Nian Wang, a nuclear physicist in the Nuclear Science Division at Berkeley Lab who is a member of the team. Some of these properties are so short-lived and occur at such tiny scales that they remain shrouded in mystery.

In experiments, nuclear physicists use particle colliders to smash together heavy nuclei, like gold or lead atoms that are stripped of electrons. These collisions are believed to liberate particles inside the atoms’ nuclei, forming a fleeting, subatomic-scale fireball that breaks down even protons and neutrons into a free-floating form of their typically bound-up building blocks: quarks and gluons.

Researchers hope that by learning the precise conditions under which this quark-gluon plasma forms, such as how much energy is packed in, and its temperature and pressure as it transitions into a fluid state, they will gain new insights about its component particles of matter and their properties, and about the universe’s formative stages.

But exacting measurements of these properties – the so-called “equation of state” involved as matter changes from one phase to another in these collisions – have proven challenging. The initial conditions in the experiments can influence the outcome, so it is difficult to extract equation-of-state measurements that are independent of these conditions.

“In the nuclear physics community, the holy grail is to see phase transitions in these high-energy interactions, and then determine the equation of state from the experimental data,” Wang said. “This is the most important property of the quark-gluon plasma we have yet to learn from experiments.”

Researchers also seek insight about the fundamental forces that govern the interactions between quarks and gluons, what physicists refer to as quantum chromodynamics.

Long-Gang Pang, the lead author of the latest study and a Berkeley Lab-affiliated postdoctoral researcher at UC Berkeley, said that in 2016, while he was a postdoctoral fellow at the Frankfurt Institute for Advanced Studies, he became interested in the potential for artificial intelligence (AI) to help solve challenging science problems.

He saw that one form of AI, known as a deep convolutional neural network – with architecture inspired by the image-handling processes in animal brains – appeared to be a good fit for analyzing science-related images.
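The building block of such a network is the convolution: a small grid of weights slid across an image, producing a strong response wherever a matching pattern appears. As an illustrative sketch only (not the study's actual network), the example below hand-crafts a vertical-streak detector in NumPy; in a trained deep network, many such kernels are learned automatically from labeled images.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image
    and record its response at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy 8x8 "image": a bright vertical streak on a dark background,
# standing in for a localized feature in simulated collision debris.
image = np.zeros((8, 8))
image[:, 4] = 1.0

# A hand-crafted vertical-edge kernel. In a convolutional network
# these weights would be learned during training, not written by hand.
kernel = np.array([[-1.0, 2.0, -1.0],
                   [-1.0, 2.0, -1.0],
                   [-1.0, 2.0, -1.0]])

response = conv2d(image, kernel)
# The response map peaks where the kernel is centered on the streak.
peak_col = int(np.argmax(response[0]))
```

Stacking many learned kernels in successive layers lets the network build up from simple edges to the abstract, physics-relevant patterns the researchers describe.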

“These networks can recognize patterns and evaluate board positions and selected movements in the game of Go,” Pang said. “We thought, ‘If we have some visual scientific data, maybe we can get an abstract concept or valuable physical information from this.’”

Wang added, “With this type of machine learning, we are trying to identify a certain pattern or correlation of patterns that is a unique signature of the equation of state.” So after training, the network can pinpoint on its own the portions of and correlations in an image, if any exist, that are most relevant to the problem scientists are trying to solve.
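The idea that a trained model localizes the telltale pixels on its own can be shown with a deliberately simplified stand-in: a logistic-regression classifier (far simpler than the deep network in the study) trained on toy images where one class carries a bright patch. The image generator, labels, and patch location below are all invented for illustration; after training, the largest learned weights land on exactly the "signature" pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(cls):
    """Toy 6x6 'collision image' flattened to 36 pixels. Class 1
    carries a bright 2x2 patch (the 'signature'); class 0 is noise."""
    img = rng.normal(0.0, 0.3, (6, 6))
    if cls == 1:
        img[2:4, 2:4] += 1.0  # signature pixels: indices 14, 15, 20, 21
    return img.ravel()

labels = np.array([0] * 100 + [1] * 100)
X = np.array([make_image(c) for c in labels])

# Logistic regression trained by plain gradient descent.
w = np.zeros(36)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= X.T @ (p - labels) / len(labels)    # gradient step on weights
    b -= np.mean(p - labels)                 # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(pred == labels))
```

Inspecting `w` after training shows the classifier has concentrated its attention on the patch pixels without being told where they were, which is the same principle, at toy scale, as a network isolating an equation-of-state signature.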

Accumulation of data needed for the analysis can be very computationally intensive, Pang said.