
A New Look at One of the Most Abundant Particles in the Universe

Posted June 4, 2019

Deep learning helps researchers understand the elusive neutrino.

Neutrinos constantly bombard the surface of the Earth. They are among the most abundant particles in the universe; an estimated 400 trillion zip through your body every second. Particle accelerators also fire neutrinos hundreds of kilometers through the Earth at distant detector targets. Yet despite this abundance, neutrinos are extremely difficult to detect.

On Sept. 22, 2017, the IceCube Neutrino Observatory at the South Pole, represented in this illustration by strings of sensors under the ice, detected a high-energy neutrino that appeared to come from deep space. NASA's Fermi Gamma-ray Space Telescope (center left) pinpointed the source as a supermassive black hole in a galaxy about 4 billion light-years away. It is the first high-energy neutrino source identified from outside our galaxy. Credits: NASA/Fermi and Aurore Simonnet, Sonoma State University

Researchers at PNNL are applying deep learning techniques to learn more about neutrinos, part of a worldwide network of researchers trying to understand one of the universe’s most elusive particles.

Their expertise is aimed at the glut of data generated by massive liquid argon time projection chambers, known as LArTPCs. Scientists use these detectors to study neutrinos and their role in our cosmos, but with tens of thousands of channels and very high data rates, these high-fidelity detectors generate amounts of data that can overwhelm the scientists who must sift through them.

PNNL researchers recently used Oak Ridge National Laboratory's leadership-class supercomputer, known as Summit, to tap the power of deep learning to delve into the data generated by these neutrino physics experiments.

Alex Hagen discussed his team's research at the recent International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2019) in Switzerland. It is among the first applications, perhaps the very first, of deep convolutional neural networks to particle physics research on this class of hardware and software.

The core of deep learning is training computer networks to learn patterns from data, enabling the network to make decisions about future data. But in neutrino physics, the outpouring of data makes training times very long; it is extremely difficult to train a network quickly enough to make sense of it all. One solution is to crop the data, limiting the amount flowing into the network for analysis, but that approach can discard critical information.
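The trade-off between cropping and keeping everything can be illustrated with a toy example. The sketch below (hypothetical numbers, not actual LArTPC data) shows why a sparse representation, the idea behind tools like SparseConvNet, preserves every hit in a mostly empty detector image, while a fixed crop can lose hits outright:

```python
import numpy as np

# Toy stand-in for a LArTPC event image: a large, mostly empty grid
# with a handful of charge depositions ("hits") scattered across it.
rng = np.random.default_rng(0)
image = np.zeros((1000, 1000), dtype=np.float32)
hits = rng.integers(0, 1000, size=(50, 2))   # 50 random hit positions
image[hits[:, 0], hits[:, 1]] = 1.0

# Cropping: keep only a fixed central window; any hit outside is lost.
crop = image[400:600, 400:600]

# Sparse representation: keep every nonzero pixel as (row, col, value),
# so no information is discarded, and storage scales with hit count.
rows, cols = np.nonzero(image)
sparse_coords = np.stack([rows, cols, image[rows, cols]], axis=1)

print("hits in full image: ", int(image.sum()))
print("hits surviving crop:", int(crop.sum()))
print("hits in sparse form:", sparse_coords.shape[0])
```

The sparse form keeps exactly as many entries as there are hits, whereas the crop typically keeps far fewer; that is the information loss the PNNL approach avoids.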

The PNNL team – which also includes Eric Church, Jan Strube, Kolahal Bhattacharya, and Vinay Amatya – tackled a mountain of simulated data from the MicroBooNE experiment at Fermilab. The scientists addressed the data overload by training convolutional neural networks on PNNL's research computing cluster, known as Marianas, and scaling the problem up to multiple nodes. Using tools such as PyTorch, Horovod, and SparseConvNet, developed mainly by Facebook and Uber, the scientists cut the training loss by more than 80 percent when they scaled the system from one to 14 NVIDIA P100 GPUs.
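The core idea behind Horovod-style data parallelism is simple: each GPU computes gradients on its own shard of the batch, then an allreduce averages the gradients so every worker takes the same optimization step. A real setup would use Horovod's DistributedOptimizer with PyTorch; the minimal sketch below simulates the workers in one process with a linear model, purely to show the averaging step:

```python
import numpy as np

def local_gradient(w, x_shard, y_shard):
    # Gradient of mean squared error for the linear model pred = w * x.
    pred = w * x_shard
    return np.mean(2.0 * (pred - y_shard) * x_shard)

# Hypothetical dataset: the target relationship is y = 3 * x.
x = np.linspace(-1.0, 1.0, 64)
y = 3.0 * x

# Shard the batch across 4 simulated "workers" (GPUs).
shards_x = np.array_split(x, 4)
shards_y = np.array_split(y, 4)

w = 0.0
for _ in range(200):
    # Each worker computes a gradient on its own shard...
    grads = [local_gradient(w, xs, ys) for xs, ys in zip(shards_x, shards_y)]
    # ...then the allreduce averages them, so all workers step identically.
    g = np.mean(grads)
    w -= 0.1 * g

print("learned slope:", round(w, 3))
```

Because the averaged gradient equals the gradient over the full batch, scaling out workers changes where the data lives, not what the model learns, which is what makes the near-linear speedups described below possible.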

The team then tested its approach on more than 128 NVIDIA P100 GPUs on the SummitDev computer at Oak Ridge, where it achieved further significant reductions in training time and loss, and applied SparseConvNet to speed up training still more. Training time fell almost linearly with GPU count, and the lowest losses reached with many GPUs were sometimes lower than those achievable with only a few, because the larger effective batch size allows more effective learning.
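The scaling behavior described above follows from simple arithmetic. With ideal near-linear data parallelism, epoch time drops as 1/N while the effective batch size grows as N; the numbers below are illustrative assumptions, not measurements from the team's runs:

```python
# Illustrative scaling arithmetic (assumed numbers, not reported results).
per_gpu_batch = 32    # hypothetical per-GPU minibatch size
t1 = 128.0            # hypothetical single-GPU epoch time, in minutes

for n_gpus in (1, 14, 128):
    epoch_time = t1 / n_gpus           # ideal near-linear speedup
    eff_batch = per_gpu_batch * n_gpus  # effective batch grows with N
    print(f"{n_gpus:4d} GPUs: {epoch_time:7.2f} min/epoch, "
          f"effective batch {eff_batch}")
```

In practice communication overhead keeps real speedups below this ideal, but the growing effective batch size is what can make the final loss with many GPUs lower than with a few, as the article notes.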

Source: PNNL
