Trillion-particle simulation on Hopper honored with best paper

Posted June 4, 2013
Credit: Oliver Rubel, Berkeley Lab

An unprecedented trillion-particle simulation, which utilized more than 120,000 processors and generated approximately 350 terabytes of data, pushed the performance capability of the National Energy Research Scientific Computing Center’s (NERSC’s) Cray XE6 “Hopper” supercomputer to its limits.

In addition to shedding new light on a long-standing astrophysics mystery, the successful run allowed a team of computational researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and Cray Inc. to glean valuable insights that will help thousands of scientists worldwide make the most of current petascale systems like Hopper, which perform quadrillions of calculations per second, and of future exascale supercomputers, which will perform quintillions of calculations per second.

The team described their findings in “Trillion Particles, 120,000 cores, and 350 TBs: Lessons Learned From a Hero I/O Run on Hopper,” which won best paper at the 2013 Cray User Group conference in Napa Valley, California.

“When production applications use a significant portion of a supercomputing system, they push its computation, memory, network, and parallel I/O (input/output) subsystems to their limits. Successful execution of these apps requires careful planning and tuning,” says Surendra Byna, a research scientist in Berkeley Lab’s Scientific Data Management Group and the paper’s lead author. “Our goal with this project was to identify parameters that would make apps of this scale successful for a broad base of science users.”

For this particular run, the team simulated more than two trillion particles for nearly 23,000 time steps with VPIC, a large-scale plasma physics application. The simulation used approximately 80 percent of Hopper’s computing resources, 90 percent of the available memory on each node, and 50 percent of the Lustre scratch file system. In total, 10 separate trillion-particle datasets, each ranging from 30 to 42 terabytes in size, were written as HDF5 files to the scratch file system at a sustained rate of approximately 27 gigabytes per second.
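
Writes of this kind go through HDF5’s parallel, MPI-IO-backed interface, with every MPI rank contributing its own slice of a shared dataset. The C sketch below shows that general pattern for a single particle field written collectively to one shared file; the file name, dataset name, and per-rank particle count are hypothetical placeholders rather than values from the paper, and the actual VPIC runs layer further tuning (Lustre striping, aggregation, and transfer settings) on top of this skeleton.

/* Minimal sketch of a collective parallel HDF5 write -- the general
 * pattern behind particle dumps like those described above. Names and
 * sizes are illustrative only. Build with an MPI-enabled HDF5, e.g.:
 *   h5pcc -o write_particles write_particles.c
 */
#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each rank owns a contiguous slab of particles (hypothetical count). */
    const hsize_t local_n  = 1000000;
    const hsize_t global_n = local_n * (hsize_t)nprocs;
    const hsize_t offset   = local_n * (hsize_t)rank;

    float *x = malloc(local_n * sizeof *x);   /* e.g. particle x-positions */
    for (hsize_t i = 0; i < local_n; i++)
        x[i] = (float)(offset + i);

    /* Open one shared file with the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("particles.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* One global 1-D dataset; each rank writes its own hyperslab. */
    hid_t filespace = H5Screate_simple(1, &global_n, NULL);
    hid_t dset = H5Dcreate2(file, "x", H5T_NATIVE_FLOAT, filespace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, NULL,
                        &local_n, NULL);
    hid_t memspace = H5Screate_simple(1, &local_n, NULL);

    /* Collective I/O lets MPI-IO aggregate many ranks into large, aligned
     * writes to the Lustre stripes -- the kind of behavior the paper tunes. */
    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    H5Dwrite(dset, H5T_NATIVE_FLOAT, memspace, filespace, dxpl, x);

    H5Pclose(dxpl);
    H5Sclose(memspace);
    H5Sclose(filespace);
    H5Dclose(dset);
    H5Fclose(file);
    H5Pclose(fapl);
    free(x);
    MPI_Finalize();
    return 0;
}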

Read more at: Phys.org
