4/15/2020
Written by Debra Levey Larson
This month, NSF approved allocations of supercomputing time on Frontera to 49 science projects for 2020-2021, based on each project's need for very large-scale computing to make science and engineering discoveries, and its ability to efficiently use a supercomputer on the scale of Frontera.
Of the 49 projects selected, two are from researchers in the Department of Aerospace Engineering at the University of Illinois at Urbana-Champaign. Daniel Bodony and Deborah Levin are in the first cohort of Frontera users selected by the Large Resource Allocation Committee—a peer-review committee of computational science experts convened annually to assess the readiness and appropriateness of projects for time on Frontera.
To be considered for an allocation, researchers needed to justify the scientific need for the request, and be able to use at least 250,000 node hours (with 56 cores per node) annually, with a maximum award of 5 million node hours per project.
Blue Waters Associate Professor Daniel Bodony will use his allocation of 5 million node hours on Frontera to study fluid-thermal-structure interactions, one of the principal challenges that inhibit hypersonic vehicle design.
"Our allocated time on Frontera will enable us to understand how hypersonic vehicles interact with the very fast, hot, and turbulent flows they generate," Bodony said. "We are especially interested in predicting and modeling how the vehicle's external surface responds – deforms and heats up – to the high-speed flow, as well as how the surface changes impact the flow itself."
Bodony’s project is entitled “Direct Numerical Simulation of Mach 6 Flow Over a 35 Degree Compression Ramp.”
AE Professor Deborah Levin’s projects, “Formulation of a general collisional-radiative model to study nonequilibrium flows” and “Multi-scale Modeling of Unsteady Shock-Boundary Layer Hypersonic Flow Instabilities,” were also selected to receive supercomputing time on Frontera.
“Modeling three-dimensional complex flows requires petascale computations, arising from the need to simulate a large number of particles to satisfy numerical requirements,” Levin said. “Therefore, to solve such computationally demanding cases in an efficient manner, we have developed an in-house code, Scalable Unstructured Gas-dynamics Adaptive mesh-Refinement, that scales demonstrably well on thousands of cores even for the challenging flow over a double wedge.”
“In this work, we are applying the code to investigate the self-excited spanwise homogeneous perturbations arising by imposing spanwise periodic boundaries,” Levin said. “To simulate such demanding cases, the Cascade Lake configuration on Frontera is particularly attractive because of the larger RAM of 192 GB per node and the availability of large nodes per queue.”