Back in May 2019, Amin Totounferoush from the Institute for Parallel and Distributed Systems, together with Neda Ebrahimi Pour, Juri Schröder, Sabine Roller, and Miriam Mehl, received the Best Paper Award for their paper “A New Load Balancing Approach for Coupled Multi-Physics Simulations”.
The authors were awarded the prize at the 20th International Workshop on Parallel and Distributed Scientific and Engineering Computing (PDSEC 2019), held in Rio de Janeiro, Brazil, in conjunction with the 33rd IEEE International Parallel and Distributed Processing Symposium (IPDPS 2019).
The simulation of multi-physics and multi-scale problems requires highly scalable approaches to be efficient on today's supercomputers; the need for such approaches to solve large problems in feasible time must therefore be addressed. In this paper, we show how such complex simulations can be run more efficiently by decomposing the simulation domain according to the occurring physics, using a small test case, a coupled fluid-acoustics simulation. This decomposition allows each subdomain to be treated differently and hence configured in the way best suited to it. We demonstrate the scalability of our approach with measurements on the SuperMUC supercomputer at the LRZ supercomputing center. One important issue that must be addressed in coupled multi-physics simulations is the load imbalance between solvers. We present a new method to efficiently distribute the total number of requested cores among the solvers so as to reduce, or ideally eliminate, the time one solver has to wait for the other to finish its computation. To demonstrate the effectiveness of the proposed method, we use a simple Gauss-pulse cube example. The domain is decomposed into an inner and an outer subdomain. In the inner domain, we use a discontinuous Galerkin solver to solve the Euler equations, while in the outer domain a linearized set of Euler equations is solved. For communication and data exchange between the subdomains, we use the preCICE coupling library. Numerical results show that the presented framework scales up to (at least) 560 cores. In addition, the proposed load balancing method almost completely removes the load imbalance between the coupling partners. The effect of this method is significant: in most of the cases we simulated, the run time is reduced by more than 40 percent compared to the old load-balancing scheme.
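The core idea of distributing a fixed total number of cores between coupled solvers so that neither has to wait for the other can be sketched roughly as follows. This is an illustrative simplification, not the paper's actual algorithm: the function name `balance_cores` and the assumption of ideal strong scaling (work per solver equals measured time times current core count) are hypothetical choices made here for clarity.

```python
def balance_cores(total_cores, times, cores):
    """Distribute total_cores among coupled solvers so their predicted
    compute times match.

    times[i]: measured compute time of solver i on its current cores[i] cores.
    Hypothetically assumes ideal strong scaling, so the total work of
    solver i is estimated as times[i] * cores[i].
    """
    # Estimated work per solver under the ideal-scaling assumption.
    work = [t * p for t, p in zip(times, cores)]
    total_work = sum(work)

    # Allocate cores proportionally to work, at least one core per solver,
    # so that work[i] / alloc[i] (the predicted time) is roughly equal.
    alloc = [max(1, round(total_cores * w / total_work)) for w in work]

    # Compensate rounding drift by adjusting the largest allocation.
    drift = total_cores - sum(alloc)
    alloc[alloc.index(max(alloc))] += drift
    return alloc


# Example: solver A needs 80 s on 10 cores, solver B 20 s on 10 cores.
# With 20 cores total, A gets 16 and B gets 4, so both are predicted
# to finish in about 50 s instead of A blocking B for 60 s.
print(balance_cores(20, [80.0, 20.0], [10, 10]))
```

Equalizing predicted per-solver times is exactly what removes the waiting time at each coupling exchange: if both partners finish their time step at the same moment, neither idles at the data-exchange synchronization point.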