ML Session: Neural networks with physical constraints, domain decomposition-based training strategies, and model order reduction

January 11, 2023, 2:00 p.m. (CET)

We are happy to announce the next presentation in the ML-Session series:

Prof. Dr. Alexander Heinlein (TU Delft) will give an in-person lecture (no Webex) on Wednesday, January 11, 2023, at 2:00 p.m. in PWR 57, room 8.122, titled "Neural networks with physical constraints, domain decomposition-based training strategies, and model order reduction".

Abstract: Scientific machine learning (SciML) is a rapidly evolving field of research that combines techniques from scientific computing and machine learning. One major branch of SciML is the approximation of the solutions of partial differential equations (PDEs) using machine learning models and, in particular, neural networks. The network models can be trained in a data-driven or physics-informed way, that is, using reference data (from simulations or measurements) or a loss function based on the PDE, respectively. In this talk, two approaches for approximating the solutions of PDEs using neural networks are discussed: physics-informed neural networks (PINNs) and surrogate models based on convolutional neural networks (CNNs).

In PINNs, simple feedforward neural networks are employed to discretize the PDEs, and a single network is trained to approximate the solution of one specific boundary value problem. The loss function may include a combination of reference data and the residual of the PDE. Challenging applications, such as multiscale problems, require neural networks with high capacity, and the training of the models is often not robust and may require large iteration counts. Therefore, domain decomposition-based training strategies that improve the training performance via the finite basis physics-informed neural network (FBPINN) approach will be discussed.
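The combination of a data term and a PDE-residual term described above can be sketched as follows. This is a minimal illustration, not material from the talk: the toy boundary value problem u''(x) = -sin(x) on [0, π] with homogeneous boundary conditions, the function and weight names, and the use of finite differences in place of automatic differentiation are all assumptions made here for brevity.

```python
import numpy as np

def pinn_style_loss(u_vals, x, u_data=None, data_idx=None, w_pde=1.0, w_data=1.0):
    """Physics-informed loss in the spirit of the abstract: a weighted sum of
    the PDE residual at collocation points and an optional reference-data term.
    Toy PDE (an assumption for this sketch): u''(x) = -sin(x), whose exact
    solution with u(0) = u(pi) = 0 is u(x) = sin(x). The second derivative is
    approximated by central finite differences instead of autodiff."""
    h = x[1] - x[0]
    u_xx = (u_vals[2:] - 2.0 * u_vals[1:-1] + u_vals[:-2]) / h**2
    residual = u_xx - (-np.sin(x[1:-1]))       # PDE residual at interior points
    loss = w_pde * np.mean(residual**2)
    if u_data is not None:                     # supervised (data-driven) term
        loss += w_data * np.mean((u_vals[data_idx] - u_data)**2)
    return loss

x = np.linspace(0.0, np.pi, 101)
exact = np.sin(x)                              # exact solution of the toy problem
print(pinn_style_loss(exact, x))               # near zero: the PDE is satisfied
print(pinn_style_loss(np.zeros_like(x), x))    # larger: the zero function is not a solution
```

In an actual PINN, `u_vals` would be the output of a neural network evaluated at the collocation points, and the loss would be minimized over the network parameters; the structure of the loss, however, is exactly this weighted sum of residual and data terms.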

In the second part of the talk, surrogate models for computational fluid dynamics (CFD) simulations based on CNNs are discussed. In particular, the network is trained to approximate a solution operator, taking a representation of the geometry as input and the solution field(s) as output. In contrast to the classical PINN approach, and similar to other operator learning approaches, a single network is therefore trained to approximate a variety of boundary value problems. This makes the surrogate modeling approach potentially very efficient. As in the PINN approach, both data and physics may be used in the loss function when training the network.
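To make the "data as well as physics" loss for such a surrogate concrete, the sketch below evaluates a batch of predicted 2D solution fields against reference simulations (data term) and against a finite-difference residual of a model PDE (physics term). Everything here is an illustrative assumption: the Poisson equation -Δu = f stands in for the actual CFD equations, the 5-point stencil replaces whatever discretization the surrogate uses, and the names are invented for this sketch.

```python
import numpy as np

def physics_data_loss(u_pred, u_ref, f, h, w_data=1.0, w_pde=1.0):
    """Loss for a surrogate mapping geometries to solution fields of shape
    (batch, ny, nx). Data term: mean-square mismatch with reference fields.
    Physics term (toy assumption): residual of the Poisson equation
    -lap(u) = f, with the Laplacian from a 5-point finite-difference stencil."""
    lap = (u_pred[:, 1:-1, 2:] + u_pred[:, 1:-1, :-2]
           + u_pred[:, 2:, 1:-1] + u_pred[:, :-2, 1:-1]
           - 4.0 * u_pred[:, 1:-1, 1:-1]) / h**2
    pde_residual = -lap - f[:, 1:-1, 1:-1]     # interior points only
    data_term = np.mean((u_pred - u_ref)**2)
    pde_term = np.mean(pde_residual**2)
    return w_data * data_term + w_pde * pde_term

n = 51
xs = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
u = (np.sin(np.pi * X) * np.sin(np.pi * Y))[None]  # one field in the batch
f = 2.0 * np.pi**2 * u                             # matching source term: -lap(u) = f
print(physics_data_loss(u, u, f, h=xs[1] - xs[0])) # small: data and PDE both satisfied
```

In the setting described in the talk, `u_pred` would come from a CNN applied to a batch of geometry representations, so that a single network is trained across many boundary value problems; the loss itself only sees the predicted and reference fields.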
