ML Session: Neural networks with physical constraints, domain decomposition-based training strategies, and model order reduction

January 11, 2023, 2:00 p.m. (CET)

We are happy to announce the next presentation in the ML-Session series:

Prof. Dr. Alexander Heinlein (TU Delft) will give a lecture IN PERSON (no Webex) on Wednesday, 11 January 2023, at 2 p.m. in PWR 57, room 8.122, on "Neural networks with physical constraints, domain decomposition-based training strategies, and model order reduction".

Abstract: Scientific machine learning (SciML) is a rapidly evolving field of research that combines techniques from scientific computing and machine learning. One major branch of SciML is the approximation of the solutions of partial differential equations (PDEs) using machine learning models and, in particular, neural networks. The network models can be trained in a data-driven or physics-informed way, that is, using reference data (from simulations or measurements) or a loss function based on the PDE, respectively. In this talk, two approaches for approximating the solutions of PDEs using neural networks are discussed: physics-informed neural networks (PINNs) and surrogate models based on convolutional neural networks (CNNs).

In PINNs, simple feedforward neural networks are employed to discretize the PDEs, and a single network is trained to approximate the solution of one specific boundary value problem. The loss function may include a combination of reference data and the residual of the PDE. Challenging applications, such as multiscale problems, require neural networks with high capacity, and training the models is often not robust and may require large iteration counts. Therefore, domain decomposition-based training strategies that improve training performance, based on the finite basis physics-informed neural network (FBPINN) approach, will be discussed.
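To make the PINN loss described above concrete, here is a minimal numpy sketch for the 1D Poisson problem -u'' = f on [0, 1] with homogeneous Dirichlet boundary conditions. The network weights, the collocation points, and the finite-difference approximation of u'' are all illustrative assumptions: in practice the second derivative is computed by automatic differentiation and the weights are optimized, e.g. with Adam or L-BFGS, neither of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny feedforward network u_theta(x): 1 -> 16 -> 1 with tanh activation
# (random, untrained weights; purely illustrative).
W1, b1 = rng.normal(size=(16, 1)) * 0.5, np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)) * 0.5, np.zeros(1)

def u(x):
    """Network approximation of the PDE solution at points x (shape (n,))."""
    h = np.tanh(W1 @ x[None, :] + b1[:, None])   # hidden layer, shape (16, n)
    return (W2 @ h + b2[:, None]).ravel()        # output, shape (n,)

def pinn_loss(x_int, x_bc, f, g):
    """Physics-informed loss: mean squared PDE residual of -u'' = f at interior
    collocation points, plus mean squared boundary misfit u - g.
    Here u'' is approximated by central finite differences; a real PINN
    implementation would use automatic differentiation instead."""
    eps = 1e-3
    u_xx = (u(x_int + eps) - 2 * u(x_int) + u(x_int - eps)) / eps**2
    residual = -u_xx - f(x_int)                  # PDE residual
    bc_misfit = u(x_bc) - g(x_bc)                # boundary-condition misfit
    return np.mean(residual**2) + np.mean(bc_misfit**2)

# Collocation setup for -u'' = pi^2 sin(pi x), u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
x_int = np.linspace(0.05, 0.95, 19)              # interior collocation points
x_bc = np.array([0.0, 1.0])                      # boundary points
loss = pinn_loss(x_int, x_bc,
                 f=lambda x: np.pi**2 * np.sin(np.pi * x),
                 g=lambda x: np.zeros_like(x))
print(loss)
```

Training would repeatedly evaluate this loss and update the weights by gradient descent; the FBPINN idea discussed in the talk additionally decomposes the domain and assigns a small network to each overlapping subdomain, which tends to ease optimization for multiscale problems.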

In the second part of the talk, surrogate models for computational fluid dynamics (CFD) simulations based on CNNs are discussed. In particular, the network is trained to approximate a solution operator, taking a representation of the geometry as input and the solution field(s) as output. In contrast to the classical PINN approach and similar to other operator learning approaches, a single network is therefore trained to approximate a variety of boundary value problems. This makes the surrogate modeling approach potentially very efficient. As in the PINN approach, data as well as physics may be used in the loss function for training the network.
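The operator-learning idea in this second part can be sketched in a few lines of numpy: the input is a geometry representation (here, a hypothetical binary obstacle mask on a grid), and a convolutional map produces a field over the domain. The single random 3x3 kernel stands in for a full CNN (typically many layers, often in an encoder-decoder arrangement) that would be trained on CFD reference data and/or a physics-based loss; everything below is an illustrative assumption, not the architecture from the talk.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation of input x with kernel k -- the basic
    operation of a convolutional layer, written out explicitly."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# Geometry input: a 16x16 binary mask of a flow domain with a square obstacle
# (1 = fluid cell, 0 = solid cell). This is the "representation of the
# geometry" that the surrogate takes as input.
geometry = np.ones((16, 16))
geometry[6:10, 6:10] = 0.0

# One convolutional layer with a random kernel; in a trained surrogate the
# kernel weights would be fitted to simulation data and/or a physics loss.
rng = np.random.default_rng(1)
kernel = rng.normal(size=(3, 3)) * 0.1
field = np.tanh(conv2d(geometry, kernel))    # predicted solution field

print(field.shape)  # (14, 14): one field value per interior 3x3 patch
```

Because the same kernel weights apply to any input mask, one trained network maps many different geometries to their solution fields, which is what distinguishes this surrogate (operator-learning) setting from the classical PINN setting, where one network is trained per boundary value problem.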
