Argyris Lecture

Once a year, we award an Argyris Visiting Professorship to a leading figure in the field of simulation technology. With this award, we honor internationally renowned scientists from Germany and abroad who are outstanding representatives of their disciplines. A public Argyris Lecture at the end of the summer semester gives the visiting professor the opportunity to present his or her research to the general public.

2023 (II)

Prof. Inga Berre
Professor at the Department of Mathematics and director of the Center for Modeling of Coupled Subsurface Dynamics | University of Bergen

Process-structure interaction in geothermal reservoirs: modeling and simulation

Complex coupled processes occur in natural geothermal systems and result from injection and production of fluid in geothermal reservoirs. Fluid flow and heat transfer interact with mechanical deformation of the rock and with shear dilation, opening and propagation of fractures. Extension of fractures and changes in fracture aperture are in turn strongly coupled to flow and transport. This process-structure interaction occurs both in natural convection processes in deep geothermal systems and during the development and operational phases of a reservoir, when fluids are injected and produced. The coupled processes and their interaction with the fractured structure of the formation have a strong impact on the heat transfer in deep geothermal systems as well as on the outcome of fluid injection and production operations. In this talk, we discuss mathematical models and numerical approaches for coupled processes and process-structure interaction in geothermal reservoirs. Modeling examples will be discussed, considering both natural and engineered systems.


Honorary Argyris Lecture

2023 (I)

Prof. Daniel Tartakovsky
Professor in the Department of Energy Science and Engineering | Institute for Computational and Mathematical Engineering, and Bio-X | Stanford University

Use and Abuse of Machine Learning in Scientific Discovery

My talk focuses on the limitations and potential of deep learning in the context of science-based predictions of dynamic phenomena. In this context, neural networks (NNs) are often used as surrogates or emulators of partial differential equations (PDEs) that describe the dynamics of complex systems. The virtually negligible computational cost of such surrogates renders them an attractive tool for ensemble-based computation, which requires a large number of repeated PDE solves. Since the latter are also needed to generate sufficient data for NN training, the usefulness of NN-based surrogates hinges on the balance between the training cost and the computational gain stemming from their deployment. We rely on multi-fidelity simulations to reduce the cost of data generation for subsequent training of a deep convolutional NN (CNN) using transfer learning. High- and low-fidelity images are generated by solving PDEs on fine and coarse meshes, respectively. We use theoretical results for multilevel Monte Carlo to guide our choice of the numbers of images of each kind. We demonstrate the performance of this multi-fidelity training strategy on the problem of estimating the distribution of a quantity of interest, whose dynamics is governed by a system of nonlinear PDEs (parabolic PDEs of multi-phase flow in heterogeneous porous media) with uncertain/random parameters. Our numerical experiments demonstrate that a mixture of a comparatively large number of low-fidelity data and a smaller number of high-fidelity data provides an optimal balance of computational speed-up and prediction accuracy.
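
The multi-fidelity strategy described above can be illustrated with a small sketch: pretrain a CNN surrogate on many cheap coarse-mesh (low-fidelity) solutions, then fine-tune it on a few expensive fine-mesh (high-fidelity) solutions via transfer learning. The code below is a minimal illustration in PyTorch; the architecture, sample counts, and the random placeholder data are assumptions, not the setup used in the talk, and in practice the numbers of samples of each kind would be guided by the multilevel Monte Carlo estimates mentioned above.

```python
# Minimal sketch of multi-fidelity transfer learning for a CNN surrogate.
# Illustrative only: architecture, sample counts, and data are placeholders.
import torch
import torch.nn as nn

def make_cnn():
    # Image-to-image surrogate: input parameter field -> output field of interest.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# Placeholder data: many low-fidelity (coarse-mesh) samples, few high-fidelity ones.
x_lo, y_lo = torch.randn(512, 1, 32, 32), torch.randn(512, 1, 32, 32)
x_hi, y_hi = torch.randn(32, 1, 32, 32), torch.randn(32, 1, 32, 32)

cnn = make_cnn()
train(cnn, x_lo, y_lo, epochs=200, lr=1e-3)   # pretrain on abundant low-fidelity data

for layer in list(cnn.children())[:-1]:       # transfer learning: freeze early layers
    for p in layer.parameters():
        p.requires_grad = False
train(cnn, x_hi, y_hi, epochs=100, lr=1e-4)   # fine-tune on scarce high-fidelity data
```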

2022

Prof. Serkan Gugercin
Class of 1950 Professor of Mathematics | Deputy Director, Division of Computational Modeling and Data Analytics | Affiliated Faculty, Department of Mechanical Engineering | Virginia Polytechnic Institute and State University

Modeling dynamical systems from data: A systems-theoretic perspective

Dynamical systems are a principal tool in the modeling, prediction, and control of physical phenomena with applications ranging from structural health monitoring to electrical power network dynamics, from heat dissipation in complex microelectronic devices to vibration suppression in large wind turbines. Direct numerical simulation of these mathematical models may be the only possibility for accurate prediction or control of such complex phenomena. However, in many instances, a high-fidelity mathematical model describing the dynamics is not readily available. Instead, one has access to an abundant amount of input/output data via either experimental measurements or a black-box simulation. The goal of data-driven modeling is, then, to accurately model the underlying dynamics using input/output data only. In this talk, we will investigate various approaches to data-driven modeling of dynamical systems using systems-theoretic concepts. We will consider both frequency-domain and time-domain measurements of a dynamical system. In some instances we will have true experimental data, and in others we will have access to simulation data. We will illustrate these concepts in various examples ranging from structural dynamics to electrical power networks to microelectromechanical systems.
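
As one concrete (and deliberately simple) instance of fitting a dynamical model from time-domain data only, the sketch below performs a dynamic-mode-decomposition-style least-squares fit of a linear operator that maps each state snapshot to the next. This is a generic illustration of the data-driven idea, not necessarily one of the systems-theoretic methods presented in the talk, and the snapshot data are synthetic.

```python
# Minimal dynamic-mode-decomposition-style fit of a linear model x_{k+1} ≈ A x_k
# from snapshot data. Illustrative only; the data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.95, 0.10], [-0.10, 0.95]])   # hidden dynamics, unknown in practice
X = np.zeros((2, 200))
X[:, 0] = rng.standard_normal(2)
for k in range(199):
    X[:, k + 1] = A_true @ X[:, k]

X0, X1 = X[:, :-1], X[:, 1:]                       # snapshot pairs (x_k, x_{k+1})

# Least-squares fit A ≈ X1 * pinv(X0) via a (truncated) SVD of X0.
r = 2
U, s, Vt = np.linalg.svd(X0, full_matrices=False)
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
A_fit = X1 @ Vt.T @ np.diag(1.0 / s) @ U.T         # data-driven operator

print("recovered A:\n", A_fit)
print("eigenvalues:", np.linalg.eigvals(A_fit))    # match those of the hidden dynamics
```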

2021

Prof. Gábor Csányi
Professor of Molecular Modelling | Engineering Laboratory | University of Cambridge 

First principles molecular dynamics on a large scale

Over the past decade a revolution has taken place in how we do large scale molecular dynamics. While previously first principles accuracy was solely the purview of explicit, and very expensive, electronic structure methods such as density functional theory, the new approaches have allowed the extension of highly accurate, first principles simulations to the atomistic scale, where electrons are no longer treated explicitly and therefore hundreds of thousands of atoms can be simulated. These quantum mechanically accurate force fields and interatomic potentials are fitted to electronic structure data; the first such fits used techniques inspired by machine learning and artificial intelligence research: neural networks, kernel regression, etc. It is a quickly moving field, and - having learned key lessons about representation, symmetry and regularisation - there appears to be some semblance of convergence among the diverse methods, which now also include polynomial expansions carried to high dimension.
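
As a toy illustration of fitting a potential to reference energy data with kernel regression (one of the techniques mentioned above), the sketch below regresses configuration energies on a simple pair-distance histogram descriptor. Everything here is a deliberately minimal assumption made for illustration; production machine-learned potentials use far richer, symmetry-adapted descriptors, per-atom energy decompositions, and force data in the fit.

```python
# Toy kernel ridge regression of configuration energies on a pair-distance
# histogram descriptor. Illustrative only; real machine-learned potentials use
# symmetry-adapted descriptors, per-atom energies, and forces in the fit.
import numpy as np

rng = np.random.default_rng(1)

def random_chain(n=5, spacing=1.1, jitter=0.1):
    # Small perturbed chain of atoms, standing in for a training configuration.
    base = np.array([[i * spacing, 0.0, 0.0] for i in range(n)])
    return base + jitter * rng.standard_normal((n, 3))

def descriptor(pos, bins=np.linspace(0.5, 5.0, 24)):
    # Histogram of interatomic distances: invariant to rotation, translation, permutation.
    d = [np.linalg.norm(pos[i] - pos[j]) for i in range(len(pos)) for j in range(i + 1, len(pos))]
    return np.histogram(d, bins=bins)[0].astype(float)

def reference_energy(pos):
    # Stand-in for an electronic-structure energy (here a Lennard-Jones-like pair sum).
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            e += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    return e

configs = [random_chain() for _ in range(200)]
X = np.array([descriptor(c) for c in configs])
y = np.array([reference_energy(c) for c in configs])

# Gaussian kernel ridge regression on the descriptors.
gamma, lam = 0.05, 1e-8
K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)

test = random_chain()
k_test = np.exp(-gamma * ((X - descriptor(test)) ** 2).sum(-1))
print("predicted:", k_test @ alpha, "  reference:", reference_energy(test))
```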

2020

Prof. Michael Ortiz
Frank and Ora Lee Marble Professor of Aeronautics and Mechanical Engineering, Emeritus | California Institute of Technology

Model-Free Data-Driven Science: Cutting out the Middleman

We have developed a new computing paradigm, which we refer to as Data-Driven Computing, according to which calculations are carried out directly from experimental material data and pertinent kinematic constraints and conservation laws, such as compatibility and equilibrium, thus entirely bypassing the empirical material modeling step of conventional computing. Data-driven solvers seek to assign to each material point the state from a prespecified data set that is closest to satisfying the conservation laws. Equivalently, data-driven solvers aim to find the state satisfying the conservation laws that is closest to the data set. The resulting data-driven problem thus consists of the minimization of a distance function to the data set in phase space subject to constraints introduced by the conservation laws. We demonstrate the data-driven paradigm and investigate the performance of data-driven solvers by means of several examples of application, including statics and dynamics of nonlinear three-dimensional trusses, linear and nonlinear elasticity, dynamics and plasticity, as well as scattered data and stochastic behavior. In these tests, the data-driven solvers exhibit good convergence properties both with respect to the number of data points and with regard to local data assignment, including noisy material data sets containing outliers. The variational structure of the data-driven problem also renders it amenable to analysis. We find that the classical solutions are recovered as a special case of Data-Driven solutions. We identify conditions for convergence of Data-Driven solutions corresponding to sequences of approximating material data sets. Specialization to constant material data set sequences in turn establishes an appropriate notion of relaxation. We find that relaxation within the Data-Driven framework is fundamentally different from the classical relaxation of energy functions. For instance, we show that in the Data-Driven framework the relaxation of a bistable material leads to effective material data sets that are not graphs. I will finish my presentation with highlights of work in progress, including experimental material data mining and identification, material data generation through multiscale analysis, and fast search and data structure algorithms as a form of ansatz-free learning.
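
As a minimal, self-contained sketch of the distance-minimizing solver described above, consider the simplest non-trivial case: a 1D bar fixed at both ends, discretized into two elements with a force applied at the middle node, and a synthetic stress-strain data set. The iteration alternates between projecting the element states onto the compatibility and equilibrium constraints (two small linear solves) and reassigning each element the nearest point of the data set. The geometry, the data set, and all names below are illustrative assumptions in the spirit of the published scheme, not the speaker's implementation.

```python
# Data-driven solver sketch: alternate between (i) the closest states satisfying
# compatibility and equilibrium and (ii) the nearest points of a material data set,
# for a 1D bar fixed at both ends, two elements, force F at the middle node.
# Illustrative only; the data set and all parameters are synthetic.
import numpy as np

L, A, F, C = 1.0, 1.0, 1.0, 2.0            # element length, area, midpoint force, metric constant
w = A * L                                  # element volume (weight in the distance functional)
B = np.array([1.0 / L, -1.0 / L])          # compatibility: eps_e = B[e] * u (one free DOF u)

rng = np.random.default_rng(0)
eps_data = np.linspace(-2.0, 2.0, 401)     # synthetic (strain, stress) data set
sig_data = 2.0 * np.tanh(eps_data) + 0.02 * rng.standard_normal(eps_data.size)
data = np.column_stack([eps_data, sig_data])

def project_onto_constraints(assigned):
    """Closest states with eps_e = B[e]*u and node equilibrium sum_e w*B[e]*sig_e = F."""
    eps_star, sig_star = assigned[:, 0], assigned[:, 1]
    K = sum(w * C * B[e] ** 2 for e in range(2))                   # scalar "stiffness"
    u = sum(w * C * B[e] * eps_star[e] for e in range(2)) / K
    eta = (F - sum(w * B[e] * sig_star[e] for e in range(2))) / K  # Lagrange multiplier
    return np.column_stack([B * u, sig_star + C * B * eta]), u

def nearest_data_points(states):
    """For each element, the data point closest in the C-weighted phase-space distance."""
    idx = [np.argmin(C * (data[:, 0] - e) ** 2 + (data[:, 1] - s) ** 2 / C) for e, s in states]
    return data[idx]

assigned = data[rng.integers(0, len(data), size=2)]                # random initial assignment
for it in range(100):
    states, u = project_onto_constraints(assigned)
    new_assigned = nearest_data_points(states)
    if np.array_equal(new_assigned, assigned):
        break
    assigned = new_assigned

print("midpoint displacement u =", u)
print("element (strain, stress) states:\n", states)
```

Because the synthetic data set here is sampled from a single-valued stress-strain curve, the iteration approximately reproduces the classical solution, which illustrates the statement above that classical solutions are recovered as a special case.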

Prof. George Biros
University of Texas at Austin, USA

Towards direct numerical simulation of blood flow in microcirculation

Microcirculation (blood flow in submillimeter vessels) plays a key role in cardiovascular physiology. Numerical simulations can further the understanding of complex phenomena like transport, blood rheology, thrombosis, inflammation, and the mechanobiology of the vascular system. Standard viscous Navier-Stokes models can represent with good engineering accuracy the flow in large vessels. But at scales near the size of a red blood cell (about 10 microns) they are not as accurate, and more complex models are needed, for example viscoelastic fluids or direct numerical simulations that track the motion of individual red blood cells using fluid-structure interaction algorithms. In the first part of my talk, I will describe recent advances in the numerical simulation of such flows. I will review the literature and summarize the work of several groups. In the second part of my talk, I will give some details on integral-equation-based formulations and their scalability to large High-Performance Computing (HPC) clusters. In the third part of my talk, I will focus on the design of a deterministic lateral displacement (DLD) device for sorting normal and abnormal red blood cells (RBCs). A DLD device optimized for efficient cell sorting enables rapid medical diagnoses of several diseases such as malaria, since infected cells are stiffer than their healthy counterparts. I will present recent results that integrate computational fluid mechanics and machine learning for the efficient design of DLD devices.

Prof. Dr. Peter Knabner
University of Erlangen-Nürnberg

Micro-Macro Models for Reactive Flow and Transport Problems in Complex Media

In porous media and other complex media with different length scales, (periodic) homogenization has been successfully applied for several decades to arrive at macroscopic, upscaled models, which retain the microscopic information only by means of a decoupled computation of “effective” parameters on a reference cell. The derivation of Darcy’s law for flow in porous media is a prominent example. Numerical methods for this kind of macroscopic model have been intensively discussed and are in general considered to be favourable compared to a direct microscale computation. On the other hand, if the interplay of processes becomes too complex, e.g. the scale separation does not act in a proper way or the porous medium itself is evolving, the upscaled models obtained may be micro-macro models in the sense that the coupling of the macroscopic equations and the equations at the reference cell goes both ways: at each macroscopic point a reference cell is attached, the solution in the reference cell depends on the macroscopic solution (at that point), and the macroscopic solution depends on the microscopic solutions in the reference cells. At first glance such models seem to be numerically infeasible due to their enormous complexity (in d+d spatial variables). If, on the other hand, this barrier can be overcome, micro-macro models are no longer a burden but an opportunity, allowing more general interaction of processes (evolving porous media, multiphase flow, general chemical reactions, ...), where the microscopic processes “compute” the constitutive laws, which no longer need to be assumed (similar to the concept of heterogeneous multiscale methods). We will discuss various examples and in particular numerical approaches to keep the numerical complexity in the range of purely macroscopic models.
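
Schematically, and only as an illustration of the two-way coupling described above (the notation is generic and not tied to a specific model from the talk), a micro-macro model for a transported species couples a macroscopic balance law at every point x to a cell problem on a reference cell attached to that point:

```latex
% Generic two-scale (micro-macro) structure: a macroscopic balance law with an
% effective tensor computed, at every macroscopic point, from a cell problem.
\begin{align*}
  &\partial_t \bar{c}(x,t) - \nabla_x \cdot \bigl( D^{*}(x,t)\, \nabla_x \bar{c}(x,t) \bigr) = \bar{f}(\bar{c}),
  && x \in \Omega,\\
  &D^{*}_{ij}(x,t) = \frac{1}{|Y|} \int_{Y_{l}(x,t)} \bigl( \delta_{ij} + \partial_{y_j} w_i(y;x,t) \bigr)\, \mathrm{d}y,\\
  &-\Delta_y w_i = 0 \ \text{in } Y_{l}(x,t), \qquad
   (\nabla_y w_i + e_i)\cdot n = 0 \ \text{on } \partial Y_{s}(x,t), \qquad
   w_i \ \text{$Y$-periodic,}
\end{align*}
```

where Y_l denotes the local pore (liquid) part of the reference cell and Y_s the solid part. If the cell geometry evolves with the local macroscopic solution (e.g. through dissolution or precipitation driven by the concentration), the loop is closed from macro back to micro; this is the two-way coupling and the (d+d)-variable character referred to above.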

Prof. Ronaldo Borja
Stanford University, USA

Multiscale Poromechanics: Fluid flow, solid deformation, and anisotropic thermoplasticity

Natural geomaterials often exhibit pore size distributions with two dominant porosity scales. Examples include fractured rocks, where the dominant porosities are those of the fractures and rock matrix, and aggregated soils, where the dominant porosities are those of the micropores and macropores. I will present a framework for this type of material that covers both steady-state and transient fluid flow responses. The framework relies on a thermodynamically consistent effective stress previously developed for porous media with two dominant porosity scales. I will show that this effective stress is equivalent to the weighted sum of the individual effective stresses in the micropores and macropores, with the weighting done according to the pore fractions. Apart from this feature, some geomaterials such as shale exhibit pronounced anisotropy in their hydromechanical behavior due to the presence of distinct bedding planes. In this talk I will also present a thermo-plastic framework for transversely isotropic materials incorporating anisotropy and thermal effects in both elastic and plastic responses. Computational stress-point simulations under isothermal and adiabatic conditions reveal the importance of anisotropy and thermal effects on the inception of a deformation band. I will show that anisotropy promotes the formation of a dilation band across a wide range of bedding plane orientations relative to the direction of loading.
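
Spelled out as a formula (the symbols here are introduced only for illustration and are not taken from the talk), the statement above about the double-porosity effective stress reads: with pore fractions of the macropores and micropores, the effective stress is the pore-fraction-weighted sum of the individual effective stresses,

```latex
% Schematic rendering of the statement in the abstract: effective stress as a
% pore-fraction-weighted sum of macropore and micropore effective stresses.
\sigma' \;=\; \psi^{M}\,\sigma'^{M} \;+\; \psi^{m}\,\sigma'^{m},
\qquad \psi^{M} + \psi^{m} = 1 ,
```

where the superscripts M and m refer to the macropore and micropore scales, respectively.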

Dr. Dorival M. Pedroso
University of Queensland, Australien

Consistent Implementation of FEM Solutions for the Theory of Porous Media

The Theory of Porous Media (TPM) is a rational and convenient mathematical framework to represent the macroscopic behaviour of porous media, including interactions between multiple constituents. The resulting system of equations is usually known as the hydro-mechanical problem and, with few exceptions, possesses no analytical solutions; however, many successful applications take advantage of numerical solutions based on the finite element method (FEM). A way to update primary and state variables in the FEM is to use implicit schemes that are unconditionally stable. These schemes nonetheless require a number of (consistent) derivatives to achieve (quadratic) convergence when using Newton’s method (a minimal sketch of this point is given after the list below). Furthermore, all state variables must be initialised with consistent initial conditions. Therefore, overall consistency of the numerical solver must be maintained in order to obtain accurate results within feasible computing times. An additional challenge during the solution of the multiconstituent flow problem in porous media is the treatment of unilateral boundary conditions that arise when liquid may escape from the porous domain through a region prone to changes in saturation. This kind of boundary condition greatly increases the difficulty, especially in coupled simulations, and hence requires a proper method of treatment. This presentation aims to clarify the aforementioned challenges and to suggest a couple of previously published algorithms for their solution. Focus will be given to:
(a) the innovation around the derivation of all consistent operators and correct setting up of initial conditions;
(b) new method to handle unilateral boundary conditions;
(c) the concept of references and a hysteretic liquid retention model derived from it;
(d) computer implementation aspects and the convenient use of the Go language to develop a general purpose FE solver with parallel computing capabilities (Gofem).
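
To illustrate the consistency requirement mentioned in the abstract above, the sketch below contrasts a Newton iteration using a consistent (exact) Jacobian with one using a perturbed Jacobian, on a small generic nonlinear residual. The problem and all names are illustrative assumptions; this is not code taken from Gofem.

```python
# Newton's method on a small nonlinear residual R(u) = 0, comparing a consistent
# (exact) Jacobian with an inconsistent, perturbed one. The consistent tangent
# gives quadratic convergence; the perturbed tangent only linear convergence.
# Illustrative only; not code from Gofem.
import numpy as np

def residual(u):
    return np.array([u[0] ** 2 + u[1] ** 2 - 4.0,
                     u[0] * u[1] - 1.0])

def jacobian(u):                      # consistent: exact derivative of the residual
    return np.array([[2.0 * u[0], 2.0 * u[1]],
                     [u[1], u[0]]])

def newton(u0, jac, tol=1e-12, maxit=50):
    u = u0.copy()
    for it in range(maxit):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            return u, it
        u -= np.linalg.solve(jac(u), r)
    return u, maxit

u_start = np.array([2.0, 0.6])
_, it_consistent = newton(u_start, jacobian)
_, it_perturbed = newton(u_start, lambda u: jacobian(u) + 0.2 * np.eye(2))
print("iterations with consistent tangent:", it_consistent)
print("iterations with perturbed tangent: ", it_perturbed)
```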

Prof. Dr. Michael Celia
Princeton University

Modeling Approaches for CO2 Sequestration in Conventional and Unconventional Reservoirs

Carbon capture and sequestration (CCS) is the only currently available technology that can significantly reduce atmospheric carbon emissions while allowing continued use of fossil fuels for electric power and industrial production.  CCS involves capturing the CO2 before it is emitted to the atmosphere, and injecting it into deep subsurface formations, thereby keeping it out of the atmosphere for centuries to millennia or longer.  While conventional, high-permeability formations have traditionally been considered as injection targets, recent proposals suggest possible injection of captured CO2 into unconventional reservoirs with low permeability, specifically depleted shale-gas reservoirs.  Analysis of injection into both types of formations involves computational challenges, in part because of the need for comprehensive environmental risk assessments and associated analysis of possible leakage scenarios.  A range of computational models can be developed to answer the most important practical questions associated with both of these injection options.  In this presentation, different modeling approaches will be discussed and important practical questions related to injection of CO2 into both conventional and unconventional formations will be addressed. 

Prof. Dr. René de Borst
University of Glasgow, UK

Multi-scales, Multi-physics, and Evolving Discontinuities in Computational Mechanics

Multi-scale methods are quickly becoming a new paradigm in many branches of science, including simulation-based engineering, where multi-scale approaches can further our understanding of the behaviour of man-made and natural materials. In multi-scale analyses a greater resolution is sought at ever smaller scales. In this manner it is possible to incorporate the physics more properly and, therefore, to construct models that are more reliable and have a greater range of validity at the macroscale.

When resolving smaller and smaller scales, discontinuities become more and more prominent. In addition to cracks, faults and shear bands observed at the macroscopic scale, discontinuities like grain boundaries, solid-solid boundaries such as in phase transformations, and discrete dislocation movement now also come into consideration.

In this lecture, we will start with a concise classification of multi-scale computational methods. Next, we will focus on evolving discontinuities that arise at different scales, and discuss methods that can describe them. Examples will be given at the macroscopic scale, the mesoscopic scale, and within a multi-scale framework. Also, examples will be given of multi-scale analyses in which the coupling of evolving discontinuities with non-mechanical fields is considered.
