MOR Seminar: Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression (Behzad Azmi)

Since 2009, this seminar has served as a general platform for talks and exchange in the field of surrogate modelling, in particular Model Order Reduction (MOR), as well as novel data-based techniques in simulation science. Both methodological and application-oriented presentations highlight the various aspects and the relevance of surrogate modelling in mathematics, technical mechanics, materials science, control theory, and other fields. The seminar addresses university members as well as external participants from science and industry. It is organized by four research groups and represents an activity of the SimTech Cluster of Excellence.

Date: November 16, 2023
Time: 1:00 p.m. (CET)
Meeting mode: in person
Venue: PWR5a, room 0.009
Topic: Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression
Speaker: Behzad Azmi (Universität Konstanz)

Abstract: In this talk, our objective is to approximate the solution of Hamilton-Jacobi-Bellman (HJB) equations associated with a class of optimal control problems using machine learning techniques. As is known from optimal control theory, the solution of an HJB equation provides the optimal value of the associated optimal control problem for every choice of initial data, and its gradient is used to formulate the optimal feedback control policy. Due to the so-called "curse of dimensionality," directly solving HJB equations becomes numerically intractable for high-dimensional optimal control problems.
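For orientation, consider a standard infinite-horizon formulation; this is an illustrative setting and not necessarily the exact problem class treated in the talk. For dynamics $\dot{x}(t) = f(x(t), u(t))$ and running cost $\ell(x, u)$, the value function $V$ satisfies the stationary HJB equation

$$\min_{u} \big\{ \ell(x, u) + \nabla V(x)^\top f(x, u) \big\} = 0,$$

and the optimal feedback law is recovered from the gradient of $V$ as

$$u^*(x) = \operatorname{arg\,min}_{u} \big\{ \ell(x, u) + \nabla V(x)^\top f(x, u) \big\}.$$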

We present a data-driven approach for approximating the solution of HJB equations for general nonlinear problems and for computing the related optimal feedback controls. The approach leverages the control-theoretic connection between the HJB equation and the first-order optimality conditions given by Pontryagin's Maximum Principle. It rests on three key elements: generating a random dataset of state-value pairs, approximating the value function sparsely in an orthogonal polynomial basis, and recovering the sparse representation with a weighted-LASSO $\ell_1$-minimization decoder. We also provide numerical experiments demonstrating that enriching the dataset with gradient information reduces the required number of training samples, and that sparse polynomial regression consistently yields a feedback law of lower complexity.
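To make the regression step concrete, here is a minimal Python sketch under stated assumptions: a synthetic two-dimensional value function standing in for PMP-generated data, a tensorized Legendre dictionary of total degree at most 6, degree-dependent penalty weights, and a plain ISTA iteration as the weighted-LASSO decoder. All of these choices are illustrative and not taken from the talk.

# Gradient-augmented sparse polynomial regression: a minimal sketch.
# Assumptions (illustrative, not from the talk): synthetic value function
# V(x) = x1^2 + 0.5*x2^2 standing in for PMP-generated data, a tensorized
# Legendre basis of total degree <= 6, and an ISTA weighted-LASSO solver.
import numpy as np
from numpy.polynomial.legendre import legval, legder

rng = np.random.default_rng(0)
d, deg, n = 2, 6, 40  # state dimension, max total degree, number of samples

# Multi-indices of the tensorized basis with total degree <= deg
idx = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]

def leg1d(x, k, deriv=0):
    # Evaluate the k-th Legendre polynomial (or its derivative) at x
    c = np.zeros(k + 1)
    c[k] = 1.0
    if deriv:
        c = legder(c, deriv)
    return legval(x, c)

def dictionary(X):
    # Value rows and the two partial-derivative rows of the basis
    A  = np.stack([leg1d(X[:, 0], i) * leg1d(X[:, 1], j) for i, j in idx], axis=1)
    G1 = np.stack([leg1d(X[:, 0], i, 1) * leg1d(X[:, 1], j) for i, j in idx], axis=1)
    G2 = np.stack([leg1d(X[:, 0], i) * leg1d(X[:, 1], j, 1) for i, j in idx], axis=1)
    return A, G1, G2

# Synthetic dataset: values V(x) and gradients of V at random states
X = rng.uniform(-1.0, 1.0, size=(n, d))
V = X[:, 0]**2 + 0.5 * X[:, 1]**2
dV = np.column_stack([2.0 * X[:, 0], X[:, 1]])

A, G1, G2 = dictionary(X)
# Gradient augmentation: stack value and gradient equations into one system
M = np.vstack([A, G1, G2])
b = np.concatenate([V, dV[:, 0], dV[:, 1]])

# Weighted LASSO via ISTA: min_c 0.5*||M c - b||^2 + lam * sum_k w_k |c_k|
w = np.array([1.0 + i + j for i, j in idx])  # heavier penalty on high degrees
lam = 1e-3
L = np.linalg.norm(M, 2)**2                  # Lipschitz constant of the gradient
c = np.zeros(len(idx))
for _ in range(5000):
    z = c - M.T @ (M @ c - b) / L
    c = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft threshold

print("nonzero coefficients:", int(np.sum(np.abs(c) > 1e-6)), "of", len(idx))

The gradient augmentation appears in the stacked system: each sampled state contributes one value equation and $d$ gradient equations, which is the mechanism by which gradient information reduces the number of state samples needed.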
