Combining first principles and neural network models for interpretable, high-precision multi-step predictions (InMotion)

PN 4-7


Project description

The motion of vehicles, such as vessels or drones, may be predicted with first-principles models that formalize the underlying physics in differential equations. These models are limited when not all physical parameters can be known, because some parameters stem from quantities that are hard, if not impossible, to measure, such as hydrodynamic mass, or from external disturbances. In such cases, powerful machine learning models, e.g. deep neural networks, may improve predictions, but deep neural networks lack the interpretability that is attributed to physical models and that can be used to understand critical system states, e.g. problematic oscillations. In order to combine high-precision predictions with high interpretability, we target (Objective O1) hybrid models combining differential equations and deep neural networks, (O2) formalizations of the notion of “interpretability”, (O3) methods that operationalize interpretability in hybrid models, and (O4) transfer learning as a quantitatively assessable case that uses insights from interpretability modeling in order to transfer prediction models from one vehicle to the next.
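As a rough illustration of the hybrid-model idea (O1), the sketch below combines a first-principles step with a learned correction. It is not the project's actual model: the damped-oscillator dynamics, the Euler integrator, and the fixed-weight residual network are all hypothetical stand-ins chosen only to show how physics-based and neural components can be composed in a multi-step predictor.

```python
import numpy as np

def physics_step(state, dt=0.01, k=1.0, c=0.1):
    """First-principles part: explicit Euler step for x'' = -k*x - c*x'.
    (Hypothetical dynamics, chosen only for illustration.)"""
    x, v = state
    return np.array([x + dt * v, v + dt * (-k * x - c * v)])

class ResidualMLP:
    """Tiny fixed-weight MLP standing in for a trained residual model
    that would capture unmodeled effects (e.g. disturbances)."""
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(2, 8))
        self.b1 = np.zeros(8)
        self.W2 = rng.normal(scale=0.1, size=(8, 2))
        self.b2 = np.zeros(2)

    def __call__(self, state):
        h = np.tanh(state @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

def hybrid_multistep(state, residual, steps=100, dt=0.01):
    """Multi-step prediction: each step is the physics step plus a
    small learned correction, rolled out from the initial state."""
    trajectory = [state]
    for _ in range(steps):
        state = physics_step(state, dt) + dt * residual(state)
        trajectory.append(state)
    return np.array(trajectory)

# Roll out 100 steps from x=1, v=0.
traj = hybrid_multistep(np.array([1.0, 0.0]), ResidualMLP())
```

In a real hybrid model the residual network would be trained on measured trajectories, and keeping the physics term explicit is what preserves a foothold for interpretability and for transferring the model to another vehicle (O4).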

Project title: Combining first principles and neural network models for interpretable, high-precision multi-step predictions (InMotion)
Project leaders: Steffen Staab (Daniel Weiskopf, Carsten Scherer)
Project staff: Daniel Frank, doctoral researcher
Project duration: January 2021 - June 2024
Project number: PN 4-7

Publications PN 4-7

  1. 2023

    1. A. Baier, D. Aspandi Latif, and S. Staab, “Supplements for ‘ReLiNet: Stable and Explainable Multistep Prediction with Recurrent Linear Parameter Varying Networks’,” 2023. doi: 10.18419/darus-3457.