Combining First Principles and Neural Network Models for Interpretable, High-Precision Multi-Step Predictions
The motion of vehicles such as vessels or drones can be predicted with first-principles models that formalize the underlying physics as differential equations. These models are limited when not all physical parameters are known, for example because they stem from quantities that are hard, if not impossible, to measure, such as hydrodynamic mass, or from external disturbances. In such cases, powerful machine learning models, e.g. deep neural networks, may improve predictions, but deep neural networks lack the interpretability attributed to physical models, which can be used to understand critical system states, e.g. problematic oscillations. To combine high-precision predictions with high interpretability, we target (O1) hybrid models combining differential equations and deep neural networks, (O2) formalizations of the notion of "interpretability", (O3) methods that operationalize interpretability in hybrid models, and (O4) transfer learning as a quantitatively assessable use case that exploits insights from interpretability modeling in order to transfer prediction models from one vehicle to the next.
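To illustrate the idea behind objective O1, the following is a minimal sketch (not the project's actual model) of a hybrid multi-step predictor: a first-principles Euler step of a known differential equation, here a damped oscillator chosen purely as an example, combined with a small neural network that predicts a residual correction. The network weights are random placeholders; in practice they would be trained on measured trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-principles part: damped harmonic oscillator x'' = -k*x - c*x'
# (illustrative stand-in for a vehicle's physics model).
K, C, DT = 1.0, 0.1, 0.05

def physics_step(state):
    """One explicit-Euler step of the known differential equation."""
    x, v = state
    return np.array([x + DT * v, v + DT * (-K * x - C * v)])

# Learned part: a tiny two-layer network predicting a residual correction
# for effects the physics model misses (e.g. unmodeled disturbances).
# Weights are random here; in a real system they would be fitted to data.
W1 = rng.normal(scale=0.1, size=(8, 2))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(2, 8))
b2 = np.zeros(2)

def nn_residual(state):
    h = np.tanh(W1 @ state + b1)
    return W2 @ h + b2

def hybrid_step(state):
    """Hybrid model: physics prediction plus learned correction."""
    return physics_step(state) + nn_residual(state)

def rollout(state, n_steps):
    """Multi-step prediction by iterating the hybrid one-step model."""
    traj = [state]
    for _ in range(n_steps):
        traj.append(hybrid_step(traj[-1]))
    return np.array(traj)

traj = rollout(np.array([1.0, 0.0]), 100)
print(traj.shape)  # (101, 2): 100 predicted steps plus the initial state
```

Because the physics term carries the dominant dynamics, the learned residual stays small and the model remains inspectable: one can compare the physics prediction and the correction at every step, which is one route toward the interpretability targeted in O2 and O3.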
| Project Name | Combining First Principles and Neural Network Models for Interpretable, High-Precision Multi-Step Predictions (InMotion) |
| --- | --- |
| Project Duration | January 2021 - June 2024 |
| Project Leader | Steffen Staab |
| Project Members | Daniel Frank, PhD Researcher; Alex Baier, PhD Researcher |