New SimTech publication on benign overfitting is accepted at NeurIPS

October 18, 2023 / ts

[Picture: JJ Ying on Unsplash]

A paper from a collaboration between the Universities of Stuttgart and Tübingen has been accepted at NeurIPS, the world’s largest machine learning conference. The paper by shared first authors Moritz Haas (University of Tübingen) and Dr. David Holzmüller (University of Stuttgart, SimTech) as well as their supervisors Prof. Dr. Ulrike von Luxburg (University of Tübingen) and Prof. Dr. Ingo Steinwart (University of Stuttgart) is titled “Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension”. The paper analyzes mathematically when certain neural networks and kernel methods can achieve good results despite overfitting on their training data. The researchers will present their work as a poster.

NeurIPS Conference

With over 3,000 accepted papers and an acceptance rate of 26%, the NeurIPS conference is probably the largest and most prestigious machine learning conference in the world. In 2023, it takes place in New Orleans from December 10 to 16. The conference consists of invited talks, demonstrations, symposia as well as oral and poster presentations of refereed papers. Along with the conference comes a professional exposition focusing on machine learning in practice, a series of tutorials, and topical workshops that provide a less formal setting for the exchange of ideas.

What is benign overfitting?

The paper studies regression, a ubiquitous task in machine learning, where a function is fitted to training data points. This is relevant, for example, for creating surrogate models or for predicting continuous quantities like house prices or energy demand. When the training data is corrupted with random noise, machine learning methods may overfit by learning not only the underlying function but also the noise. Empirical research has found that overfitting neural networks can nevertheless perform surprisingly well on new data. In contrast, theoretical research has predicted that the error caused by the noise does not vanish even with infinite training data when the input dimension is fixed, a property formally known as inconsistency. The Stuttgart-Tübingen collaboration unveiled two sides to the story: On the one hand, the inconsistency of standard methods holds in more cases than previously known. On the other hand, the researchers found that modifications to the kernel or to the activation function in neural networks can allow benign overfitting, or more specifically, consistency of interpolation with noisy data.
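To make the phenomenon concrete, the following minimal NumPy sketch (an illustration, not the paper's construction) fits noisy 1D data with "ridgeless" kernel regression using a Laplace kernel. The model interpolates the training data exactly, so its training error is essentially zero even though the labels are noisy; the test error against the true underlying function then measures how harmful the overfitting actually is. All names and parameter values here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_kernel(a, b, bandwidth=0.2):
    # Laplace (exponential) kernel: k(x, y) = exp(-|x - y| / bandwidth)
    return np.exp(-np.abs(a[:, None] - b[None, :]) / bandwidth)

# Noisy training data sampled from a smooth target function sin(2*pi*x)
n = 50
x_train = rng.uniform(0.0, 1.0, n)
y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.normal(size=n)

# "Ridgeless" kernel regression (no regularization): the fitted function
# passes exactly through every noisy training point, i.e. it overfits.
K = laplace_kernel(x_train, x_train)
alpha = np.linalg.solve(K, y_train)

def predict(x):
    return laplace_kernel(x, x_train) @ alpha

# Training error is numerically zero: the noise has been memorized ...
train_err = np.mean((predict(x_train) - y_train) ** 2)

# ... while the test error against the noise-free target stays positive.
x_test = np.linspace(0.0, 1.0, 500)
test_err = np.mean((predict(x_test) - np.sin(2 * np.pi * x_test)) ** 2)

print(f"train MSE: {train_err:.2e}, test MSE vs. true function: {test_err:.3f}")
```

Whether such an interpolating estimator is consistent, i.e. whether its test error can vanish as the amount of data grows, is exactly the question the paper answers: for standard kernels in fixed dimension it cannot, but suitably modified ("spiky") kernels can make overfitting benign.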

Moritz Haas*, David Holzmüller*, Ulrike von Luxburg, and Ingo Steinwart. Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension, arXiv:2305.14077, 2023. To appear at NeurIPS 2023.

 
