Junior Research Group Leader Paul-Christian Bürkner has acquired DFG funding for his project on “Intuitive Joint Priors for Bayesian Multilevel Models”. The project is being carried out by PhD Researcher Javier Aguilar.
The DFG’s decision to fund the project was reportedly an easy one, as the proposal was judged to be of very high quality. Incorporating expert information via prior distributions is already very challenging even for relatively simple probabilistic models. Paul-Christian Bürkner is determined to extend this line of work to much more complex models, with results expected to have widespread impact.
“I’m very happy to have acquired this DFG grant, which enables me and my team to make significant contributions in one of the key areas of probabilistic modelling”, says Paul-Christian Bürkner.
One of the key aspects of probabilistic modelling is the incorporation of expert knowledge via prior distributions. This is already very challenging for simple models and becomes even more difficult as model complexity increases. In this project, we will fill this gap by developing prior distributions for high-dimensional probabilistic models that are both flexible and intuitive for the user.
Abstract: Regression models are ubiquitous in the quantitative sciences, making up a large part of all statistical analyses performed on data. In the quantitative sciences, data often contain multilevel structure, for example, because of natural groupings of individuals or repeated measurements of the same individuals. Multilevel models (MLMs) are designed specifically to account for the nested structure in multilevel data and are a widely applied class of regression models. From a Bayesian perspective, the widespread success of MLMs can be explained by the fact that they impose joint priors over a set of parameters with shared hyperparameters, rather than separate independent priors for each parameter. However, in almost all state-of-the-art approaches, different additive regression terms in MLMs, corresponding to different parameter sets, still receive mutually independent priors. As more and more terms are added to the model while the number of observations remains constant, such models will overfit the data. This is highly problematic, as it leads to unreliable or uninterpretable estimates, bad out-of-sample predictions, and inflated Type I error rates. The primary objective of our project is thus to develop, evaluate, implement, and apply intuitive joint priors for Bayesian MLMs. We hypothesize that our developed priors will enable the reliable and interpretable estimation of much more complex Bayesian MLMs than was previously possible.
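The benefit of joint priors with shared hyperparameters that the abstract describes can be illustrated with a minimal NumPy sketch (a hypothetical toy example, not code from the project): group-level effects drawn from a common distribution with a shared scale hyperparameter lead to partially pooled estimates that are shrunk toward the overall mean, in contrast to fully independent per-group estimates. All variable names and the simulation setup below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multilevel setting: 8 groups, 5 observations per group (assumed values).
n_groups, n_obs = 8, 5
mu, tau, sigma = 0.0, 1.0, 2.0  # grand mean, shared group-level scale, noise sd

# Group effects share the hyperparameters (mu, tau) -- the "joint prior" idea.
theta = rng.normal(mu, tau, size=n_groups)
y = theta[:, None] + rng.normal(0.0, sigma, size=(n_groups, n_obs))

# No pooling: each group estimated independently from its own data.
theta_no_pool = y.mean(axis=1)

# Partial pooling: posterior mean of each group effect given the shared
# hyperparameters (known here for simplicity); estimates shrink toward mu.
shrinkage = tau**2 / (tau**2 + sigma**2 / n_obs)
theta_partial = mu + shrinkage * (theta_no_pool - mu)

# Shrinkage reduces the spread of the estimates across groups.
print(theta_no_pool.std(), theta_partial.std())
```

In a real analysis the hyperparameters would themselves receive priors and be estimated jointly with the group effects; the sketch fixes them only to keep the shrinkage formula in closed form.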