Bayesian Machine Learning by Egor Kraev

In classical machine learning, one searches for the single ‘best’ vector of model parameters to fit the data, which risks overfitting if one is not careful and requires extra effort to understand model uncertainty. In contrast, in Bayesian machine learning one considers the whole distribution of parameter vectors compatible with the data observed so far, giving one a firm handle on model uncertainty and allowing one to gently inject prior knowledge.
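As a minimal illustration of the contrast, here is a sketch of Bayesian linear regression with a Gaussian prior and known noise level (all data, prior precision, and noise settings below are illustrative assumptions, not part of the original abstract). Instead of a single weight vector, we obtain a full Gaussian posterior over the weights, so predictive uncertainty comes out directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2*x + noise (purely illustrative)
X = rng.uniform(-1, 1, size=(20, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.3, size=20)
Phi = np.hstack([np.ones((20, 1)), X])  # design matrix with a bias column

alpha = 1.0           # prior precision on the weights (injects prior knowledge)
beta = 1.0 / 0.3**2   # noise precision, assumed known here

# Conjugate posterior over the weights is Gaussian: N(mean, cov)
cov = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)
mean = beta * cov @ Phi.T @ y

# A classical fit would keep only a point estimate of the weights;
# with the posterior, predictive uncertainty is available for free:
x_new = np.array([1.0, 0.5])                  # bias term + feature value
pred_mean = x_new @ mean
pred_var = 1.0 / beta + x_new @ cov @ x_new   # noise + parameter uncertainty
```

The second term of `pred_var` shrinks as more data arrives, which is exactly the "handle on model uncertainty" the abstract refers to; far from the data, it grows, flagging predictions we should trust less.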

This session will describe the benefits and drawbacks of the Bayesian approach, and show how to use it in practice.
