arxivst stuff from arxiv that you should probably bookmark

On Generalization and Regularization in Deep Learning

Abstract · Apr 5, 2017 08:48

stat-ml cs-lg math-st stat-th

Arxiv Abstract

  • Pirmin Lemberger

Why do large neural networks generalize so well on complex tasks such as image classification or speech recognition? What exactly is the role of regularization for them? These are arguably among the most important open questions in machine learning today. In a recent and thought-provoking paper [C. Zhang et al.] several authors performed a number of numerical experiments that hint at the need for novel theoretical concepts to account for this phenomenon. The paper stirred quite a lot of excitement in the machine learning community, but at the same time it created some confusion, as discussions on OpenReview.net testify. The aim of this pedagogical paper is to make this debate accessible to a wider audience of data scientists without advanced theoretical knowledge in statistical learning. The focus here is on explicit mathematical definitions and on a discussion of relevant concepts, not on proofs, for which we provide references.

Read the paper (pdf) »