arxivst: stuff from arxiv that you should probably bookmark

Effects of the optimisation of the margin distribution on generalisation in deep architectures

Abstract · Apr 19, 2017 08:31

Tags: nmv · generalisation · maximisation · zhou · schapire · maximising · loss · variance · margin · cs-lg

Arxiv Abstract

  • Lech Szymanski
  • Brendan McCane
  • Wei Gao
  • Zhi-Hua Zhou

Despite being so vital to the success of Support Vector Machines, the principle of separating-margin maximisation is not used in deep learning. We show that minimising the margin variance, rather than maximising the margin, is more suitable for improving generalisation in deep architectures. We propose the Halfway loss function, which minimises the Normalised Margin Variance (NMV) at the output of a deep learning model, and evaluate its performance against the Softmax Cross-Entropy loss on the MNIST, smallNORB and CIFAR-10 datasets.
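To make the core idea concrete, here is a rough, purely illustrative sketch of a normalised-margin-variance penalty: compute each example's classification margin (correct-class score minus the best competing score), normalise the margins by their mean, and penalise their variance. The function name and the normalisation scheme are assumptions for illustration only; the paper's actual Halfway loss is defined in the PDF linked below.

```python
import numpy as np

def normalised_margin_variance(scores, labels):
    """Illustrative sketch (NOT the paper's Halfway loss): variance of
    per-example margins after normalising them by their mean."""
    n = scores.shape[0]
    correct = scores[np.arange(n), labels]
    # Best competing score: mask out the correct class, take the row max.
    masked = scores.copy()
    masked[np.arange(n), labels] = -np.inf
    margins = correct - masked.max(axis=1)
    # Normalise by the mean margin, then penalise the spread.
    norm = margins / (np.abs(margins.mean()) + 1e-8)
    return norm.var()
```

Under this sketch, a batch whose margins are all equal incurs zero penalty regardless of how large those margins are, which captures the abstract's point that the loss targets the *spread* of the margin distribution rather than its size.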

Read the paper (pdf) »