
Robustly Learning a Gaussian: Getting Optimal Error, Efficiently

Abstract · Apr 12, 2017 17:55

Tags: usc · csail · career · nsf · research fellowship · award · faculty · sloan · mit · cs.DS · cs.IT · cs.LG · math.IT · math.ST · stat.ML · stat.TH

arXiv Abstract

  • Ilias Diakonikolas
  • Gautam Kamath
  • Daniel M. Kane
  • Jerry Li
  • Ankur Moitra
  • Alistair Stewart

We study the fundamental problem of learning the parameters of a high-dimensional Gaussian in the presence of noise, where an $\varepsilon$-fraction of our samples were chosen by an adversary. We give robust estimators that achieve estimation error $O(\varepsilon)$ in the total variation distance, which is optimal up to a universal constant that is independent of the dimension. In the case where just the mean is unknown, our robustness guarantee is optimal up to a factor of $\sqrt{2}$ and the running time is polynomial in $d$ and $1/\varepsilon$. When both the mean and covariance are unknown, the running time is polynomial in $d$ and quasipolynomial in $1/\varepsilon$. Moreover, all of our algorithms require only a polynomial number of samples. Our work shows that the same sorts of error guarantees that were established over fifty years ago in the one-dimensional setting can also be achieved by efficient algorithms in high-dimensional settings.
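For readers who want to poke at the setting, here is a minimal NumPy sketch of the spectral-filtering idea behind this line of work: an adversary that shifts the empirical mean by corrupting an $\varepsilon$-fraction of the samples must also inflate the empirical covariance in some direction, so one can repeatedly find the direction of largest variance and discard points with extreme projections onto it. This is an illustrative toy, not the paper's algorithm; the `corrupt` adversary, the eigenvalue threshold, and the quantile cutoff are all assumptions chosen for the demo.

```python
import numpy as np

def corrupt(samples, eps, magnitude=10.0):
    """Replace an eps-fraction of samples with a single tight cluster of
    outliers at distance `magnitude` from the true mean. This is one very
    simple adversary; the paper's model allows arbitrary corruptions."""
    n, d = samples.shape
    out = samples.copy()
    out[: int(eps * n)] = magnitude / np.sqrt(d)  # scalar broadcast: shift has l2 norm `magnitude`
    return out

def filtered_mean(X, eps, max_iters=20):
    """Sketch of spectral filtering for robust mean estimation, assuming the
    covariance is known to be the identity: while the empirical covariance
    has an eigenvalue much larger than 1, outliers must be inflating the
    variance in that direction, so drop the points with the most extreme
    projections onto the top eigenvector. Constants are illustrative only."""
    X = X.copy()
    for _ in range(max_iters):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        if eigvals[-1] <= 1.0 + 10.0 * eps:      # covariance looks clean: stop
            break
        v = eigvecs[:, -1]                       # direction of largest variance
        proj = np.abs((X - mu) @ v)
        X = X[proj <= np.quantile(proj, 1.0 - eps)]  # discard the extreme tail
    return X.mean(axis=0)

# Compare the naive empirical mean against the filtered estimate.
rng = np.random.default_rng(0)
d, n, eps = 50, 20_000, 0.1
X = corrupt(rng.standard_normal((n, d)), eps)
print("empirical mean error:", np.linalg.norm(X.mean(axis=0)))
print("filtered mean error: ", np.linalg.norm(filtered_mean(X, eps)))
```

On this toy adversary the empirical mean is off by roughly $\varepsilon \cdot \text{magnitude} \approx 1.0$ in $\ell_2$, while the filtered estimate lands near the $\sqrt{d/n}$ sampling error; handling a worst-case adversary with the optimal $O(\varepsilon)$ guarantee requires the more careful filtering developed in the paper.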

Read the paper (pdf) »