arxivst · stuff from arxiv that you should probably bookmark

On the Importance of Super-Gaussian Speech Priors for Pre-Trained Speech Enhancement

Abstract · Mar 15, 2017 08:33

cs-sd

Arxiv Abstract

  • Robert Rehr
  • Timo Gerkmann

For enhancing noisy signals, pre-trained single-channel speech enhancement schemes exploit prior knowledge about the shape of typical speech structures. This knowledge is obtained from training data using machine learning methods, e.g., mixtures of Gaussians, nonnegative matrix factorization, and deep neural networks. If only speech envelopes are employed as prior speech knowledge, e.g., to meet requirements in terms of computational complexity and memory consumption, Wiener-like enhancement filters will not be able to reduce noise components between speech spectral harmonics. In this paper, we highlight the role of clean speech estimators that employ super-Gaussian speech priors, in particular for pre-trained approaches when spectral envelope models are used. In the 2000s, such estimators were considered by many researchers for improving non-trained enhancement schemes. However, while the benefit of super-Gaussian clean speech estimators in non-trained enhancement schemes is limited, we point out that these estimators make a much larger difference for enhancement schemes that employ pre-trained envelope models. We show that for such pre-trained enhancement schemes, super-Gaussian estimators allow for a suppression of annoying residual noises which are not reduced using Gaussian filters such as the Wiener filter. As a consequence, considerable improvements in terms of Perceptual Evaluation of Speech Quality and segmental signal-to-noise ratios are achieved.
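The envelope-only limitation the abstract describes can be sketched numerically. The toy example below (an illustration of the general idea, not the paper's experiments) compares the Wiener gain G = ξ/(1+ξ) computed from a speech power spectrum with true harmonic fine structure against one computed from a flat envelope-style model that smears the same energy across all bins: with only the envelope, the gain between harmonics stays high and noise there passes through.

```python
import numpy as np

def wiener_gain(xi):
    """Wiener filter gain from the a priori SNR xi = speech PSD / noise PSD."""
    return xi / (1.0 + xi)

# Toy spectrum: speech energy concentrated on three harmonic bins,
# over a flat unit-power noise floor (all values are illustrative).
n_bins = 64
speech_psd = np.zeros(n_bins)
speech_psd[[10, 20, 30]] = 100.0                    # true harmonic fine structure
envelope_psd = np.full(n_bins, speech_psd.mean())   # envelope model: energy smeared over bins
noise_psd = np.ones(n_bins)

g_true = wiener_gain(speech_psd / noise_psd)   # gain with fine structure known
g_env = wiener_gain(envelope_psd / noise_psd)  # gain with envelope prior only

# Between harmonics (e.g., bin 15) the fine-structure gain is zero,
# while the envelope-based gain remains high, leaving residual noise.
print(g_true[15], g_env[15])
```

This is the gap that, per the abstract, a super-Gaussian clean speech estimator helps close: its gain function suppresses low-SNR bins more aggressively than the Gaussian-derived Wiener rule, even when the speech prior only models the envelope.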

Read the paper (pdf) »