arxivst · stuff from arxiv that you should probably bookmark

Learning Proximal Operators: Using Denoising Networks for Regularizing Inverse Imaging Problems

Abstract · Apr 11, 2017 18:33

regularization linear denoising retraining demosaicking fidelity cs-cv

Arxiv Abstract

  • Tim Meinhardt
  • Michael Möller
  • Caner Hazirbas
  • Daniel Cremers

While variational methods have been among the most powerful tools for solving linear inverse problems in imaging, deep (convolutional) neural networks have recently taken the lead in many challenging benchmarks. A remaining drawback of deep learning approaches is that they require expensive retraining whenever the specific problem, the noise level, the noise type, or the desired measure of fidelity changes. In contrast, variational methods have a plug-and-play nature, as they usually consist of separate data fidelity and regularization terms. In this paper we study the possibility of replacing the proximal operator of the regularizer used in many convex energy minimization algorithms with a denoising neural network. The network then serves as an implicit natural image prior, while the data term can still be chosen arbitrarily. Using a fixed denoising neural network on exemplary problems of image deconvolution with different blur kernels and of image demosaicking, we obtain state-of-the-art results. Additionally, we present a novel analysis of which convex optimization algorithms the network can be incorporated into, as well as of the choice of algorithm parameters and their relation to the noise level the network is trained on.
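The core idea, replacing the regularizer's proximal step in a splitting algorithm with a denoiser, can be sketched in a few lines of NumPy. The sketch below uses plug-and-play ADMM on a toy 1-D deconvolution problem; a simple Gaussian smoothing filter stands in for the denoising CNN (an assumption for illustration, not the paper's trained network), and the parameter choices (`rho`, iteration count, blur width) are likewise illustrative.

```python
import numpy as np

def gaussian_denoiser(v, sigma=1.0):
    # Stand-in for a denoising neural network: plain Gaussian smoothing.
    # Any off-the-shelf denoiser could be plugged in here instead.
    kernel = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(v, kernel, mode="same")

def pnp_admm(A, y, denoiser, rho=1.0, iters=50):
    """Plug-and-play ADMM for 0.5*||Ax - y||^2 + R(x): the proximal
    operator of R is replaced by a denoiser, while the quadratic data
    term keeps its closed-form proximal step."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    lhs = AtA + rho * np.eye(n)          # system matrix for the x-update
    for _ in range(iters):
        x = np.linalg.solve(lhs, Aty + rho * (z - u))  # data-fidelity prox
        z = denoiser(x + u)              # denoiser replaces the reg. prox
        u = u + x - z                    # scaled dual-variable update
    return x

# Toy deconvolution: blur a piecewise-constant signal, add noise, recover.
rng = np.random.default_rng(0)
n = 64
truth = np.zeros(n)
truth[20:40] = 1.0
blur = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
                 for i in range(n)])
blur /= blur.sum(axis=1, keepdims=True)  # normalize rows of the blur matrix
y = blur @ truth + 0.01 * rng.standard_normal(n)
x_hat = pnp_admm(blur, y, gaussian_denoiser)
```

Because the data term and the denoiser are decoupled, changing the forward operator `A` (e.g., from deconvolution to demosaicking) requires no retraining of the denoiser, which is the plug-and-play property the abstract highlights.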

Read the paper (pdf) »