mar Conference on Signals, Systems, and Computers, 1993.
[9] J Portilla, V Strela, MJ Wainwright, et al. Image denoising using scale mixtures of Gaussians in the wavelet domain[J]. IEEE Trans. Image Process., 2003, 12(11): 1338–1351.
[10] JL Starck, EJ Candes and DL Donoho. The curvelet transform for image denoising[J]. IEEE Trans. Image Process., 2002, 11(6): 670–684.
[11] R Eslami and H Radha. Translation-invariant contourlet transform and its application to image denoising[J]. IEEE Trans. Image Process., 2006, 15(11): 3362–3374.
[12] B Matalon, M Elad and M Zibulevsky. Improved denoising of images using modeling of the redundant contourlet transform[C]. The SPIE Conf. Wavelets, 2005.
[13] OG Guleryuz. Weighted overcomplete denoising[C]. The Asilomar Conf. Signals and Systems, Pacific Grove, CA, 2003.
[14] OG Guleryuz. Nonlinear approximation based image recovery using adaptive sparse reconstructions and iterated denoising: Part I—Theory[J]. IEEE Trans. Image Process., 2006, 15(3): 539–553.
[15] OG Guleryuz. Nonlinear approximation based image recovery using adaptive sparse reconstructions and iterated denoising: Part II—Adaptive algorithms[J]. IEEE Trans. Image Process., 2006, 15(3): 554–571.

Appendix: Translation of Foreign Literature

Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries
Michael Elad and Michal Aharon

Abstract—We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited in handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm. This leads to a state-of-the-art denoising performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.

Index Terms—Bayesian reconstruction, dictionary learning, discrete cosine transform (DCT), image denoising, K-SVD, matching pursuit, maximum a posteriori (MAP) estimation, redundancy, sparse representations.

In this paper, we address the classic image denoising problem: an ideal image x is measured in the presence of additive zero-mean white and homogeneous Gaussian noise v, with standard deviation σ. The measured image y is, thus,

y = x + v.    (1)

We desire to design an algorithm that can remove the noise from y, getting as close as possible to the original image x.

The image denoising problem is important, not only because of the evident applications it serves. Being the simplest possible inverse problem, it provides a convenient platform over which image processing ideas and techniques can be assessed. Indeed, numerous contributions in the past 50 years or so addressed this problem from many and diverse points of view. Statistical estimators of all sorts, spatial adaptive filters, stochastic analysis, partial differential equations, transform-domain methods, splines and other approximation theory methods, morphological analysis, order statistics, and more, are some of the many directions explored in studying this problem. In this paper, we have no intention to provide a survey of this vast activity.
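As an aside from the translated text, the degradation model in (1) is easy to make concrete. The short Python sketch below is an illustration added here, not part of the paper: it corrupts a clean image with zero-mean white Gaussian noise of standard deviation sigma and reports the PSNR of the noisy observation, which is the baseline any denoising algorithm aims to improve. The synthetic image, the helper names, and the value sigma = 25 are all assumptions made for the example.

import numpy as np

# Illustrative sketch of the measurement model in (1): y = x + v, where v is
# zero-mean white Gaussian noise with standard deviation sigma. None of these
# names come from the paper; they are chosen only for this example.
def add_gaussian_noise(x, sigma, seed=0):
    """Return the noisy observation y = x + v, with v ~ N(0, sigma^2) i.i.d. per pixel."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

def psnr(x, x_hat, peak=255.0):
    """Peak signal-to-noise ratio (in dB) of an estimate x_hat against the clean image x."""
    mse = np.mean((x - x_hat) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage: a synthetic 64x64 gradient image corrupted with sigma = 25.
x = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
y = add_gaussian_noise(x, sigma=25.0)
print(f"PSNR of the noisy observation: {psnr(x, y):.2f} dB")  # a denoiser should exceed this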
Instead, we intend to concentrate on one specific approach towards the image denoising problem that we find to be highly effective and promising: the use of sparse and redundant representations over trained dictionaries.

Using redundant representations and sparsity as driving forces for denoising of signals has drawn a lot of research attention in the past decade or so. At first, sparsity of the unitary wavelet coefficients was considered, leading to the celebrated shrinkage algorithm [1]–[9]. One reason to turn to redundant representations was the desire to have the shift-invariance property [10]. Also, with the growing realization that regular separable 1-D wavelets are inappropriate for handling images, several new tailored multiscale and directional redundant transforms were introduced, including the curvelet [11], [12], contourlet [13], [14], wedgelet [15], bandlet [16], [17], and the steerable wavelet [18], [19]. In parallel, the introduction of the matching pursuit [20], [21] and the basis pursuit denoising [22] gave rise to the ability to address the image denoising problem as a direct sparse decomposition technique over redundant dictionaries. All these lead to what is considered today as some of the best available image denoising methods (see [23]–[26] for a few representative works).

While the work reported here is also built on the very same sparsity and redundancy concepts, it adopts a different point of view, drawing from yet another recent line of work that studies example-based restoration. In addressing general inverse problems in image processing using the Bayesian approach, an image prior is necessary. Traditionally, this has been handled by choosing a prior based on some simplifying assumptions, such as spatial smoothness, low/max-entropy, or sparsity in some transform domain. While these common approaches lean on a guess of a mathematical expression for the image prior, the example-based techniques suggest learning the prior from images somehow. For example, assuming a spatial smoothness-based Markov random field prior of a specific structure, one can still question (and, thus, train) the derivative filters to apply on the image, and the robust function to use in weighting these filters' outcome [27]–[29].

When this prior-learning idea is merged with sparsity and redundancy, it is the dictionary to be used that we target as the learned set of parameters. Instead of