

Research on Sparse-Representation-Based Image Restoration Algorithms (Undergraduate Thesis) - Full-Text Preview


…denoising problem as a direct sparse decomposition technique over redundant dictionaries. All these lead to what is considered today as some of the best available image denoising methods (see [23]–[26] for a few representative works). While the work reported here is also built on the very same sparsity and redundancy concepts, it adopts a different point of view, drawing from yet another recent line of work that studies example-based restoration. In addressing general inverse problems in image processing using the Bayesian approach, an image prior is necessary. Traditionally, this has been handled by choosing a prior based on some simplifying assumptions, such as spatial smoothness, low/max-entropy, or sparsity in some transform domain. While these common approaches lean on a guessed mathematical expression for the image prior, the example-based techniques suggest learning the prior from images. For example, assuming a spatial-smoothness-based Markov random field prior of a specific structure, one can still question (and, thus, train) the derivative filters to apply on the image, and the robust function to use in weighting these filters' outcome [27]–[29]. When this prior-learning idea is merged with sparsity and redundancy, it is the dictionary to be used that we target as the learned set of parameters. Instead of deploying a pre-chosen set of basis functions as the curvelet or contourlet would do, we propose to learn the dictionary from examples. In this work we consider two training options: 1) training the dictionary using patches from the corrupted image itself, or 2) training on a corpus of patches taken from a high-quality set of images. This idea of learning a dictionary that yields sparse representations for a set of training image patches has been studied in a sequence of works [30]–[37]. In this paper, we use the K-SVD algorithm [36], [37] because of its simplicity and efficiency for this task.
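The local building block that both training options rely on, representing a small patch over a redundant dictionary with few atoms, can be sketched as follows. The overcomplete DCT construction and the OMP solver below are generic illustrations (sizes and names are assumptions), not the exact implementation of the works cited above.

```python
import numpy as np

def overcomplete_dct_dictionary(patch_size=8, atoms=256):
    """Build a redundant (overcomplete) 2-D DCT dictionary, a common
    pre-chosen baseline before any learned dictionary is introduced."""
    k = int(np.ceil(np.sqrt(atoms)))          # 1-D atoms per direction
    d1 = np.zeros((patch_size, k))
    for i in range(k):
        v = np.cos(np.arange(patch_size) * i * np.pi / k)
        if i > 0:
            v = v - v.mean()                  # remove DC from AC atoms
        d1[:, i] = v / np.linalg.norm(v)      # unit-norm columns
    return np.kron(d1, d1)                    # (patch_size^2, k^2)

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedy sparse coding of y over D."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(j)
        c, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ c             # re-fit on support
    coef[support] = c
    return coef
```

A patch that truly is a sparse combination of atoms is recovered exactly; noisy patches are approximated by their few strongest atoms, which is what produces the denoising effect.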
Also, due to its structure, we shall see how the training and the denoising fuse together naturally into one coherent and iterated process, when training is done on the given image directly. Since dictionary learning is limited to handling small image patches, a natural difficulty arises: how can we use it for general images of arbitrary size? In this work, we propose a global image prior that forces sparsity over patches in every location in the image (with overlaps). This aligns with a similar idea, appearing in [29], for turning a local MRF-based prior into a global one. We define a maximum a posteriori probability (MAP) estimator as the minimizer of a well-defined global penalty term, whose numerical solution leads to a simple iterated patch-by-patch sparse coding and averaging algorithm that is closely related to the ideas explored in [38]–[40] and generalizes them. When considering the available global and multiscale alternative denoising schemes (e.g., based on curvelet, contourlet, and steerable wavelet), it looks like there is much to be lost in working on small patches. Is there any chance of getting a comparable denoising performance with a local-sparsity-based method? In that respect, the image denoising work reported in [23] is of great importance. Beyond the specific novel and highly effective algorithm described in that paper, Portilla and his coauthors posed a clear set of comparative experiments that standardize how image denoising algorithms should be assessed and compared one versus the other. We make use of these exact experiments and show that the newly proposed algorithm performs similarly, and often better, compared to the denoising performance reported in their work. To summarize, the novelty of this paper includes the way we use local sparsity and redundancy as ingredients in a global Bayesian objective; this part is described in Section II, along with its emerging iterated numerical solver.
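The patch-by-patch sparse coding and averaging scheme just described can be sketched concretely. In this minimal version, each overlapping patch is sparse-coded by hard-thresholding its coefficients in an orthonormal 2-D DCT basis (a simple stand-in for coding over a redundant or learned dictionary), and the overlapping estimates are then averaged together with a lam-weighted copy of the noisy image; all parameter values are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    M = np.cos(np.pi * (np.arange(n) + 0.5)[None, :] * np.arange(n)[:, None] / n)
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def denoise_patch_average(y, patch=8, lam=0.1, thresh=30.0):
    """Sparse-code every overlapping patch (hard threshold in the DCT
    domain), then average all overlapping estimates, blended with a
    lam-weighted copy of the noisy image itself."""
    M = dct_matrix(patch)
    H, W = y.shape
    acc = lam * y.astype(float)        # numerator: weighted sum of estimates
    cnt = lam * np.ones((H, W))        # denominator: accumulated weights
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            block = y[i:i + patch, j:j + patch]
            c = M @ block @ M.T                 # 2-D DCT coefficients
            c[np.abs(c) < thresh] = 0.0         # enforce sparsity
            est = M.T @ c @ M                   # inverse 2-D DCT
            acc[i:i + patch, j:j + patch] += est
            cnt[i:i + patch, j:j + patch] += 1.0
    return acc / cnt
```

The division by the accumulated weight map is exactly the averaging step that turns the many local estimates into one globally consistent image.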
Also novel in this work is the idea to train dictionaries for the denoising task, rather than use pre-chosen ones. As already mentioned earlier, when training is done on the corrupted image directly, the overall training-denoising algorithm becomes fused into one iterative procedure that comprises steps of denoising of the image, followed by an update of the dictionary. This is described in Section III in detail. In Section IV, we show some experimental results that demonstrate the effectiveness of this algorithm.

II. FROM …

Further work includes the following aspects:

(1) in the numerical solution of the weighted sparse representation model for salt-and-pepper noise, a learning stage for the dictionary D could be added, as is done when solving the Gaussian denoising model, in the hope of achieving better results;

(2) comparative tests on more images and at different noise levels, so as to reach objective and reliable conclusions;

(3) further study of image denoising and sparse representation, in search of more reasonable denoising models and better optimization methods.

In the improved method, a weight function describing how likely each pixel is to be a noise point is introduced, and a weighted sparse representation model is built on it, reducing the influence of noise points on the sparse representation model.

Figure 4: "boat" image with salt-and-pepper noise added, denoised over the DCT dictionary by the classical model and by the improved model. (a) "boat" with salt-and-pepper noise; (b) result of the Gaussian-noise sparse representation model; (c) our result.

4 Experiments

Figure 5: "lena" image with salt-and-pepper noise added, denoised over the DCT dictionary by the classical model and by the improved model. (a) "lena" with salt-and-pepper noise; (b) result of the Gaussian-noise sparse representation model; (c) our result.

Table 4-2: PSNR comparison of the salt-and-pepper noisy images and the two models' denoising results

    PSNR (dB)                            boat    lena
    noisy (salt-and-pepper) image        –       –
    classical Gaussian denoising model   –       –
    improved denoising model             –       –

The data in Table 4-2 show that when the two salt-and-pepper examples "boat" and "lena" are denoised with the classical DCT-dictionary model and with the improved model, the classical model's results still retain many noise points and are barely passable, while the improved model's results are satisfactory and clearly better than the classical model's.

In the first part we add Gaussian noise to the two example images lena and barbara and denoise them with the classical sparse representation model over the DCT, global, and adaptive dictionaries respectively; in the second part we add salt-and-pepper noise to the boat and lena examples and denoise them first with the classical DCT-dictionary sparse representation model and then with the improved DCT-dictionary model, which makes it convenient to compare the two models' actual denoising performance on salt-and-pepper noise.

The steps are as follows. Task: denoise the image Y corrupted by salt-and-pepper noise. Writing x'_{p'}(p) for the gray value at point p of the denoised patch centered at p', N(p) for the set of patch centers whose patches cover p, and w_p for the weight of pixel p, the pixel update at iteration t+1 is

    x^(t+1)(p) = ( λ w_p y(p) + Σ_{p'∈N(p)} x'^(t)_{p'}(p) ) / ( λ w_p + |N(p)| )

For problem (1), the classical model reads

    {α̂_l, x̂} = argmin_{{α_l}, x}  λ ‖x − y‖₂² + Σ_l ‖D α_l − R_l x‖₂² + Σ_l μ_l ‖α_l‖₀        (3-5)

The difference from the classical model is that the weight vector w is inserted into the first penalty term, rewriting the above as

    {α̂_l, x̂} = argmin_{{α_l}, x}  λ Σ_p w_p (x(p) − y(p))² + Σ_l ‖D α_l − R_l x‖₂² + Σ_l μ_l ‖α_l‖₀        (3-6)

The solution proceeds in the same way as for the classical sparse representation model.
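The three computational pieces of the salt-and-pepper experiments, the per-pixel weight, the closed-form pixel update of the weighted model, and the PSNR figure compared in Table 4-2, can be sketched as follows. The 0/1 weight rule and the parameter names are illustrative assumptions, not the thesis's exact choices, which are not preserved in this preview.

```python
import numpy as np

def impulse_weights(y, lo=0.0, hi=255.0):
    """w(p) = 0 for pixels at the extreme gray values (likely
    salt-and-pepper impulses), w(p) = 1 otherwise.  A 0/1 stand-in
    for the thesis's noise-likelihood weight function."""
    w = np.ones(y.shape, dtype=float)
    w[(y <= lo) | (y >= hi)] = 0.0
    return w

def weighted_pixel_update(y, est_sum, est_cnt, w, lam=0.5):
    """Closed-form pixel update of the weighted model: blend the noisy
    pixel y(p), weighted by lam * w_p, with the sum of the denoised
    patch estimates covering p (est_sum) over their count (est_cnt)."""
    return (lam * w * y + est_sum) / (lam * w + est_cnt)

def psnr(x, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB, the measure used in Table 4-2."""
    mse = np.mean((np.asarray(x, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because w_p is zero at suspected impulses, those observations drop out of the update entirely and the pixel is rebuilt purely from the overlapping patch estimates, which is why the improved model removes salt-and-pepper points that the classical model keeps.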