

Image Compression Using the Discrete Cosine Transform: Graduation Thesis Foreign-Language Translation (Edited Draft)


Representing a real number between 0 and 7 to some specified precision takes many bits. Rounding the number to the nearest integer gives a quantity that can be represented by just three bits.

x = Random[Real, {0, 7}]

Round[x]
3

In this process, we reduce the number of possible values of the quantity (and thus the number of bits needed to represent it) at the cost of losing information. A finer quantization, which allows more values and loses less information, can be obtained by dividing the number by a weight factor before rounding:

w = 1/4;

Round[x/w]
11

Taking a larger value for the weight gives a coarser quantization. Dequantization, which maps the quantized value back into its original range (but not its original precision), is achieved by multiplying the value by the weight:

w * % // N
2.75

The quantization error is the change in a quantity after quantization and dequantization. The largest possible quantization error is half the value of the quantization weight.

In the JPEG image compression standard, each DCT coefficient is quantized using a weight that depends on the frequencies for that coefficient. The coefficients in each 8 x 8 block are divided by the corresponding entries of an 8 x 8 quantization matrix, and the result is rounded to the nearest integer.

In general, higher spatial frequencies are less visible to the human eye than low frequencies. Therefore, the quantization factors are usually chosen to be larger for the higher frequencies. The following quantization matrix is widely used for monochrome images and for the luminance component of a color image. It is given in the JPEG standards documents, yet is not part of the standard, so we call it the de facto matrix:

qLum =
  {{16, 11, 10, 16,  24,  40,  51,  61},
   {12, 12, 14, 19,  26,  58,  60,  55},
   {14, 13, 16, 24,  40,  57,  69,  56},
   {14, 17, 22, 29,  51,  87,  80,  62},
   {18, 22, 37, 56,  68, 109, 103,  77},
   {24, 35, 55, 64,  81, 104, 113,  92},
   {49, 64, 78, 87, 103, 121, 120, 101},
   {72, 92, 95, 98, 112, 100, 103,  99}};

Displaying the matrix as a grayscale image shows the dependence of the quantization factors on the frequencies:

ShowImage[qLum];
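As a quick check of the error bound stated above (this snippet is not part of the original article and assumes only the qLum matrix just defined), we can quantize a random 8 x 8 block of stand-in coefficients with qLum and measure the largest change caused by the quantize/dequantize round trip:

(* stand-in for an 8 x 8 block of DCT coefficients *)
block = Table[Random[Real, {-128, 128}], {8}, {8}];

(* element-by-element quantization followed by dequantization *)
dequantized = Round[block/qLum] qLum;

(* largest round-trip error; never more than Max[qLum]/2 = 60.5 *)
Max[Abs[block - dequantized]]

Each coefficient changes by at most half of the corresponding entry of qLum, so the coarse high-frequency weights are exactly where the largest errors are tolerated.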
To implement the quantization process, we must partition the transformed image into 8 x 8 blocks:

BlockImage[image_, blocksize_:{8, 8}] :=
  Partition[image, blocksize] /;
    And @@ IntegerQ /@ (Dimensions[image]/blocksize)

The function UnBlockImage reassembles the blocks into a single image:

UnBlockImage[blocks_] :=
  Partition[
    Flatten[Transpose[blocks, {1, 3, 2}]],
    {Times @@ Dimensions[blocks][[{2, 4}]]}]

For example:

Table[i + 8 (j - 1), {j, 4}, {i, 6}] // MatrixForm

 1  2  3  4  5  6
 9 10 11 12 13 14
17 18 19 20 21 22
25 26 27 28 29 30

BlockImage[%, {2, 3}] // MatrixForm

(a 2 x 2 array whose elements are the 2 x 3 blocks of the matrix above, displayed by MatrixForm as the same numbers grouped into blocks)

UnBlockImage[%] // MatrixForm

 1  2  3  4  5  6
 9 10 11 12 13 14
17 18 19 20 21 22
25 26 27 28 29 30

Our quantization function blocks the image, divides each block (element by element) by the quantization matrix, reassembles the blocks, and then rounds the entries to the nearest integer:

DCTQ[image_, qMatrix_] :=
  Map[(#/qMatrix)&,
    BlockImage[image, Dimensions[qMatrix]], {2}] //
  UnBlockImage // Round

The dequantization function blocks the matrix, multiplies each block by the quantization factors, and reassembles the matrix:

IDCTQ[image_, qMatrix_] :=
  Map[(# qMatrix)&,
    BlockImage[image, Dimensions[qMatrix]], {2}] //
  UnBlockImage

To show the effect of quantization, we will transform, quantize, and reconstruct our image of the shuttle using the quantization matrix introduced above:

qshuttle = shuttle // DCT //
  DCTQ[#, qLum]& // IDCTQ[#, qLum]& // IDCT;

For comparison, we show the original image together with the quantized version:

Show[GraphicsArray[
  GraphicsImage[#, {0, 255}]& /@ {shuttle, qshuttle}]];

Note that some artifacts are visible, particularly around high-contrast edges. In the next section, we will compare the visual effects and the amount of compression obtained from different degrees of quantization.

Entropy

To measure how much compression is obtained from a quantization matrix, we use a famous theorem of Claude Shannon [Shannon and Weaver 1949]. The theorem states that for a sequence of symbols with no correlations beyond first order, no code can be devised to represent the sequence that uses fewer bits per symbol than the first-order entropy, which is given by

h = - Σ_i p_i log2(p_i)

where p_i is the relative frequency of the ith symbol.

To compute the first-order entropy of a list of numbers, we use the function Frequencies, from the standard package Statistics`DataManipulation`. This function computes the frequencies of the elements in a list:

Frequencies[list_List] := Map[{Count[list, #], #}&, Union[list]]

Characters["mississippi"]
{m, i, s, s, i, s, s, i, p, p, i}

Frequencies[%]
{{4, i}, {1, m}, {2, p}, {4, s}}

Calculating the first-order entropy is straightforward:

Entropy[list_] := - Plus @@ N[# Log[2, #]]& @
  (First[Transpose[Frequencies[list]]]/Length[list])

For example, the entropy of a list of four distinct symbols is 2, so 2 bits are required to code each symbol:

Entropy[{a, b, c, d}]
2.

Similarly, about 1.82 bits per symbol are required for this longer list with four symbols:

Entropy[Characters["mississippi"]]

A list with more symbols and fewer repetitions requires more bits per symbol:

Entropy[Characters["california"]]

The appearance of fractional bits may be puzzling to some readers, since we think of a bit as a minimal, indivisible unit of information. Fractional bits are a natural outcome of the use of what are called variable word-length codes. Consider an image containing 63 pixels with a grey-level of 255, and one pixel with a grey-level of ...
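Whatever grey-level that remaining pixel has, the entropy of such an image depends only on the relative frequencies of the two values, not on the values themselves. As a quick check (a sketch, not part of the original text; 128 is an arbitrary stand-in for the odd pixel, and any distinct grey-level gives the same answer), the Entropy function defined above confirms that this image needs only a small fraction of a bit per pixel:

(* 63 pixels with one grey-level and a single pixel with another *)
Entropy[Append[Table[255, {63}], 128]]

(* roughly 0.116 bits per pixel *)

Such an image can be coded with far less than one bit per pixel on average, which is exactly what a fractional entropy value expresses.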