2, 3}, where the index j indicates the jth branch and the index m the mth bit in that branch. Correspondingly, we define {r_{jm}, j = 1, 2, 3; m = 1, 2, 3} as the output of the demodulator. If the decoder performs hard-decision decoding, the detector output for each transmitted bit is either 0 or 1. On the other hand, if soft-decision decoding is employed and the coded sequence is transmitted by binary coherent PSK, the input to the decoder is

    r_{jm} = \sqrt{\mathcal{E}_c}\,(2 c_{jm} - 1) + n_{jm}

where n_{jm} represents the additive noise and \mathcal{E}_c is the transmitted signal energy for each code bit.

A metric is defined for the jth branch of the ith path through the trellis as the logarithm of the joint probability of the sequence {r_{jm}, m = 1, 2, 3} conditioned on the transmitted sequence {c_{jm}^{(i)}, m = 1, 2, 3} for the ith path. That is,

    \mu_j^{(i)} = \log p\bigl(Y_j \mid C_j^{(i)}\bigr), \quad j = 1, 2, 3, \ldots

Furthermore, a metric for the ith path consisting of B branches through the trellis is defined as

    PM^{(i)} = \sum_{j=1}^{B} \mu_j^{(i)}

The criterion for deciding between two paths through the trellis is to select the one having the larger metric. This rule maximizes the probability of a correct decision or, equivalently, it minimizes the probability of error for the sequence of information bits. For example, suppose that hard-decision decoding is performed by the demodulator, yielding the received sequence {101 000 100}. Let i = 0 denote the three-branch all-zero path and i = 1 the second three-branch path that begins in the initial state a and remerges with the all-zero path at state a after three transitions. The metrics for these two paths are

    PM^{(0)} = 6 \log(1 - p) + 3 \log p
    PM^{(1)} = 4 \log(1 - p) + 5 \log p

where p is the probability of a bit error. Assuming that p < 1/2, we find that PM^{(0)} > PM^{(1)}. This result is consistent with the observation that the all-zero path is at Hamming distance d = 3 from the received sequence, while the i = 1 path is at Hamming distance d = 5 from the received sequence. Thus the Hamming distance is an equivalent metric for hard-decision decoding.

Similarly, suppose that soft-decision decoding is employed and the channel adds white Gaussian noise to the signal. Then the demodulator output is described statistically by the probability density function

    p\bigl(r_{jm} \mid c_{jm}^{(i)}\bigr) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{-\frac{\bigl[r_{jm} - \sqrt{\mathcal{E}_c}\,(2 c_{jm}^{(i)} - 1)\bigr]^2}{2\sigma^2}\right\}

where \sigma^2 = N_0/2 is the variance of the additive Gaussian noise. If we neglect the terms that are common to all branch metrics, the branch metric for the jth branch of the ith path may be expressed as

    \mu_j^{(i)} = \sum_{m=1}^{n} r_{jm}\,(2 c_{jm}^{(i)} - 1)

where, in our example, n = 3. Thus the correlation metrics for the two paths under consideration are

    CM^{(0)} = \sum_{j=1}^{3} \sum_{m=1}^{3} r_{jm}\,(2 c_{jm}^{(0)} - 1)

    CM^{(1)} = \sum_{j=1}^{3} \sum_{m=1}^{3} r_{jm}\,(2 c_{jm}^{(1)} - 1)
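The branch and path metrics just defined are straightforward to compute directly. The following sketch (not part of the original text) evaluates the Hamming metric, the log-likelihood path metric PM^{(i)} for a binary symmetric channel, and the correlation metric CM^{(i)} for the two example paths; the code sequence 111 001 011 assumed for the i = 1 path and the crossover probability p = 0.1 are illustrative choices, consistent with the Hamming distances d = 3 and d = 5 quoted above.

    # Minimal sketch of the metric computations discussed above (illustrative, not from the text).
    # Assumptions: received hard decisions {101 000 100}; the i = 0 path is the all-zero code
    # sequence; the i = 1 path is assumed to produce 111 001 011, consistent with d = 3 and d = 5.
    import math

    received = [1, 0, 1, 0, 0, 0, 1, 0, 0]
    paths = {
        0: [0] * 9,                        # i = 0: all-zero path
        1: [1, 1, 1, 0, 0, 1, 0, 1, 1],    # i = 1: assumed code sequence
    }

    def hamming_metric(r, c):
        # Number of places where the received bits differ from the candidate code bits.
        return sum(rb != cb for rb, cb in zip(r, c))

    def path_metric(r, c, p):
        # PM^(i): sum of per-bit log-likelihoods for a BSC with crossover probability p.
        return sum(math.log(p) if rb != cb else math.log(1 - p) for rb, cb in zip(r, c))

    def correlation_metric(r_soft, c):
        # CM^(i) = sum over j, m of r_jm * (2 c_jm - 1), the soft-decision correlation metric.
        return sum(r * (2 * cb - 1) for r, cb in zip(r_soft, c))

    p = 0.1                                 # illustrative bit-error probability, p < 1/2
    r_soft = [2 * b - 1 for b in received]  # map hard decisions to +/-1 just for illustration
    for i, c in paths.items():
        print(i, hamming_metric(received, c), path_metric(received, c, p),
              correlation_metric(r_soft, c))

Running this confirms that the minimum-Hamming-distance path (i = 0) is also the path with the larger log-likelihood and correlation metrics for this example.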
Having defined the branch metrics and path metrics computed by the decoder, we now consider the use of the Viterbi algorithm for optimum decoding of the convolutionally encoded information sequence. We consider the two paths described above, which merge at state a after three transitions. Note that any particular path through the trellis that stems from this node will add identical terms to the path metrics CM^{(0)} and CM^{(1)}. Consequently, if CM^{(0)} > CM^{(1)} at the merged node a after three transitions, CM^{(0)} will continue to be larger than CM^{(1)} for any path that stems from node a. This means that the path corresponding to CM^{(1)} can be discarded from further consideration. The path corresponding to the metric CM^{(0)} is the survivor. Similarly, one of the two paths that merge at state b can be eliminated on the basis of the two corresponding metrics. This procedure is repeated at states c and d. As a result, after the first three transitions there are four surviving paths, one terminating at each state, and a corresponding metric for each survivor. This procedure is repeated at each stage of the trellis as new signals are received in subsequent time intervals; a minimal sketch of this survivor-selection step is given at the end of this section.

In general, when a binary convolutional code with k = 1 and constraint length K is decoded by means of the Viterbi algorithm, there are 2^{K-1} states. Hence, there are 2^{K-1} surviving paths at each stage and 2^{K-1} metrics, one for each surviving path. Furthermore, a binary convolutional code in which k bits at a time are shifted into an encoder that consists of K (k-bit) shift-register stages generates a trellis that has 2^{k(K-1)} states. Consequently, decoding such a code by means of the Viterbi algorithm requires keeping track of 2^{k(K-1)} surviving paths and 2^{k(K-1)} metrics. At each stage of the trellis, there are 2^k paths that merge at each node. Since each path that converges at a common node requires the computation of a metric, there are 2^k metrics computed for each node. Of the 2^k paths that merge at each node, only one survives, and this is the most probable (minimum-distance) path. Thus, the number of computations performed in decoding at each stage increases exponentially with k and K. This exponential increase in computational burden limits the use of the Viterbi algorithm to relatively small values of K and k.

The decoding delay in decoding a long information sequence that has been convolutionally encoded is usually too long for most practical applications. Moreover, the memory required to store the entire length of the surviving sequences is large and expensive. As indicated in Section , a solution to this problem is to modify the Viterbi algorithm in a way that results in a fixed decoding delay without significantly affecting the optimal performance of the algorithm. Recall that the modification is to retain at any
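To make the survivor-selection step concrete, here is a minimal hard-decision Viterbi decoder sketch in Python. It is an illustration rather than code from the text: the generator taps (chosen so that the i = 1 path of the example produces 111 001 011), the use of the Hamming distance as the branch metric (shown above to be equivalent to the log-likelihood metric for p < 1/2), and the read-out of the survivor with the best final metric are all illustrative assumptions.

    # Minimal hard-decision Viterbi decoder for a rate-1/n, k = 1 binary convolutional code.
    # Illustrative sketch only; the generator taps below are assumptions, not taken from the text.

    def encode_branch(bit, state, generators, K):
        # Output bits for one input bit, given the state (previous K-1 input bits, newest first).
        register = [bit] + list(state)
        return [sum(register[t] for t in range(K) if g[t]) % 2 for g in generators]

    def viterbi_decode(received, generators, K):
        n = len(generators)
        start = tuple([0] * (K - 1))            # begin in the all-zero state
        metric = {start: 0}                     # at most 2^(K-1) survivors are kept at any stage
        survivor = {start: []}
        for j in range(0, len(received), n):
            r = received[j:j + n]
            new_metric, new_survivor = {}, {}
            for state, m in metric.items():
                for bit in (0, 1):              # 2^k = 2 branches leave each state
                    out = encode_branch(bit, state, generators, K)
                    nxt = tuple([bit] + list(state[:-1]))
                    branch = sum(rb != cb for rb, cb in zip(r, out))
                    if nxt not in new_metric or m + branch < new_metric[nxt]:
                        new_metric[nxt] = m + branch        # keep the minimum-distance merging path
                        new_survivor[nxt] = survivor[state] + [bit]
            metric, survivor = new_metric, new_survivor
        best = min(metric, key=metric.get)      # survivor with the smallest total Hamming distance
        return survivor[best]

    # Example: K = 3, rate 1/3 code with assumed generators, decoding the received sequence above.
    gens = [(1, 0, 0), (1, 0, 1), (1, 1, 1)]
    print(viterbi_decode([1, 0, 1, 0, 0, 0, 1, 0, 0], gens, K=3))

Only one survivor per state is stored at each stage, which is exactly the pruning argument made above: once a path loses at a merge, no extension of it can overtake the surviving path.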