ployed:

c^H c = 1.  (16)

For the minimization of (14) under the constraint (16), the Lagrangian cost function (17) can be defined, where η is the Lagrange multiplier. The cost function JL(c) of (17) does not depend on the channel phase θ; hence, it may be considered a noncoherent cost function.

Stationary Points of the Cost Function

The stationary points of the cost function can be obtained by using the method for complex differentiation described in Appendix B of [11], forming the derivative (18) and setting the result equal to zero. This yields the eigenvalue problem (19), where λ denotes the eigenvalue. Since H is a Hermitian positive semidefinite matrix, there are Lc vectors ci, 0 ≤ i ≤ Lc − 1, of unit length which solve (19), corresponding to the Lc real, nonnegative eigenvalues λi, 0 ≤ i ≤ Lc − 1 [11]. In [10], it is shown that the eigenvector copt corresponding to the minimum eigenvalue λmin minimizes the cost function JL(c) and the error variance, whereas the eigenvector corresponding to the maximum eigenvalue maximizes JL(c). The remaining eigenvectors correspond to saddle points. The minimum error variance is given by (20). (A numerical sketch of this eigenvector solution is given below, following the LMS discussion.)

(19) shows that the noise variance has no influence on the optimum equalizer coefficients copt; it only influences the minimum error variance (cf. (20)). This implies that the resulting equalizer is related to a ZF equalizer. However, in contrast to ZF equalization, NMIE in general does not force the coefficients of the overall impulse response for k ≠ k0 to zero, but minimizes them in the mean-square sense (cf. (10), (17)).

Infinite Length Equalizer

The error variance corresponding to copt becomes minimum for λmin = 0, i.e., if H is a singular matrix. In this case, (21) results from (20). From (10), it can be seen that (21) is obtained if and only if the coefficients of the overall impulse response vanish for k ≠ k0. This corresponds to a transfer function of the resulting equalizer given by (22), where p denotes a complex constant and the remaining factor is the transfer function of the discrete-time channel. Up to the constant p, infinite-tap NMIE is identical to coherent ZF equalization [1, 2]. From (16), (23) follows. Using this and (22), |p| can be calculated as (24). Note that the phase of p is arbitrary. The transfer function of the combination of equalizer and overall channel is given by (25). Since this transfer function is the Fourier transform of gk, (26) follows.

Performance for N → ∞

In the following, we derive the limiting performance of the infinite-length equalizer for N → ∞. In this case, (27) holds in the mean square [10], and the decision variable of the infinite-length equalizer is given by (28). From (28), it can be seen that the signal-to-noise ratio (SNR) of NMIE can be expressed as (29), where (16) is used. Applying (24) and (26) in (29) yields (30). The same expression can be obtained for a coherent infinite-length ZF equalizer for M-APSK, i.e., if no differential encoding is employed [2]. Note that for our analysis we assumed that the fed-back symbols equal the transmitted symbols, ŝ[k − k0] = s[k − k0], i.e., all feedback symbols are correct. This explains why the resulting equalizer (which is not implementable, of course) is lower bounded by a coherent infinite-length ZF equalizer for M-APSK. The simulations in Section 5 show that realizable NMIE (i.e., without genie) is lower bounded by coherent infinite-length ZF equalization for M-DAPSK.

4. LMS Algorithm

Derivation

A gradient algorithm is given by (31) and (32), where δLMS is the adaptation step-size parameter and ‖·‖ denotes the L2 norm of a vector; c[k] is the equalizer coefficient vector, which is now time-variant. (32) ensures that the constraint given by (16) is fulfilled.
e[k] is given by (33) with (34). Note that qref[k − 1] depends only on earlier coefficient vectors, but not on c[k]. Therefore, it has to be treated as a constant for the differentiation with respect to c[k] (cf. [3, 5]). Hence, (35) results. The resulting modified LMS algorithm consists of (32), (33), and (35). It is shown in [10] that the modified LMS algorithm enjoys global convergence and minimizes the noncoherent cost function (17) in all practically relevant cases.

Convergence Speed of the Modified LMS Algorithm

Fig. 2 shows learning curves for the proposed modified LMS algorithm (δLMS = ) for a QDPSK constellation (M = 4). The impulse response of the channel, adapted from [2], is h0 = , h1 = , h2 =  (Lh = 3), and 10 log10(Eb/N0) = 10 dB holds (Eb = Es/log2(M) = Es/2 is the mean received energy per bit). The equalizer length is chosen as Lc = 7 and the decision delay is k0 = 4. In order to demonstrate that the convergence speed of the proposed algorithm is similar to that of the conventional LMS algorithm, we also include the learning curves for a conventional LMS algorithm (δLMS = ) [2, 11]. However, it has to be mentioned that the error signals of the modified and the conventional algorithm are completely different; therefore, a direct comparison of the steady-state errors is not possible. For the modified LMS algorithm, the error measure E{|e[k]|²} is used, whereas for the conventional LMS algorithm, the definition proposed in [11] is used. In all cases, averaging is done over 1000 adaptation processes; c[0] is initialized such that only its k0-th coefficient is nonzero (ck[0] = 0 for k ≠ k0), and a training sequence is used. It can be seen from Fig. 2 that the steady-state error of the modified LMS algorithm decreases as N increases. The dashed lines correspond to the theoretical steady-state error variance of infinite-length NMIE calculated from (21) and (39). There is good agreement between theory and simulation. The simulated steady-state error is slightly higher since a finite-length equalizer is used and because of gradient noise. It can be seen that th
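
As a numerical illustration of the closed-form solution discussed above, the following minimal Python sketch computes the optimum coefficient vector of the eigenvalue problem (19): the unit-length eigenvector of the Hermitian positive semidefinite matrix H belonging to its smallest eigenvalue. The construction of H from the channel and the received data is not reproduced here; the matrix, the function name, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nmie_block_solution(H):
    """Closed-form NMIE solution: the unit-length eigenvector of the Hermitian
    positive semidefinite matrix H belonging to its smallest eigenvalue, cf. (19)."""
    # eigh returns real eigenvalues in ascending order for Hermitian matrices
    eigenvalues, eigenvectors = np.linalg.eigh(H)
    lambda_min = eigenvalues[0]
    c_opt = eigenvectors[:, 0]            # unit length, so the constraint (16) holds
    return c_opt, lambda_min

# Toy example: a random Hermitian positive semidefinite matrix of size Lc = 7
rng = np.random.default_rng(0)
A = rng.standard_normal((7, 7)) + 1j * rng.standard_normal((7, 7))
H = A.conj().T @ A                        # Hermitian and positive semidefinite
c_opt, lambda_min = nmie_block_solution(H)
print(np.linalg.norm(c_opt))              # 1.0, i.e., constraint (16) is fulfilled
print(lambda_min)                         # smallest eigenvalue, related to the minimum error variance (20)
```

Since only the direction of the eigenvector is determined, any phase rotation of copt is an equally valid solution, which is consistent with the observation above that the phase of p is arbitrary.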
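
For the adaptive solution, the sketch below indicates the structure of a norm-constrained complex LMS update in the spirit of (31), (32), and (35): a gradient step driven by the error signal, followed by renormalization of the coefficient vector so that the constraint (16) remains fulfilled. Because (33) and (34) are not reproduced above, the reference value is passed in as a placeholder to be formed from qref[k − 1] and the training or fed-back symbol; all names are illustrative, and the exact gradient of the paper's modified LMS algorithm may differ.

```python
import numpy as np

def modified_lms_step(c, r, reference, delta_lms):
    """One norm-constrained complex LMS iteration (sketch in the spirit of (31)-(35)).

    c          -- current equalizer coefficient vector c[k]
    r          -- received-sample vector in the equalizer delay line
    reference  -- desired value for this step; per (33)-(34) it is built from
                  qref[k-1] and the training or fed-back symbol and is treated
                  as a constant with respect to c[k]
    delta_lms  -- adaptation step size deltaLMS
    """
    y = np.vdot(c, r)                        # equalizer output c^H r
    e = reference - y                        # error signal, cf. (33)
    c_new = c + delta_lms * np.conj(e) * r   # complex LMS gradient step
    c_new /= np.linalg.norm(c_new)           # renormalization enforcing the constraint (16), cf. (32)
    return c_new, e
```

Averaging |e[k]|² over many independent adaptation runs (1000 in Fig. 2) then yields learning curves comparable to those discussed above.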