The mean-square error function:

$E\{e^2(k)\} = E\{d^2(k)\} - 2R_{xd}^T W + W^T R_{xx} W$

By taking the derivative of the above expression, we have the gradient of the mean-square error function:

$\nabla(k) = \dfrac{\partial E\{e^2(k)\}}{\partial W} = -2R_{xd} + 2R_{xx} W$
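As a sanity check on the two formulas above, here is a minimal NumPy sketch (the matrix $R_{xx}$, vector $R_{xd}$, and scalar $E\{d^2\}$ are arbitrary toy values invented for illustration) that compares the analytic gradient $-2R_{xd} + 2R_{xx}W$ against a central finite-difference gradient of the mean-square error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic error surface: Rxx is E{x x^T} (made SPD by construction),
# Rxd is E{x d}, Ed2 is E{d^2(k)}. All values are illustrative only.
N = 4
A = rng.normal(size=(N, N))
Rxx = A @ A.T + N * np.eye(N)
Rxd = rng.normal(size=N)
Ed2 = 5.0

def mse(W):
    # E{e^2(k)} = E{d^2(k)} - 2 Rxd^T W + W^T Rxx W
    return Ed2 - 2 * Rxd @ W + W @ Rxx @ W

def grad(W):
    # analytic gradient: -2 Rxd + 2 Rxx W
    return -2 * Rxd + 2 * Rxx @ W

# Central finite differences along each coordinate axis.
W = rng.normal(size=N)
eps = 1e-6
num = np.array([(mse(W + eps * e) - mse(W - eps * e)) / (2 * eps)
                for e in np.eye(N)])
print(np.allclose(num, grad(W), atol=1e-4))
```

Since the error surface is quadratic in $W$, the central difference matches the analytic gradient to rounding error.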
Let $\nabla(k) = 0$; the obtained optimum weight coefficient vector is

$W_{opt} = R_{xx}^{-1} R_{xd}$

Equivalently, $R_{xx} W = R_{xd}$, which is the Wiener-Hopf standard equation. Therefore, the optimum weight coefficient vector is also called the Wiener weight coefficient vector.
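The Wiener-Hopf equation can be solved numerically as a plain linear system. A minimal sketch (with toy $R_{xx}$ and $R_{xd}$ invented for illustration); note that solving $R_{xx}W = R_{xd}$ directly is preferred over explicitly forming $R_{xx}^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
A = rng.normal(size=(N, N))
Rxx = A @ A.T + N * np.eye(N)   # E{x x^T}, SPD by construction
Rxd = rng.normal(size=N)        # E{x d}

# Wiener-Hopf: Rxx W = Rxd  =>  W_opt = Rxx^{-1} Rxd.
W_opt = np.linalg.solve(Rxx, Rxd)

# At W_opt the gradient -2 Rxd + 2 Rxx W vanishes:
print(np.allclose(-2 * Rxd + 2 * Rxx @ W_opt, 0))
```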
• The minimum mean-square error:

$E\{e^2(k)\}_{min} = E\{d^2(k)\} - R_{xd}^T W_{opt}$

• Analyzing the expression $W_{opt} = R_{xx}^{-1} R_{xd}$: in order to obtain the optimum weight coefficients, we need
(1) prior information (e.g. $R_{xd}$);
(2) the inverse operation.
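To make the minimum-MSE formula concrete, here is a sketch under an assumed linear signal model $d(k) = W_{true}^T x(k) + n(k)$ (the model, the weights, and the noise level are all invented for illustration). The correlations are estimated from samples, and the minimum MSE comes out close to the noise variance, as expected:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 5000, 4
X = rng.normal(size=(M, N))                  # rows are input vectors x(k)
W_true = np.array([0.5, -1.0, 0.25, 2.0])    # hypothetical true weights
d = X @ W_true + 0.1 * rng.normal(size=M)    # desired signal plus noise

# Sample estimates of Rxx = E{x x^T} and Rxd = E{x d}.
Rxx = X.T @ X / M
Rxd = X.T @ d / M
W_opt = np.linalg.solve(Rxx, Rxd)

# E{e^2}_min = E{d^2} - Rxd^T W_opt
mmse = d @ d / M - Rxd @ W_opt
print(mmse)   # close to the noise variance 0.1**2 = 0.01
```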
2. The adaptive process

Widrow and Hoff (1960) presented a method for the solution of the optimum weight coefficients. This method has the following advantages:
(1) Simplicity.
(2) The prior information is not required.
(3) The matrix inversion is not required.
The method is the Widrow-Hoff LMS (least-mean-square) algorithm. That is,
$W(k+1) = W(k) - \mu \hat{\nabla}(k)$

where $\mu$ is the convergence parameter, which controls the convergence speed and the stability of the algorithm.
Two keys for the LMS algorithm:
(1) Computing the gradient of the mean-square error function;
(2) Choosing the convergence parameter $\mu$.
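The standard LMS choice for key (1) is the instantaneous gradient estimate $\hat{\nabla}(k) = -2e(k)X(k)$, which turns the update into $W(k+1) = W(k) + 2\mu e(k)X(k)$. A minimal sketch of the resulting loop (the signal model, true weights, noise level, and $\mu$ are all invented for illustration); no prior correlations and no matrix inversion are needed:

```python
import numpy as np

rng = np.random.default_rng(3)
N, steps = 4, 20000
W_true = np.array([1.0, -0.5, 0.3, 0.8])   # hypothetical true weights
mu = 0.01                                  # convergence parameter (assumed small)

W = np.zeros(N)
for _ in range(steps):
    x = rng.normal(size=N)                  # input vector X(k)
    d = x @ W_true + 0.05 * rng.normal()    # desired response d(k)
    e = d - x @ W                           # error e(k) = d(k) - W^T X(k)
    # LMS update with the instantaneous gradient estimate -2 e(k) X(k):
    # W(k+1) = W(k) - mu * grad_hat = W(k) + 2 mu e(k) X(k)
    W = W + 2 * mu * e * x

print(np.round(W, 2))   # converges toward W_true
```

A too-large $\mu$ makes the recursion diverge; a too-small $\mu$ slows convergence, which is exactly the trade-off key (2) refers to.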