SFM Kalman V9a, slide 11

The process

State = position, velocity, acceleration.

  x_k = A x_{k-1} + B u_{k-1} + w_{k-1} ---------- (1)

where
  x_k (size n×1) = state at time k
  A   (size n×n) = state transition matrix
  B   (size n×l) = input gain
  u   (size l×1) = input
  w   (size n×1) = state transition noise, prob(w) = N(0, Q)

  z_k = H x_k + v_k ---------- (2)

where
  z_k (size m×1) = measurement
  H   (size m×n) = state-measurement relation
  v   (size m×1) = measurement noise, prob(v) = N(0, R)
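As a sketch, equations (1) and (2) can be simulated for the position/velocity/acceleration state the slide mentions. All numeric values below (dt, Q, R, the initial state) are illustrative assumptions, not values from the slides; the input term Bu is dropped by taking u = 0.

```python
import numpy as np

# Hypothetical 1-D constant-acceleration model of (1) and (2):
# state x = [position, velocity, acceleration]^T, so n = 3, m = 1.
dt = 0.1
A = np.array([[1, dt, 0.5 * dt**2],
              [0, 1,  dt],
              [0, 0,  1]])            # state transition matrix (n x n)
H = np.array([[1.0, 0.0, 0.0]])       # we only measure position (m x n)
Q = 1e-4 * np.eye(3)                  # process noise covariance (assumed)
R = np.array([[1e-2]])                # measurement noise covariance (assumed)

rng = np.random.default_rng(0)
x = np.array([[0.0], [1.0], [0.2]])   # initial state (n x 1), illustrative

# One step of the process and measurement models (u = 0, so Bu vanishes):
w = rng.multivariate_normal(np.zeros(3), Q).reshape(3, 1)  # w ~ N(0, Q)
x = A @ x + w                                              # eq. (1)
v = rng.multivariate_normal(np.zeros(1), R).reshape(1, 1)  # v ~ N(0, R)
z = H @ x + v                                              # eq. (2)
print(x.shape, z.shape)  # (3, 1) (1, 1)
```

Repeating the two lines marked eq. (1) and eq. (2) in a loop generates a full trajectory and its noisy measurements.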
Noise

  w (size n×1) = process noise
  v (size m×1) = measurement noise

E[·] is the expected value (see https://en.wikipedia.org/wiki/Expected_value).

Since the expectation of a matrix is taken element-wise:
  if X = [x_{i,j}] then E[X] = [E[x_{i,j}]],
and if W_k = [w_{k,1}, w_{k,2}, ..., w_{k,n}]^T, then E[W_k W_k^T] is the n×n matrix with entries E[w_{k,i} w_{k,j}].

  E[w_i w_k^T] = Q_k, if i = k
               = 0,   if i ≠ k
  Q_k = the covariance of the process noise.

Because w_k and v_i are independent,
  E[w_k v_i^T] = 0, for all k and i.

Similarly, if V_k = [v_{k,1}, v_{k,2}, ..., v_{k,m}]^T:
  E[v_i v_k^T] = R_k, if i = k
               = 0,   if i ≠ k
  R_k = the covariance of the measurement noise.
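The covariance definitions above can be checked numerically: averaging w w^T over many samples should recover Q, and the cross-expectation of two independent noises should vanish. The Q and measurement-noise values below are illustrative assumptions.

```python
import numpy as np

# Sketch: verify E[w_k w_k^T] = Q and E[w_k v_i^T] = 0 by sampling.
rng = np.random.default_rng(1)
Q = np.diag([0.5, 0.2, 0.1])                # assumed process noise covariance
N = 200_000
W = rng.multivariate_normal(np.zeros(3), Q, size=N)   # each row is one w_k^T

sample_Q = (W.T @ W) / N                    # sample average of w w^T
print(np.allclose(sample_Q, Q, atol=0.01))  # True: sample covariance -> Q

# Independent measurement noise: E[w_k v_i^T] = 0 for all k, i.
V = rng.normal(0.0, 0.1, size=(N, 1))       # each row is one v_i (m = 1)
cross = (W.T @ V) / N                       # sample average of w v^T
print(np.allclose(cross, 0.0, atol=0.01))   # True: cross-covariance -> 0
```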
Kalman Computation: using the current information (a priori) to estimate the predicted future information (a posteriori)

• x is a state (e.g. position, velocity & acceleration, etc.)
• z is a measurement (image position [u,v]^T etc.)
• k is the time step index

Notation: ^ = estimate, superscript - = a priori
  a posteriori state       x_k  | a priori state       x_k^-
  a posteriori est. state  x̂_k  | a priori est. state  x̂_k^-
  a posteriori err. cov.   P_k  | a priori err. cov.   P_k^-

Estimation errors and their covariances:
  e_k^- = x_k - x̂_k^-,   e_k = x_k - x̂_k
  P_k^- = E[e_k^- (e_k^-)^T] = a priori estimate state error covariance
  P_k   = E[e_k e_k^T]       = a posteriori estimate state error covariance

The a posteriori state estimate is related to the a priori estimate as
  x̂_k = x̂_k^- + K_k (z_k - H x̂_k^-) ---------- (3)

Meaning of equ. (3): current est. state = previous est. state + Kalman_gain * (current measurement - H * previous est. state).
(z_k - H x̂_k^-) is called the measurement innovation: the difference between the measurement z_k and the prediction H x̂_k^-.

From (2), z_k = H x_k + v_k, where x_k is the current real state; put it in (3):
  x̂_k = x̂_k^- + K_k (H x_k + v_k - H x̂_k^-) ---------- (4)

K_k (size n×m) = Kalman gain or blending factor
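The update in equation (3) is a one-line computation. In this sketch the gain K_k is a fixed placeholder (the optimal gain is derived later, in equation (8)), and all numeric values are illustrative assumptions.

```python
import numpy as np

# Sketch of eq. (3): x_hat_k = x_hat_k_minus + K_k (z_k - H x_hat_k_minus),
# for a 3-state model where only position is measured (n = 3, m = 1).
x_prior = np.array([[2.0], [1.0], [0.0]])  # a priori estimate (n x 1)
H = np.array([[1.0, 0.0, 0.0]])            # state-measurement relation (m x n)
K = np.array([[0.5], [0.1], [0.01]])       # placeholder gain (n x m), NOT optimal
z = np.array([[2.4]])                      # current measurement z_k (m x 1)

innovation = z - H @ x_prior               # measurement innovation (m x 1)
x_post = x_prior + K @ innovation          # a posteriori estimate, eq. (3)
print(innovation.ravel())                  # [0.4]
print(x_post.ravel())                      # [2.2  1.04  0.004]
```

Note how the scalar innovation (0.4 here) corrects not just the measured position but also the unmeasured velocity and acceleration, in proportion to the rows of K.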
How good is the prediction? We judge it by looking at the error covariance: the smaller P_k, the better.

  P_k = E[(x_k - x̂_k)(x_k - x̂_k)^T] ---------- (5)

Put (4) into (5):
  P_k = E{[x_k - x̂_k^- - K_k(H x_k + v_k - H x̂_k^-)][x_k - x̂_k^- - K_k(H x_k + v_k - H x̂_k^-)]^T}
  P_k = E{[(x_k - x̂_k^-) - K_k H (x_k - x̂_k^-) - K_k v_k][(x_k - x̂_k^-) - K_k H (x_k - x̂_k^-) - K_k v_k]^T}
  P_k = E{[(I - K_k H)(x_k - x̂_k^-) - K_k v_k][(I - K_k H)(x_k - x̂_k^-) - K_k v_k]^T}

Expanding, the cross terms are some function of (x_k - x̂_k^-) v_k^T:
  P_k = E{(I - K_k H)(x_k - x̂_k^-)(x_k - x̂_k^-)^T (I - K_k H)^T} - E{function((x_k - x̂_k^-) v_k^T)} + E{K_k v_k v_k^T K_k^T}

K_k and H are constant, so E(K_k) = K_k and E(H) = H:
  P_k = (I - K_k H) E{(x_k - x̂_k^-)(x_k - x̂_k^-)^T} (I - K_k H)^T - E{function((x_k - x̂_k^-) v_k^T)} + K_k E{v_k v_k^T} K_k^T

Now E{(x_k - x̂_k^-)(x_k - x̂_k^-)^T} = P_k^- and E{v_k v_k^T} = R_k.
(x_k - x̂_k^-) and v_k have no correlation, so E{function((x_k - x̂_k^-) v_k^T)} = 0. Therefore

  P_k = (I - K_k H) P_k^- (I - K_k H)^T + K_k R_k K_k^T ---------- (6)
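Equation (6) (often called the Joseph form) holds for any gain K_k, not only the optimal one, and it preserves the symmetry of P_k by construction. A minimal numeric sketch, with illustrative values:

```python
import numpy as np

# Sketch of eq. (6): P_k = (I - K H) P_prior (I - K H)^T + K R K^T.
n = 3
P_prior = np.eye(n)                     # a priori error covariance P_k^-
H = np.array([[1.0, 0.0, 0.0]])         # measure position only (1 x n)
R = np.array([[0.04]])                  # measurement noise covariance
K = np.array([[0.9], [0.0], [0.0]])     # any gain works in this formula

I = np.eye(n)
P_post = (I - K @ H) @ P_prior @ (I - K @ H).T + K @ R @ K.T  # eq. (6)

# The measured component's uncertainty shrinks: (1-0.9)^2 * 1 + 0.9^2 * 0.04
print(P_post[0, 0])                     # 0.0424
# Symmetry is preserved for any K, which is why this form is numerically safe:
print(np.allclose(P_post, P_post.T))    # True
```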
How to make P_k small? Find the optimal gain K_k to make P_k small.

From http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html:
  dy^T = (dy)^T ;  d(tr{Y}) = tr{dY}
From http://en.wikipedia.org/wiki/Matrix_differentiation:
  d(tr{AB})/dA = B^T, AB must be square
  d(tr{ACA^T})/dA = 2AC, C must be symmetric

From (6), and tr = trace:
  P_k = (I - K_k H) P_k^- (I - K_k H)^T + K_k R_k K_k^T ---------- (6)
  P_k = (P_k^- - K_k H P_k^-)(I - (K_k H)^T) + K_k R_k K_k^T
  P_k = P_k^- - P_k^-(K_k H)^T - K_k H P_k^- + K_k H P_k^-(K_k H)^T + K_k R_k K_k^T
  P_k = P_k^- - P_k^-(K_k H)^T - K_k H P_k^- + K_k H P_k^- H^T K_k^T + K_k R_k K_k^T
  P_k = P_k^- - K_k H P_k^- - P_k^-(K_k H)^T + K_k (H P_k^- H^T + R_k) K_k^T ---------- (7)

Since the trace of a matrix = the trace of its transpose:
  tr{P_k} = tr{P_k^-} - 2 tr{K_k H P_k^-} + tr{K_k (H P_k^- H^T + R_k) K_k^T}
so
  d(tr{P_k})/dK_k = -2(H P_k^-)^T + 2 K_k (H P_k^- H^T + R_k) = 0
Result:
  K_k = P_k^- H^T (H P_k^- H^T + R_k)^{-1} ---------- (8)

This K_k is called the optimal gain.

Note: for the Linear Kalman Filter, K (the Kalman gain) and P (the covariance) depend only on Q (the system noise setting) and R (the measurement noise setting), not on z (the measurement). So if Q and R are given, K and P can be pre-calculated.
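The closing note can be demonstrated directly: iterating equations (8) and (6) together with the prediction step produces the whole gain sequence without ever touching a measurement z. The model matrices below are illustrative assumptions (the same 3-state constant-acceleration sketch as earlier).

```python
import numpy as np

# Pre-computing the Kalman gain: K depends only on A, H, Q, R, never on z.
dt = 0.1
A = np.array([[1, dt, 0.5 * dt**2],
              [0, 1,  dt],
              [0, 0,  1]])             # state transition (assumed model)
H = np.array([[1.0, 0.0, 0.0]])        # measure position only
Q = 1e-4 * np.eye(3)                   # process noise covariance (assumed)
R = np.array([[1e-2]])                 # measurement noise covariance (assumed)

P = np.eye(3)                          # initial a priori covariance P_0^-
I = np.eye(3)
gains = []
for _ in range(50):                    # no measurements needed anywhere here
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # eq. (8)
    P_post = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T  # eq. (6)
    P = A @ P_post @ A.T + Q           # predict next a priori covariance
    gains.append(K)

print(gains[0].ravel())                # first gain, ~[0.990, 0, 0] since
                                       # P_0^- = I gives K_0 = [1/1.01, 0, 0]^T
```

At run time the filter can then just look up the stored gains (or their steady-state limit) instead of recomputing (8) every step.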