12.540 Principles of the Global Positioning System
Lecture 13
Prof. Thomas Herring
03/31/03

Estimation
• Summary
  – First-order Gauss-Markov processes
  – Kalman filters: estimation in which the parameters to be estimated are changing with time
Specific common processes
• White noise: the autocorrelation is a Dirac delta function; the PSD is flat; the integral of the power under the PSD is the variance of the process (true in general; see the numerical check below)
• First-order Gauss-Markov (FOGM) process (one of the most common models in Kalman filtering):

    φ_xx(τ) = σ² e^(−β|τ|)
    Φ_xx(ω) = 2βσ² / (ω² + β²)

  where 1/β is the correlation time
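A quick numerical check of the statement above that the integral of the power under the PSD equals the process variance, here for the FOGM PSD Φ_xx(ω) = 2βσ²/(ω² + β²). This is only a sketch: it assumes SciPy is available and uses the convention that the variance is (1/2π)∫Φ_xx(ω)dω; the values of σ² and β are arbitrary.

    import numpy as np
    from scipy.integrate import quad

    sigma2 = 4.0   # process variance sigma^2 (arbitrary)
    beta = 0.5     # inverse correlation time (arbitrary)

    # FOGM power spectral density
    psd = lambda w: 2.0 * beta * sigma2 / (w**2 + beta**2)

    # Integral of the PSD over all frequencies, divided by 2*pi,
    # should recover the process variance sigma^2
    integral, _ = quad(psd, -np.inf, np.inf)
    print(integral / (2.0 * np.pi), sigma2)   # both ~4.0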
Other characteristics of FOGM
• Excitation function (white-noise excitation):

    dx/dt = −β x(t) + w(t)

• Solution (illustrated in the sketch below):

    x(t + Δt) = e^(−Δtβ) x(t) + e^(−Δtβ) ∫₀^Δt e^(uβ) w(t + u) du

• White noise w(t) with power Φ gives process variance σ² = Φ/(2β)
• Variance over an interval T:

    σ_T²(T) = σ² (1 + (1 − e^(−2Tβ)) / (2Tβ))

• Variance of the change over lag τ:

    D(τ) = 2σ² (1 − e^(−β|τ|))
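A minimal sketch of the solution equation above in discrete form (NumPy assumed; parameter values are arbitrary): over a step Δt the state is damped by e^(−Δtβ) and driven by a noise term whose variance is σ²(1 − e^(−2Δtβ)), which is what the integral term contributes, so the simulated process keeps the steady-state variance σ².

    import numpy as np

    def simulate_fogm(sigma, beta, dt, n_steps, seed=0):
        """Simulate a first-order Gauss-Markov process with steady-state
        standard deviation sigma and correlation time 1/beta, sampled every dt."""
        rng = np.random.default_rng(seed)
        damp = np.exp(-beta * dt)                 # damping term e^(-dt*beta)
        q = sigma**2 * (1.0 - damp**2)            # variance of the integrated noise term
        x = np.empty(n_steps)
        x[0] = rng.normal(0.0, sigma)             # start in steady state
        for k in range(n_steps - 1):
            x[k + 1] = damp * x[k] + rng.normal(0.0, np.sqrt(q))
        return x

    # Example: correlation time 1/beta = 100 s, sampled every 1 s
    x = simulate_fogm(sigma=2.0, beta=0.01, dt=1.0, n_steps=20000)
    print(x.var())   # should be close to sigma**2 = 4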
Characteristics of FOGM
• This process noise model is very useful because as β, the inverse correlation time, goes to infinity (zero correlation time), the process becomes white noise
• When the correlation time goes to infinity (β → 0), the process becomes a random walk (i.e., a sum of white noise); both limits are illustrated in the sketch below
• NOTE: a random walk is not a stationary process because its variance tends to infinity as time goes to infinity
• In the FOGM solution equation, note the damping term e^(−Δtβ) x(t), which keeps the process bounded
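The two limits above can be seen in the variance-of-change expression D(τ) = 2σ²(1 − e^(−βτ)) from the previous slide. A small sketch (NumPy assumed, values arbitrary), holding the driving white-noise power Φ fixed so that σ² = Φ/(2β):

    import numpy as np

    def d_of_tau(tau, sigma2, beta):
        """Variance of the change of a FOGM process over lag tau."""
        return 2.0 * sigma2 * (1.0 - np.exp(-beta * tau))

    phi = 1.0                               # driving white-noise power, held fixed
    tau = np.array([1.0, 10.0, 100.0, 1000.0])

    for beta in (1e-4, 1e2):                # long vs short correlation time
        sigma2 = phi / (2.0 * beta)         # steady-state variance sigma^2 = Phi/(2*beta)
        print(f"beta={beta:g}:", d_of_tau(tau, sigma2, beta))

    # beta -> 0  : D(tau) ~ Phi * tau, growing linearly (random-walk behaviour)
    # beta large : D(tau) saturates at 2*sigma^2 immediately (white-noise-like)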
Formulation of Kalman filter
• A Kalman filter is an implementation of a Bayes estimator
• The basic concept behind the filter is that some of the parameters being estimated are random processes; as data are added to the filter, the parameter estimates depend on the new data and on the changes in the process noise between measurements
• Parameters with no process noise are called deterministic
Formulation
• For a Kalman filter, you have measurements y(t) with noise v(t) and a state vector (parameter list), all of which have specified statistical properties (a synthetic example follows below):

    y_t = A_t x_t + v_t                      Observation equation at time t
    x_(t+1) = S_t x_t + w_t                  State transition equation
    <v_t v_tᵀ> = V_t,  <w_t w_tᵀ> = W_t      Covariance matrices
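To make the notation concrete, a small sketch (NumPy assumed; the two-parameter state and all numerical values are invented for illustration) that generates synthetic data from the observation and state transition equations above:

    import numpy as np

    rng = np.random.default_rng(1)

    dt = 30.0
    S = np.array([[1.0, dt],
                  [0.0, 1.0]])                   # state transition S_t (offset + rate)
    W = np.diag([0.0, 1e-10])                    # process noise covariance W_t
    A = np.array([[1.0, 0.0]])                   # observation matrix A_t (offset observed)
    V = np.array([[0.01]])                       # measurement noise covariance V_t

    x = np.array([0.0, 1e-3])                    # "true" state at the first epoch
    observations = []
    for _ in range(100):
        v = rng.normal(0.0, np.sqrt(V[0, 0]), size=1)
        observations.append(A @ x + v)           # y_t = A_t x_t + v_t
        w = rng.normal(0.0, np.sqrt(np.diag(W))) # w_t (diagonal W_t assumed)
        x = S @ x + w                            # x_(t+1) = S_t x_t + w_t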
Basic Kalman filter steps
• The Kalman filter can be broken into three basic steps
• Prediction: using the process noise model, "predict" the parameters at the next data epoch (a code sketch follows below). The subscript is the time the quantity refers to; the superscript is the data used:

    x̂_(t+1)^t = S_t x̂_t^t                   S_t is the state transition matrix
    C_(t+1)^t = S_t C_t^t S_tᵀ + W_t         W_t is the process noise covariance matrix
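The prediction equations above map directly into code. A minimal sketch (NumPy assumed; the comments use (t+1|t) as shorthand for the slide's subscript/superscript notation, and all numerical values are illustrative):

    import numpy as np

    def predict(x_hat, C, S, W):
        """Kalman prediction step: propagate the state estimate and its
        covariance from epoch t to t+1 using the state transition matrix S
        and the process noise covariance W."""
        x_hat_pred = S @ x_hat            # x_hat(t+1|t) = S_t x_hat(t|t)
        C_pred = S @ C @ S.T + W          # C(t+1|t) = S_t C(t|t) S_t^T + W_t
        return x_hat_pred, C_pred

    # Example: offset + rate state, with a small random walk on the rate
    dt = 30.0
    S = np.array([[1.0, dt],
                  [0.0, 1.0]])
    W = np.diag([0.0, 1e-8])              # zero process noise on the offset (deterministic)
    x_hat = np.array([0.0, 1e-3])
    C = np.diag([1.0, 1e-6])
    x_pred, C_pred = predict(x_hat, C, S, W)
    print(x_pred, np.diag(C_pred))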
Prediction step
• The state transition matrix S projects the state vector (parameters) forward to the next time:
  – For random walks: S = 1
  – For rate terms: S is the matrix [1 Δt; 0 1]
  – For FOGM: S = e^(−Δtβ)
  – For white noise: S = 0
• The second equation projects the covariance matrix of the state vector, C, forward in time, with contributions from the state transition and the process noise (W matrix). W elements are 0 for deterministic parameters. (A sketch of building S and W for a mixed state follows below.)
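A sketch (NumPy assumed; all numerical values are invented) of assembling S and W for a mixed state vector containing a deterministic offset with a rate, a random-walk parameter, a FOGM parameter, and a white-noise parameter, following the cases listed above:

    import numpy as np

    dt = 30.0            # time between epochs (s)
    beta = 1.0 / 3600.0  # FOGM inverse correlation time (1/s)

    # State order: [offset, rate, random-walk, FOGM, white-noise]
    S = np.diag([1.0, 1.0, 1.0, np.exp(-dt * beta), 0.0])
    S[0, 1] = dt         # rate term: [1 dt; 0 1] block for the offset/rate pair

    sig2_rw   = 1e-6 * dt                            # random-walk variance grows with dt
    sig2_fogm = 4.0 * (1 - np.exp(-2 * dt * beta))   # keeps FOGM steady-state variance (4.0)
    sig2_wn   = 0.25                                 # white noise fully re-drawn each epoch

    # Deterministic offset and rate get zero process noise
    W = np.diag([0.0, 0.0, sig2_rw, sig2_fogm, sig2_wn])

    C_pred = S @ np.eye(5) @ S.T + W                 # covariance prediction for C = I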
Kalman Gain
• The Kalman gain is the matrix that allocates the differences between the observations at time t+1 and their predicted values at this time, based on the current values of the state vector, according to the noise in the measurements and the state vector noise (a sketch follows below):

    K = C_(t+1)^t A_(t+1)ᵀ (V_(t+1) + A_(t+1) C_(t+1)^t A_(t+1)ᵀ)^(−1)
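A sketch of the gain computation above, plus the standard measurement update that uses it (the update equations are not on this slide; they are included here only to show how K allocates the prediction residual). NumPy assumed; matrix names follow the slide notation, and the example values are invented.

    import numpy as np

    def kalman_gain(C_pred, A, V):
        """Kalman gain K = C A^T (V + A C A^T)^(-1), as on the slide above."""
        innovation_cov = V + A @ C_pred @ A.T
        return C_pred @ A.T @ np.linalg.inv(innovation_cov)

    def measurement_update(x_pred, C_pred, y, A, V):
        """Standard measurement update (not shown on this slide): the gain K
        allocates the prediction residual y - A x_pred to the state estimate."""
        K = kalman_gain(C_pred, A, V)
        x_new = x_pred + K @ (y - A @ x_pred)
        C_new = (np.eye(len(x_pred)) - K @ A) @ C_pred
        return x_new, C_new

    # Example with a 2-parameter state and a single observation of the offset
    A = np.array([[1.0, 0.0]])
    V = np.array([[0.01]])
    x_pred = np.array([0.0, 1e-3])
    C_pred = np.diag([1.0, 1e-6])
    x_new, C_new = measurement_update(x_pred, C_pred, np.array([0.05]), A, V)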