The middle term in (8) is similarly

$$
\begin{aligned}
\frac{1}{2}\log\left|\sigma^{-2}L'L\right|
&= \frac{1}{2}\log\left\{\sigma^{-2T}\cdot\left|L'L\right|\right\} \\
&= -\frac{T}{2}\log\sigma^{2} + \frac{1}{2}\log\left\{\left|L'\right|\left|L\right|\right\} \\
&= -\frac{T}{2}\log\sigma^{2} + \log\left|L\right| \\
&= -\frac{T}{2}\log\sigma^{2} + \frac{1}{2}\log(1-\phi^{2}),
\end{aligned}
$$

where the third line uses $|L'| = |L|$, and the last line holds since $L$ is triangular, so that $|L|$ is the product of its diagonal elements, $\sqrt{1-\phi^{2}}$. Thus equations (6) and (7) are just two different expressions for the same magnitude; either expression accurately describes the log likelihood function.

1.3 Exact Maximum Likelihood Estimators for the Gaussian AR(1) Process

The MLE $\hat{\theta}$ is the value for which (6) is maximized. In principle, this requires differentiating (6) and setting the result equal to zero. In practice, when an attempt is made to carry this out, the result is a system of nonlinear equations in $\theta$ and $(Y_1, Y_2, \ldots, Y_T)$ for which there is no simple solution for $\theta$ in terms of $(Y_1, Y_2, \ldots, Y_T)$. Maximization of (6) thus requires the iterative or numerical procedures described on p. 21 of Chapter 3.
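As a concrete illustration, the numerical maximization can be carried out by applying a general-purpose optimizer to the negative of the exact log likelihood. The following minimal Python sketch writes out that objective explicitly, with the first observation contributing its unconditional Gaussian density and the remaining observations contributing conditionally; the function name, starting values, and the choice of the Nelder–Mead method are illustrative assumptions rather than part of the estimator itself.

```python
import numpy as np
from scipy.optimize import minimize

def exact_ar1_negloglik(params, y):
    """Negative of the exact Gaussian AR(1) log likelihood in (6)."""
    c, phi, sigma2 = params
    if abs(phi) >= 1.0 or sigma2 <= 0.0:    # enforce stationarity and sigma^2 > 0
        return np.inf
    T = len(y)
    mu1 = c / (1.0 - phi)                   # unconditional mean of y_1
    var1 = sigma2 / (1.0 - phi**2)          # unconditional variance of y_1
    # contribution of y_1, drawn from its unconditional distribution
    ll = -0.5 * np.log(2.0 * np.pi * var1) - (y[0] - mu1) ** 2 / (2.0 * var1)
    # contributions of y_2, ..., y_T, each conditional on its predecessor
    eps = y[1:] - c - phi * y[:-1]
    ll -= 0.5 * (T - 1) * np.log(2.0 * np.pi * sigma2) + np.sum(eps**2) / (2.0 * sigma2)
    return -ll

# hypothetical usage with an observed series y:
# res = minimize(exact_ar1_negloglik, x0=[0.0, 0.5, 1.0], args=(y,),
#                method="Nelder-Mead")
# c_hat, phi_hat, sigma2_hat = res.x
```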
1.4 Conditional Maximum Likelihood Estimation

An alternative to numerical maximization of the exact likelihood function is to regard the value of $y_1$ as deterministic and to maximize the likelihood conditioned on the first observation,

$$
f_{Y_T, Y_{T-1}, \ldots, Y_2 \mid Y_1}\left(y_T, y_{T-1}, \ldots, y_2 \mid y_1; \theta\right)
= \prod_{t=2}^{T} f_{Y_t \mid Y_{t-1}}\left(y_t \mid y_{t-1}; \theta\right),
$$

the objective then being to maximize

$$
\mathcal{L}^{*}(\theta)
= -\frac{T-1}{2}\log(2\pi) - \frac{T-1}{2}\log(\sigma^{2})
  - \sum_{t=2}^{T} \frac{(y_t - c - \phi y_{t-1})^{2}}{2\sigma^{2}}
= -\frac{T-1}{2}\log(2\pi) - \frac{T-1}{2}\log(\sigma^{2})
  - \sum_{t=2}^{T} \frac{\varepsilon_t^{2}}{2\sigma^{2}}. \tag{9}
$$

Maximization of (9) with respect to $c$ and $\phi$ is equivalent to minimization of

$$
\sum_{t=2}^{T} (y_t - c - \phi y_{t-1})^{2} = (y - X\beta)'(y - X\beta), \tag{10}
$$

which is achieved by an ordinary least squares (OLS) regression of $y_t$ on a constant and its own lagged value, where

$$
y = \begin{bmatrix} y_2 \\ y_3 \\ \vdots \\ y_T \end{bmatrix},
\qquad
X = \begin{bmatrix} 1 & y_1 \\ 1 & y_2 \\ \vdots & \vdots \\ 1 & y_{T-1} \end{bmatrix},
\qquad
\beta = \begin{bmatrix} c \\ \phi \end{bmatrix}.
$$

The conditional maximum likelihood estimates of $c$ and $\phi$ are therefore given by

$$
\begin{bmatrix} \hat{c} \\ \hat{\phi} \end{bmatrix}
= \begin{bmatrix}
    T-1 & \sum_{t=2}^{T} y_{t-1} \\
    \sum_{t=2}^{T} y_{t-1} & \sum_{t=2}^{T} y_{t-1}^{2}
  \end{bmatrix}^{-1}
  \begin{bmatrix}
    \sum_{t=2}^{T} y_t \\
    \sum_{t=2}^{T} y_{t-1} y_t
  \end{bmatrix}.
$$

The conditional maximum likelihood estimator of $\sigma^{2}$ is found by setting

$$
\frac{\partial \mathcal{L}^{*}}{\partial \sigma^{2}}
= -\frac{T-1}{2\sigma^{2}} + \sum_{t=2}^{T} \frac{(y_t - c - \phi y_{t-1})^{2}}{2\sigma^{4}} = 0,
$$

or

$$
\hat{\sigma}^{2} = \sum_{t=2}^{T} \frac{(y_t - \hat{c} - \hat{\phi} y_{t-1})^{2}}{T-1}.
$$

It is important to note that if you have a sample of size $T$ with which to estimate an AR(1) process by conditional MLE, you will use only $T-1$ observations of that sample.
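To make the OLS correspondence concrete, the following minimal Python sketch computes $(\hat{c}, \hat{\phi})$ by regressing $y_t$ on a constant and $y_{t-1}$ and then forms $\hat{\sigma}^{2}$ from the residuals; the function name is a hypothetical choice.

```python
import numpy as np

def ar1_conditional_mle(y):
    """Conditional MLE of (c, phi, sigma^2) for a Gaussian AR(1):
    OLS of y_t on a constant and y_{t-1}, using T - 1 observations."""
    y = np.asarray(y, dtype=float)
    Y = y[1:]                                        # dependent variable: y_2, ..., y_T
    X = np.column_stack([np.ones(len(Y)), y[:-1]])   # constant and lagged value
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)     # solves (10) by least squares
    c_hat, phi_hat = beta
    resid = Y - X @ beta                             # fitted residuals
    sigma2_hat = resid @ resid / len(Y)              # sum of squares over T - 1
    return c_hat, phi_hat, sigma2_hat
```

Using np.linalg.lstsq rather than forming the $2\times 2$ inverse above is a numerically stable way of solving the same normal equations; both yield identical estimates.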