Ka-fu Wong © 2007 ECON1003: Analysis of Economic Data, Lesson 11-11

To estimate β0 and β1 using ML (Computer)

◼ We do not know β0, β1, β2, …, βk. Nor do we know the ei. In fact, our objective is to estimate β0, β1, β2, …, βk.
◼ The procedure of ML:
  1. Assume a combination of values for β0, β1, β2, …, βk; call it b0, b1, b2, …, bk. Compute the implied ei = yi − b0 − b1x1i − b2x2i − … − bkxki and f(ei) = f(yi − b0 − b1x1i − b2x2i − … − bkxki).
  2. Compute the joint likelihood conditional on the assumed values b0, b1, b2, …, bk:
     L(b0, b1, b2, …, bk) = f(e1)*f(e2)*…*f(en)
◼ Try many more combinations of b0, b1, b2, …, bk and repeat the above two steps, using a computer program (such as Excel).
◼ Choose the b0, b1, b2, …, bk that yield the largest joint likelihood.
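The two-step search above can be sketched in Python rather than Excel. A minimal sketch, assuming normally distributed errors and hypothetical simulated data with true β0 = 2 and β1 = 3:

```python
import numpy as np

# Hypothetical simulated data: y = 2 + 3*x + e, with e ~ N(0, 1)
rng = np.random.default_rng(0)
n = 100
x = rng.uniform(0, 10, n)
y = 2 + 3 * x + rng.normal(0, 1, n)

def log_likelihood(b0, b1, sigma=1.0):
    """Log of the joint likelihood f(e1)*f(e2)*...*f(en) under N(0, sigma^2)."""
    e = y - b0 - b1 * x                       # step 1: implied residuals
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - e**2 / (2 * sigma**2))

# Try many combinations of (b0, b1) and keep the one with the largest likelihood
grid = [(b0, b1) for b0 in np.linspace(0, 4, 81) for b1 in np.linspace(1, 5, 81)]
best = max(grid, key=lambda p: log_likelihood(*p))
print(best)  # close to the true (2, 3)
```

Working with the log-likelihood avoids numerical underflow from multiplying many small densities; maximizing it is equivalent to maximizing the likelihood itself.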
To estimate β0 and β1 using ML (Calculus)

◼ The procedure of ML:
  1. Assume a combination of values for β0, β1, β2, …, βk; call it b0, b1, b2, …, bk. Compute the implied ei = yi − b0 − b1x1i − b2x2i − … − bkxki and f(ei) = f(yi − b0 − b1x1i − b2x2i − … − bkxki).
  2. Compute the joint likelihood conditional on the assumed values b0, b1, b2, …, bk:
     L(b0, b1, b2, …, bk) = f(e1)*f(e2)*…*f(en)
◼ Choose b0, b1, b2, …, bk to maximize the likelihood function L(b0, b1, b2, …, bk), using calculus:
  ◼ Take the first derivative of L(b0, b1, b2, …, bk) with respect to b0; set it to zero.
  ◼ Take the first derivative of L(b0, b1, b2, …, bk) with respect to each bj, j = 1, …, k; set each to zero.
  ◼ Solve for b0, b1, b2, …, bk using the resulting k+1 equations.
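With normally distributed errors, setting these k+1 derivatives to zero leads to the so-called normal equations, X'Xb = X'y, which a program can solve in one step. A sketch with hypothetical simulated data (true β0 = 2, β1 = 3):

```python
import numpy as np

# Hypothetical data: y = 2 + 3*x1 + e, with e ~ N(0, 1)
rng = np.random.default_rng(1)
n = 200
x1 = rng.uniform(0, 10, n)
y = 2 + 3 * x1 + rng.normal(0, 1, n)

# Setting the k+1 first derivatives of the log-likelihood to zero
# (under normal errors) gives the normal equations X'X b = X'y.
X = np.column_stack([np.ones(n), x1])   # column of ones corresponds to b0
b = np.linalg.solve(X.T @ X, X.T @ y)   # solves all k+1 equations at once
print(b)  # close to the true (2, 3)
```

This is why, under normal errors, the ML estimates coincide with the least-squares estimates discussed on the next slides.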
Estimation: Ordinary least squares

◼ For each value of X, there is a group of Y values, and these Y values are normally distributed:
  Yi ~ N(E(Y|X1, X2, …, Xk), σi²), i = 1, 2, …, n.
◼ The means of these normal distributions of Y values all lie on the regression line:
  E(Y|X1, X2, …, Xk) = β0 + β1X1 + β2X2 + … + βkXk.
◼ The standard deviations of these normal distributions are equal:
  σi² = σ², i = 1, 2, …, n, i.e., homoskedasticity.
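These assumptions can be illustrated by simulation: at each X value, draw many Y's and check that their mean lies on the line and that their spread is the same. The parameter values below (β0 = 2, β1 = 3, σ = 1) are hypothetical:

```python
import numpy as np

# Sketch of the model's assumptions: at each X the Y's are normal,
# with mean on the regression line and a common variance (homoskedasticity).
rng = np.random.default_rng(2)
beta0, beta1, sigma = 2.0, 3.0, 1.0   # hypothetical true parameters

for x in [1.0, 5.0, 9.0]:
    mean_y = beta0 + beta1 * x                      # E(Y|X) lies on the line
    y_draws = mean_y + rng.normal(0, sigma, 10000)  # same sigma at every x
    print(x, round(y_draws.mean(), 1), round(y_draws.std(), 1))
```

The printed means track β0 + β1x exactly, while the printed standard deviation is close to σ at every x.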
Choosing the line that fits best: the Ordinary Least Squares (OLS) principle

◼ Straight lines can be described generally by
  yi = b0 + b1x1i + b2x2i + … + bkxki, i = 1, …, n.
◼ Finding the best line, i.e., the one with the smallest sum of squared differences, is the same as solving
  Min S(b0, b1, …, bk) = Σ [yi − (b0 + b1x1i + b2x2i + … + bkxki)]².
◼ It can be shown that this minimization yields sample moment conditions similar to those discussed earlier in the method of moments.
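For the one-regressor case, the minimizers of S have the familiar closed form b1 = cov(x, y)/var(x) and b0 = ȳ − b1·x̄, and no other line gives a smaller sum of squares. A sketch with hypothetical data:

```python
import numpy as np

# Hypothetical data: y = 2 + 3*x1 + e, with e ~ N(0, 1)
rng = np.random.default_rng(3)
n = 100
x1 = rng.uniform(0, 10, n)
y = 2 + 3 * x1 + rng.normal(0, 1, n)

def ssr(b0, b1):
    """Sum of squared differences S(b0, b1)."""
    return np.sum((y - b0 - b1 * x1) ** 2)

# Closed-form OLS estimates from the sample moment conditions
b1_hat = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)
b0_hat = y.mean() - b1_hat * x1.mean()

# Perturbing either coefficient only increases the sum of squares
assert ssr(b0_hat, b1_hat) <= ssr(b0_hat + 0.1, b1_hat)
assert ssr(b0_hat, b1_hat) <= ssr(b0_hat, b1_hat + 0.1)
print(round(b0_hat, 2), round(b1_hat, 2))  # close to (2, 3)
```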
It can be shown that the estimators are BLUE

◼ Best: smallest variance (among linear unbiased estimators)
◼ Linear: a linear combination of the yi
◼ Unbiased: E(b0) = β0, E(b1) = β1
◼ Estimator
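The "Linear" and "Unbiased" properties can be checked by Monte Carlo: the OLS slope is a weighted sum of the yi, and its average over many simulated samples is close to the true β1. The data-generating values below are hypothetical:

```python
import numpy as np

# Monte Carlo sketch of unbiasedness: the average OLS slope across many
# simulated samples should be close to the true beta1 (here 3).
rng = np.random.default_rng(4)
beta0, beta1, n = 2.0, 3.0, 50
x = rng.uniform(0, 10, n)  # regressors held fixed across replications

# OLS slope weights: b1 = sum(w_i * y_i), a linear combination of the y_i
w = (x - x.mean()) / np.sum((x - x.mean()) ** 2)

slopes = []
for _ in range(2000):
    y = beta0 + beta1 * x + rng.normal(0, 1, n)
    slopes.append(np.sum(w * y))

print(round(np.mean(slopes), 2))  # close to 3, consistent with unbiasedness
```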