Chapter 10 Multicollinearity: What Happens if Explanatory Variables are Correlated
One of the CLRM assumptions is that there is no perfect multicollinearity, that is, no exact linear relationship among the explanatory variables (the Xs) in a multiple regression. In practice one rarely encounters perfect multicollinearity, but cases of near or very high multicollinearity, in which the explanatory variables are approximately linearly related, arise frequently in applications.
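A minimal sketch of this distinction (Python with NumPy, using made-up data and variable names): when X3 is an exact linear function of X2 the design matrix loses full column rank, whereas a merely approximate linear relationship leaves the rank intact but the correlation very close to one.

# Perfect vs. near multicollinearity between two explanatory variables
import numpy as np

rng = np.random.default_rng(0)
x2 = rng.uniform(10, 100, size=50)

x3_perfect = 300 - 2 * x2                            # exact linear function of X2
x3_near = 300 - 2 * x2 + rng.normal(0, 1, size=50)   # approximately linear in X2

for label, x3 in [("perfect", x3_perfect), ("near", x3_near)]:
    X = np.column_stack([np.ones_like(x2), x2, x3])  # design matrix [1, X2, X3]
    corr = np.corrcoef(x2, x3)[0, 1]
    rank = np.linalg.matrix_rank(X)
    print(f"{label}: corr(X2, X3) = {corr:.4f}, rank(X) = {rank} of {X.shape[1]}")

In the perfect case the rank is 2 rather than 3, so X'X cannot be inverted; in the near case the rank is full but the correlation is close to -1.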
The objectives of this chapter:
● The nature of multicollinearity;
● Is multicollinearity really a problem?
● The theoretical consequences of multicollinearity;
● How to detect multicollinearity;
● The remedial measures that can be used to eliminate multicollinearity.
10.1: The Nature of Multicollinearity: The Case of Perfect Multicollinearity
In the case of a perfect linear relationship, or perfect multicollinearity, among the explanatory variables, we cannot obtain unique estimates of all the parameters. And since we cannot obtain unique estimates, we cannot draw any statistical inferences (i.e., test hypotheses) about them from a given sample.
Consider the model

Yi = A1 + A2 X2i + A3 X3i + ui

and suppose the explanatory variables are related exactly by X3i = 300 - 2 X2i. Substituting this into the model gives

Yi = A1 + A2 X2i + A3 (300 - 2 X2i) + ui
   = (A1 + 300 A3) + (A2 - 2 A3) X2i + ui
   = C1 + C2 X2i + ui

OLS estimation of this transformed equation yields estimators of C1 = A1 + 300 A3 and C2 = A2 - 2 A3 only. From the estimators of C1 and C2 we cannot recover unique estimators of A1, A2, and A3: we have only two equations in three unknowns.
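A minimal sketch of this identification failure (Python with NumPy, hypothetical parameter values chosen for illustration): two different parameter vectors (A1, A2, A3) that imply the same C1 and C2 produce identical fitted values, and X'X is singular, so the normal equations have no unique solution.

# Perfect collinearity X3 = 300 - 2*X2 makes (A1, A2, A3) unidentifiable
import numpy as np

rng = np.random.default_rng(1)
x2 = rng.uniform(10, 100, size=30)
x3 = 300 - 2 * x2                          # perfect collinearity, as in the text

X = np.column_stack([np.ones_like(x2), x2, x3])

# Two distinct parameter vectors that both satisfy C1 = A1 + 300*A3 = 10
# and C2 = A2 - 2*A3 = 5 (hypothetical values for illustration).
params_a = np.array([10.0, 5.0, 0.0])
params_b = np.array([-290.0, 7.0, 1.0])

print(np.allclose(X @ params_a, X @ params_b))     # True: identical fitted values
print(np.linalg.matrix_rank(X.T @ X), X.shape[1])  # 2 < 3: X'X is singular

Because the data cannot distinguish params_a from params_b, no estimator based on this sample can deliver unique estimates of A1, A2, and A3.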