Chapter 6 The Two-Variable Model: Hypothesis Testing
The Object of Hypothesis Testing
To answer:
◼ How "good" is the estimated regression line?
◼ How can we be sure that the estimated regression function (i.e., the SRF) is in fact a good estimator of the true PRF?
Yi = B1 + B2Xi + μi
Xi ——— nonstochastic; μi ——— stochastic; Yi ——— stochastic
◼ Before we can tell how good an SRF is as an estimate of the true PRF, we must assume how the stochastic μ terms are generated.
6.1 The Classical Linear Regression Model (CLRM)
CLRM assumptions:
◼ A6.1. The explanatory variable(s) X is uncorrelated with the disturbance term μ.
◼ A6.2. Zero mean value assumption: the expected, or mean, value of the disturbance term μ is zero.
E(μi) = 0    (6.1)
◼ A6.3. Homoscedasticity assumption: the variance of each μi is constant, or homoscedastic.
var(μi) = σ²    (6.2)
◼ A6.4. No autocorrelation assumption: there is no correlation between two error terms.
cov(μi, μj) = 0,  i ≠ j    (6.3)
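Assumptions A6.2–A6.4 can be illustrated with a quick simulation. The sketch below draws i.i.d. normal disturbances (the error distribution and σ = 2 are invented for illustration) and checks the sample analogues of E(μ) = 0, var(μ) = σ², and zero autocorrelation:

```python
# A minimal sketch (assumed example, not from the text): draw disturbances
# that satisfy A6.2-A6.4 and verify their sample moments.
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0                                  # assumed true error std. dev.
mu = rng.normal(0.0, sigma, size=100_000)    # i.i.d. draws: homoscedastic, uncorrelated

print(mu.mean())                             # near 0           (A6.2)
print(mu.var())                              # near sigma^2 = 4 (A6.3)
print(np.corrcoef(mu[:-1], mu[1:])[0, 1])    # near 0           (A6.4, lag-1 check)
```

With a large sample the three printed values settle close to their assumed population counterparts, which is what the CLRM assumptions assert about the data-generating process.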
6.2 Variances and Standard Errors of Ordinary Least Squares (OLS) Estimators
◼ ——Study the sampling variability of OLS estimators.
◼ The variances and standard errors of the OLS estimators:
var(b1) = [ΣXi² / (n·Σxi²)]·σ²    (6.4)
se(b1) = √var(b1)    (6.5)
var(b2) = σ² / Σxi²    (6.6)
se(b2) = √var(b2)    (6.7)
◼ σ̂² is an estimator of σ²:
σ̂² = Σei² / (n − 2)    (6.8)
σ̂ = √σ̂²    (6.9)
where Σei² = RSS (residual sum of squares) = Σ(Yi − Ŷi)², xi = Xi − X̄, and n − 2 is the degrees of freedom.
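Formulas (6.4)–(6.9) can be computed directly. The sketch below uses a small made-up data set (the X and Y values are purely illustrative) and follows the equations term by term:

```python
# A minimal sketch of formulas (6.4)-(6.9) with invented illustrative data.
import numpy as np

X = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # hypothetical regressor
Y = np.array([3.1, 5.2, 6.8, 9.1, 10.9])   # hypothetical response

n = len(X)
x = X - X.mean()                            # deviations: xi = Xi - X-bar

b2 = (x * (Y - Y.mean())).sum() / (x**2).sum()   # OLS slope
b1 = Y.mean() - b2 * X.mean()                    # OLS intercept

e = Y - (b1 + b2 * X)                       # residuals ei = Yi - Y-hat_i
sigma2_hat = (e**2).sum() / (n - 2)         # (6.8): RSS / (n - 2)

var_b1 = (X**2).sum() / (n * (x**2).sum()) * sigma2_hat   # (6.4)
se_b1 = np.sqrt(var_b1)                                   # (6.5)
var_b2 = sigma2_hat / (x**2).sum()                        # (6.6)
se_b2 = np.sqrt(var_b2)                                   # (6.7)

print(b1, b2, se_b1, se_b2)
```

Note that the true σ² is unknown, so the sample estimate σ̂² from (6.8) is substituted into (6.4) and (6.6); this is why the divisor n − 2 (the degrees of freedom) matters.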
6.3 The Properties of OLS Estimators
Why Ordinary Least Squares (OLS)?
The OLS method is widely used because it has some very strong theoretical properties, which are known as the Gauss-Markov theorem:
◼ Gauss-Markov theorem: given the assumptions of the classical linear regression model, the OLS estimators, in the class of linear unbiased estimators, have minimum variance; that is, they are BLUE (best linear unbiased estimators).
That is, the OLS estimators b1 and b2 are:
◼ 1. Linear: they are linear functions of the random variable Y.
◼ 2. Unbiased: E(b1) = B1, E(b2) = B2, E(σ̂²) = σ².
◼ 3. Minimum variance: within the class of linear unbiased estimators.
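The unbiasedness property E(b2) = B2 can be checked by Monte Carlo. In the sketch below, the "true" PRF coefficients B1 = 1.5, B2 = 0.8 and the fixed X design are invented for illustration; the average of the slope estimates over many repeated samples should land close to the assumed B2:

```python
# A minimal Monte Carlo sketch (assumed parameters) of unbiasedness: E(b2) = B2.
import numpy as np

rng = np.random.default_rng(0)
B1, B2 = 1.5, 0.8                  # assumed "true" PRF coefficients
X = np.linspace(1.0, 10.0, 20)     # fixed (nonstochastic) regressor, per A6.1
x = X - X.mean()

slopes = []
for _ in range(5000):
    u = rng.normal(0.0, 1.0, size=X.size)   # disturbances satisfying A6.2-A6.4
    Y = B1 + B2 * X + u                     # one sample from the PRF
    slopes.append((x * (Y - Y.mean())).sum() / (x**2).sum())

print(np.mean(slopes))             # should be close to B2 = 0.8
```

Each individual b2 misses B2 because of the stochastic μ term, but the estimates are centered on B2 — which is exactly what "unbiased" means.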