Figure 2.1a Probability distribution of food expenditure y given income x = $1000
Figure 2.1b Probability distributions of food expenditures y given incomes x = $1000 and x = $2000
The simple regression function
E(y|x) = β1 + β2x
Figure 2.2 The economic model: a linear relationship between average per person food expenditure and income
Slope of regression line
β2 = ΔE(y|x)/Δx, where "Δ" denotes "change in"
Figure 2.3 The probability density function for y at two levels of income

2.2.1 Introducing the Error Term
The random error term is defined as
e = y − E(y|x) = y − β1 − β2x
Rearranging gives
y = β1 + β2x + e
where y is the dependent variable and x is the independent variable.
The expected value of the error term, given x, is
E(e|x) = E(y|x) − β1 − β2x = 0
The mean value of the error term, given x, is zero.
Figure 2.4 Probability density functions for e and y
Figure 2.5 The relationship among y, e and the true regression line
Figure 2.6 Data for food expenditure example
2.3.1 The Least Squares Principle
The fitted regression line is
ŷi = b1 + b2xi
The least squares residual is
êi = yi − ŷi = yi − b1 − b2xi
Figure 2.7 The relationship among y, ê and the fitted regression line
Any other fitted line ŷi* = b1* + b2*xi has residuals êi* = yi − ŷi*.
The least squares line has the smaller sum of squared residuals:
if SSE = Σêi² and SSE* = Σ(êi*)², then SSE < SSE*.
Least squares estimates for the unknown parameters β1 and β2 are obtained by minimizing the sum of squares function
S(β1, β2) = Σ(yi − β1 − β2xi)²
The Least Squares Estimators
b2 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
b1 = ȳ − b2x̄
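A minimal sketch of these two formulas in Python with NumPy. The data here are made up for illustration; they are not the text's food expenditure sample.

```python
import numpy as np

# Illustrative data (not the text's food expenditure sample)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least squares estimators:
#   b2 = sum((x - xbar)*(y - ybar)) / sum((x - xbar)^2)
#   b1 = ybar - b2 * xbar
xbar, ybar = x.mean(), y.mean()
b2 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
b1 = ybar - b2 * xbar
print(b1, b2)
```

The same estimates come out of any standard least squares routine (e.g. `np.polyfit(x, y, 1)`), since both minimize the same sum of squares function.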
2.3.2 Estimates for the Food Expenditure Function
A convenient way to report the values for b1 and b2 is to write out the estimated or fitted regression line:
ŷ = 83.42 + 10.21x
Figure 2.8 The fitted regression line
2.3.3 Interpreting the Estimates
The value b2 = 10.21 is an estimate of β2, the amount by which weekly expenditure on food per household increases when household weekly income increases by $100. Thus, we estimate that if income goes up by $100, expected weekly expenditure on food will increase by approximately $10.21.
Strictly speaking, the intercept estimate b1 = 83.42 is an estimate of the weekly food expenditure for a household with zero income.
2.3.3a Elasticities
Income elasticity is a useful way to characterize the responsiveness of consumer expenditure to changes in income. The elasticity of a variable y with respect to another variable x is
ε = (percentage change in y)/(percentage change in x) = (Δy/y)/(Δx/x) = (Δy/Δx)·(x/y)
In the linear economic model given by (2.1) we have shown that
ΔE(y)/Δx = β2
The elasticity of mean expenditure with respect to income is
ε = (ΔE(y)/E(y))/(Δx/x) = (ΔE(y)/Δx)·(x/E(y)) = β2·x/E(y)
A frequently used alternative is to calculate the elasticity at the "point of the means" (x̄, ȳ), ε̂ = b2(x̄/ȳ), because it is a representative point on the regression line.
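As a sketch of the point-of-the-means calculation: the slope estimate b2 = 10.21 is taken from the text, but the sample means used below are assumed values for illustration (they are not stated in this excerpt).

```python
# Elasticity at the "point of the means": eps_hat = b2 * (xbar / ybar)
b2 = 10.21       # slope estimate reported in the text
xbar = 19.60     # assumed sample mean income (in $100 units) -- not stated in this excerpt
ybar = 283.57    # assumed sample mean weekly food expenditure -- not stated in this excerpt
eps_hat = b2 * xbar / ybar
print(round(eps_hat, 2))
```

An elasticity below 1 would say that food is income-inelastic at the means: a 1% rise in income raises mean food expenditure by less than 1%.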
2.3.3b Prediction
Suppose that we wanted to predict weekly food expenditure for a household with a weekly income of $2000. This prediction is carried out by substituting x = 20 into our estimated equation to obtain
We predict that a household with a weekly income of $2000 will spend $287.61 per week on food.
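The substitution step can be sketched directly. Note that the rounded coefficients give 287.62; the text's 287.61 comes from carrying more decimal places in the estimates.

```python
# Predict weekly food expenditure at income x = 20 ($2000 per week, in $100 units)
b1, b2 = 83.42, 10.21        # rounded estimates from the fitted line
x = 20
y_hat = round(b1 + b2 * x, 2)
print(y_hat)  # 287.62 with rounded estimates (the text's 287.61 uses unrounded ones)
```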
2.3.3c Examining Computer Output
Figure 2.9 EViews Regression Output
2.3.4 Other Economic Models
The "log-log" model
ln(y) = β1 + β2 ln(x)
2.4.1 The estimator b2
2.4.2 The Expected Values of b1 and b2
We will show that if our model assumptions hold, then E(b2) = β2, which means that the estimator is unbiased.
Writing b2 = β2 + Σwiei, where wi = (xi − x̄)/Σ(xi − x̄)², we can find the expected value of b2 using the fact that the expected value of a sum is the sum of expected values:
E(b2) = E(β2 + Σwiei) = β2 + Σwi E(ei) = β2
using E(ei) = 0 and the fact that the wi depend only on the (fixed) values of x.
2.4.3 Repeated Sampling
The variance of b2 is defined as
var(b2) = E[b2 − E(b2)]²
Figure 2.10 Two possible probability density functions for b2
2.4.4 The Variances and Covariances of b1 and b2
If the regression model assumptions SR1-SR5 are correct (assumption SR6 is not required), then the variances and covariance of b1 and b2 are:
var(b1) = σ²[Σxi² / (N Σ(xi − x̄)²)]
var(b2) = σ² / Σ(xi − x̄)²
cov(b1, b2) = σ²[−x̄ / Σ(xi − x̄)²]
The larger the error variance σ², the greater the uncertainty there is in the statistical model, and the larger the variances and covariance of the least squares estimators.
The larger the sum of squares, Σ(xi − x̄)², the smaller the variances of the least squares estimators and the more precisely we can estimate the unknown parameters.
The larger the sample size N, the smaller the variances and covariance of the least squares estimators.
The larger the term Σxi² is, the larger the variance of the least squares estimator b1.
The absolute magnitude of the covariance increases with the magnitude of the sample mean x̄, and the covariance has a sign opposite to that of x̄.
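A quick Monte Carlo sketch of the second point above, which Figure 2.11 illustrates: holding σ², N, and the parameters fixed, the sampling variance of b2 falls as the variation in x rises. All parameter values here are made up for illustration.

```python
import numpy as np

# var(b2) = sigma^2 / sum((x - xbar)^2), so more x variation -> more precise b2.
rng = np.random.default_rng(0)
beta1, beta2, sigma, n, reps = 1.0, 0.5, 1.0, 50, 2000

def b2_draws(x):
    """Repeatedly draw y = beta1 + beta2*x + e and return the least squares b2 estimates."""
    xbar = x.mean()
    sxx = np.sum((x - xbar) ** 2)
    draws = np.empty(reps)
    for r in range(reps):
        y = beta1 + beta2 * x + rng.normal(0.0, sigma, n)
        draws[r] = np.sum((x - xbar) * (y - y.mean())) / sxx
    return draws

x_low = np.linspace(4.5, 5.5, n)    # low x variation, as in panel (a)
x_high = np.linspace(0.0, 10.0, n)  # high x variation, as in panel (b)
print(b2_draws(x_low).var(), b2_draws(x_high).var())  # first variance is much larger
```

With these settings Σ(xi − x̄)² is roughly 100 times larger for `x_high`, so the empirical variance of b2 shrinks by about the same factor.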
Figure 2.11 The influence of variation in the explanatory variable x on precision of estimation: (a) Low x variation, low precision; (b) High x variation, high precision
The estimators b1 and b2 are "best" when compared to similar estimators, those which are linear and unbiased. The Theorem does not say that b1 and b2 are the best of all possible estimators.
The estimators b1 and b2 are best within their class because they have the minimum variance. When comparing two linear and unbiased estimators, we always want to use the one with the smaller variance, since that estimation rule gives us the higher probability of obtaining an estimate that is close to the true parameter value.
In order for the Gauss-Markov Theorem to hold, assumptions SR1-SR5 must be true. If any of these assumptions are not true, then b1 and b2 are not the best linear unbiased estimators of β1 and β2.
The Gauss-Markov Theorem does not depend on the assumption of normality (assumption SR6).
In the simple linear regression model, if we want to use a linear and unbiased estimator, then we have to do no more searching. The estimators b1 and b2 are the ones to use. This explains why we are studying these estimators and why they are so widely used in research, not only in economics but in all social and physical sciences as well.
The Gauss-Markov theorem applies to the least squares estimators. It does not apply to the least squares estimates from a single sample.
If we make the normality assumption (assumption SR6 about the error term), then the least squares estimators are normally distributed:
b1 ~ N(β1, var(b1)),  b2 ~ N(β2, var(b2))
The variance of the random error ei is
var(ei) = σ² = E[ei − E(ei)]² = E(ei²)
if the assumption E(ei) = 0 is correct.
Since the "expectation" is an average value, we might consider estimating σ² as the average of the squared errors,
σ̂² = Σei²/N
Recall that the random errors are
ei = yi − β1 − β2xi
The least squares residuals are obtained by replacing the unknown parameters by their least squares estimates,
êi = yi − ŷi = yi − b1 − b2xi
There is a simple modification that produces an unbiased estimator, and that is
σ̂² = Σêi²/(N − 2)
Replace the unknown error variance σ² in (2.14)-(2.16) by σ̂² to obtain the estimated variances and covariance. The square roots of the estimated variances are the "standard errors" of b1 and b2:
se(b1) = √var̂(b1),  se(b2) = √var̂(b2)
The estimated variances and covariances for a regression are arrayed in a rectangular array, or matrix, with variances on the diagonal and covariances in the "off-diagonal" positions.
For the food expenditure data the estimated covariance matrix is:
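How the pieces fit together can be sketched end to end: compute the residuals, form the unbiased estimate σ̂² = Σê²/(N − 2), plug it into (2.14)-(2.16), and arrange the results in a matrix. The data below are synthetic, not the food expenditure sample, so the numbers will not match the text's matrix.

```python
import numpy as np

# Synthetic data (illustrative only -- not the food expenditure sample)
rng = np.random.default_rng(1)
n = 40
x = rng.uniform(5, 30, n)
y = 80.0 + 10.0 * x + rng.normal(0.0, 40.0, n)

# Least squares estimates
xbar = x.mean()
sxx = np.sum((x - xbar) ** 2)
b2 = np.sum((x - xbar) * (y - y.mean())) / sxx
b1 = y.mean() - b2 * xbar

# Unbiased estimate of the error variance: sigma2_hat = sum(e_hat^2) / (N - 2)
resid = y - (b1 + b2 * x)
sigma2_hat = np.sum(resid ** 2) / (n - 2)

# Estimated variances and covariance, formulas (2.14)-(2.16)
var_b1 = sigma2_hat * np.sum(x ** 2) / (n * sxx)
var_b2 = sigma2_hat / sxx
cov_b1b2 = -sigma2_hat * xbar / sxx

# Covariance matrix: variances on the diagonal, covariance off-diagonal
cov_matrix = np.array([[var_b1, cov_b1b2],
                       [cov_b1b2, var_b2]])
se_b1, se_b2 = np.sqrt(var_b1), np.sqrt(var_b2)
print(cov_matrix, se_b1, se_b2)
```

Note that with x̄ > 0 the estimated covariance comes out negative, consistent with the sign rule stated above.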
Figure 2A.1 The sum of squares function and the minimizing values b1 and b2
Let b2* = Σkiyi be any other linear estimator of β2.
Suppose that ki = wi + ci, where the ci are constants.