2007-04-03
As per the title.

All replies

2008-2-16 13:19:00

One of the assumptions of ordinary least squares regression is that the variance of the residuals is homogeneous across levels of the predicted values, also known as homoscedasticity. If the model is well fitted, there should be no pattern in the residuals plotted against the fitted values. If the variance of the residuals is non-constant, the residual variance is said to be "heteroscedastic." Below we illustrate graphical methods for detecting heteroscedasticity. A commonly used graphical method is to plot the residuals against the fitted (predicted) values. Below we use the /scatterplot subcommand to plot *zresid (standardized residuals) against *pred (the predicted values). We see that the pattern of the data points gets a little narrower towards the right end, an indication of mild heteroscedasticity.

    regression /dependent api00 /method=enter meals ell emer /scatterplot(*zresid *pred).
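
A closely related check (a sketch that is not part of the original post; it assumes the same api00/meals/ell/emer data, and the rootnames pred1 and res1 are only illustrative) is to save the predicted values and residuals and then plot the squared residuals against the predicted values, where a spread that widens or narrows with the fitted values again points to heteroscedasticity:

    regression /dependent api00 /method=enter meals ell emer /save pred(pred1) resid(res1).
    compute res1sq = res1**2.
    execute.
    graph /scatterplot(bivar)=pred1 with res1sq.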

2008-2-16 13:21:00
When heteroscedasticity is mild, OLS standard errors behave quite well (Long and Ervin 2000). However, when heteroscedasticity is severe, ignoring it may bias your standard errors and p values. The direction of the bias depends on the pattern of heteroscedasticity: p values may be too large or too small.

Sometimes the form of the heteroscedasticity is clear and can be modeled. More commonly, though, heteroscedasticity is a nuisance that cannot be modeled because its source is not well understood. In this case, the classic correction is the HC0 estimator proposed by Huber (1967) and White (1980). Although this estimator is correct in large samples, it is no better than OLS in small samples. MacKinnon and White (1985) discussed three improvements: HC1, HC2, and HC3. An evaluation by Long and Ervin (2000) suggests that HC3 is the best, especially in small samples.

It is possible to correct for heteroscedasticity using popular software.

In Stata, use the hc3 option of the reg command, e.g.:

    reg y x, hc3

In small samples this is better than the robust option, which implements HC1.

In SAS, the ACOV option of the REG procedure implements the HC0 correction, e.g.:

    proc reg;
      model y = x / acov;
    run;

However, ACOV only corrects the covariance matrix; it does not correct the standard errors. To get the corrected standard errors, take the square roots of the diagonal elements of the covariance matrix. Alternatively, Andrew Hayes has written a SAS macro that gives HC0 standard errors directly and implements the improved small-sample corrections HC1, HC2, and HC3.

SPSS has not implemented any heteroscedasticity correction, but again Hayes has written an SPSS macro that implements the HC0, HC1, HC2, and HC3 corrections.

LIMDEP implements the HC0 correction. In addition, Bob Kaufman has written the following LIMDEP code for implementing the HC3 correction:

    Title; Calculate MacKinnon and White's HC3 estimator of OLS Standard Errors $
    Namelist; iv = one, list of predictors; dv = name of dependent variable $
    Regress; lhs = dv; rhs = iv; res = resy $
    Matrix; xpxinv = <iv'iv>; bols = b $
    Create; resysq = resy^2 $
    Create; hii = qfr(iv, xpxinv); hc3 = resysq/(1-hii)^2 $
    Matrix; varhc3 = xpxinv*iv'[hc3]iv*xpxinv; stat(bols, varhc3) $

Now, how do you know if you should correct for heteroscedasticity? There are a number of tests for heteroscedasticity, so it seems natural to conduct a test and then use a correction if the test suggests heteroscedasticity. The trouble with this is that the tests often fail to detect heteroscedasticity, leading you to neglect the correction when it is actually needed. In simulations, Long and Ervin (2000) found that this possibility was quite serious. As a result, they recommended that "a test for heteroscedasticity should not be used to determine whether [an HC estimator] should be used." It is better to use an HC estimator whenever heteroscedasticity is suspected.
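
SPSS users who do not want to rely on a macro can also assemble the HC3 sandwich by hand in the MATRIX language. The block below is only a sketch of that idea (it is not the Hayes macro); it assumes the api00/meals/ell/emer variables from the earlier example and listwise-complete data:

    * Illustrative HC3 computation in the MATRIX language.
    matrix.
    get dat /variables = api00 meals ell emer /missing = omit.
    compute n   = nrow(dat).
    compute y   = dat(1:n, 1).
    compute x   = {make(n, 1, 1), dat(1:n, 2:4)}.    /* add an intercept column */
    compute xxi = inv(t(x)*x).                       /* (X'X)^-1 */
    compute b   = xxi*t(x)*y.                        /* OLS coefficients */
    compute e   = y - x*b.                           /* residuals */
    compute h   = diag(x*xxi*t(x)).                  /* leverages h_ii */
    compute w   = (e &/ (make(n, 1, 1) - h)) &** 2.  /* HC3 weights e_i^2/(1-h_ii)^2 */
    compute v   = xxi*(t(x)*mdiag(w)*x)*xxi.         /* sandwich covariance matrix */
    compute se  = sqrt(diag(v)).
    compute out = {b, se}.
    print out
      /title = "OLS coefficients and HC3 standard errors"
      /clabels = "b", "se(HC3)"
      /rlabels = "const", "meals", "ell", "emer".
    end matrix.

The HC3 standard errors can then be compared with the ordinary OLS standard errors; large differences are themselves a sign that heteroscedasticity matters for inference.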
References

Greene, W. H. (1997). Econometric Analysis (3rd edition). New York: Prentice-Hall.
Hayes, A. F., & Cai, L. (in review). "Heteroscedasticity-robust moderated multiple regression using heteroscedasticity-consistent standard error estimates." Manuscript submitted for publication.
Huber, P. J. (1967). "The behavior of maximum likelihood estimates under non-standard conditions." Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability 1: 221-233.
Long, J. S., & Ervin, L. H. (2000). "Using heteroscedasticity consistent standard errors in the linear regression model." The American Statistician 54: 217-224.
MacKinnon, J. G., & White, H. (1985). "Some heteroskedasticity consistent covariance matrix estimators with improved finite sample properties." Journal of Econometrics 29: 305-325.
White, H. (1980). "A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity." Econometrica 48: 817-838.
Source: 人大经济论坛 (http://www.pinggu.org); original thread: https://bbs.pinggu.org/thread-73944-1-1.html

2008-2-16 13:28:00

Heteroscedasticity: Testing and Correcting in SPSS

1) Introduction
2) Causes
3) Consequences
4) Detection: Specific Tests
5) Detection: General Tests
6) Solutions

2009-12-16 17:40:50
Wow~~~~ I don't understand any of this at all.