AIC and BIC
These are simply two different criteria for judging how good an estimated model is.
AIC: Akaike Information Criterion
BIC: Bayesian Information Criterion
I suggest the OP look both of them up; they can be found on Wikipedia. Links:
http://en.wikipedia.org/wiki/Akaike_information_criterion
http://en.wikipedia.org/wiki/Bayesian_information_criterion
Akaike's information criterion, developed by Hirotsugu Akaike under the name "an information criterion" (AIC) in 1971 and proposed in Akaike (1974) [1], is a measure of the goodness of fit of an estimated statistical model. It is grounded in the concept of entropy, in effect offering a relative measure of the information lost when a given model is used to describe reality. It can be said to describe the tradeoff between bias and variance in model construction, or, loosely speaking, between the precision and the complexity of the model.
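For reference, AIC is usually computed as AIC = 2k − 2·ln(L̂), where k is the number of estimated parameters and L̂ is the maximized likelihood. A minimal Python sketch (the function name is just for illustration):

```python
def aic(log_likelihood, k):
    """Akaike information criterion: 2*k - 2*ln(L_hat).

    log_likelihood -- maximized log-likelihood ln(L_hat) of the fitted model
    k              -- number of estimated parameters
    """
    return 2 * k - 2 * log_likelihood

# Example: a model with log-likelihood -120.5 and 3 parameters
print(aic(-120.5, 3))  # 247.0
```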
The AIC is not a test of the model in the sense of hypothesis testing; rather, it is a way of comparing models, a tool for model selection. Given a data set, several competing models may be ranked according to their AIC, with the one having the lowest AIC being the best. From the AIC values one may infer, for example, that the top three models are roughly tied and the rest are far worse, but it would be arbitrary to assign a cutoff value above which a given model is 'rejected'.
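To make the "rank by AIC and take the lowest" idea concrete, here is a hedged sketch using statsmodels (assuming numpy and statsmodels are installed; the data, regressor names, and candidate models are made up). OLS fit results expose .aic and .bic attributes:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: y depends on x1 and x2 only; x3 is an irrelevant regressor.
rng = np.random.default_rng(0)
n = 200
x1, x2, x3 = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(size=n)

candidates = {
    "x1 only":      sm.add_constant(np.column_stack([x1])),
    "x1 + x2":      sm.add_constant(np.column_stack([x1, x2])),
    "x1 + x2 + x3": sm.add_constant(np.column_stack([x1, x2, x3])),
}

# Fit each candidate and list them from lowest (best) to highest AIC.
fits = {name: sm.OLS(y, X).fit() for name, X in candidates.items()}
for name, res in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(f"{name:15s}  AIC={res.aic:8.2f}  BIC={res.bic:8.2f}")
```

The model printed first (lowest AIC) is the preferred one; in this simulated setup that is usually "x1 + x2", since the extra regressor x3 improves the fit less than its AIC penalty costs.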
BIC works in much the same way as AIC. In other words, when choosing among the final set of candidate models, compare each model's AIC and BIC values and pick the model with the lower value.
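For completeness, BIC is computed as BIC = k·ln(n) − 2·ln(L̂), where n is the sample size, so for all but very small samples it penalizes extra parameters more heavily than AIC and tends to favor smaller models. A minimal sketch (again, the function name is only illustrative):

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k*ln(n) - 2*ln(L_hat).

    log_likelihood -- maximized log-likelihood of the fitted model
    k              -- number of estimated parameters
    n              -- number of observations
    """
    return k * math.log(n) - 2 * log_likelihood

# With n = 200, the per-parameter penalty ln(200) ~ 5.3 is stiffer
# than AIC's penalty of 2, so BIC leans toward simpler models.
print(bic(-120.5, 3, 200))  # ~256.9
```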