Covariates are interval-level independents in most programs. In the SPSS Logistic Regression dialog, however, all independents are entered as covariates; one then clicks the Categorical button to declare any of those entered as categorical.
Interaction terms. As in OLS regression, one can add interaction terms to the model (e.g., age*income). For continuous covariates, one simply creates a new variable that is the product of the two existing ones. For categorical variables, one also multiplies, but the factors are the dummy category codes shown in the Categorical Variables Codings table of SPSS output (e.g., race(1)*religion(2)). Because the codes are 1's and 0's, most values of the new variable will be 0's unless some products are recoded (e.g., setting 0*0 to 1).
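A minimal sketch of forming such product terms, using hypothetical data (the variable names and values are assumptions for illustration, not SPSS output):

```python
# Hypothetical dummy codes for two categorical variables, as they would
# appear in SPSS's Categorical Variables Codings table.
race_1 = [1, 0, 1, 0, 1]      # 1 = member of the race(1) category, else 0
religion_2 = [0, 1, 1, 0, 1]  # 1 = member of the religion(2) category, else 0

# The interaction term is the elementwise product of the two dummies;
# it equals 1 only for cases falling in BOTH categories.
race_x_religion = [r * g for r, g in zip(race_1, religion_2)]
print(race_x_religion)  # [0, 0, 1, 0, 1]

# Continuous case: age*income is likewise a simple product per case.
age = [35.0, 42.0, 29.0]
income = [4.1, 6.3, 3.8]  # hypothetical, in $10,000 units
age_x_income = [a * i for a, i in zip(age, income)]
```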
SPSS and SAS. In SPSS, select Analyze, Regression, Binary (or Multinomial) Logistic; select the dependent and the covariates; Continue; OK. SAS's PROC CATMOD computes both binary and multinomial logistic regression, whereas PROC LOGIST handles only binary (dichotomous) logistic regression. CATMOD uses a conventional model command, e.g., model wsat*supsat*qman=_response_ /nogls ml;. In the model command, nogls suppresses generalized least squares estimation and ml requests maximum likelihood estimation.
Sequential logistic regression is the analysis of nested models in which the researcher tests the control effects of a set of covariates. The logistic regression model is first run against the dependent for the full model, with both independents and covariates, and then run again with the block of independents dropped. If the chi-square difference is not significant, the researcher concludes that the independent variables are controlled by the covariates (that is, they have no effect once the effect of the covariates is taken into account). Alternatively, the nested model may contain just the independents, with the covariates dropped; in that case a finding of non-significance implies that the covariates have no control effect.
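The chi-square difference test above can be sketched as follows. The -2 log-likelihood values and degrees of freedom here are assumed for illustration; in practice they come from the model output (e.g., SPSS's -2 Log likelihood in the Model Summary table):

```python
# Hypothetical -2 log-likelihoods from two nested logistic models.
neg2LL_full = 210.4     # full model: independents + covariates (assumed value)
neg2LL_reduced = 218.9  # nested model: block of independents dropped (assumed value)
df_dropped = 3          # number of parameters removed with the block (assumed)

# The chi-square difference is the difference in -2LL between the nested
# and the full model, with df equal to the number of parameters dropped.
chi_sq_diff = neg2LL_reduced - neg2LL_full

# Chi-square critical value at alpha = .05 for df = 3.
CRITICAL_05_DF3 = 7.815
significant = chi_sq_diff > CRITICAL_05_DF3

print(round(chi_sq_diff, 1), significant)  # 8.5 True
```

Here the difference exceeds the critical value, so the dropped block of independents does contribute significantly beyond the covariates.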
Continuous covariates: When the logit is transformed into an odds ratio, it may be expressed as a percent increase in odds. For instance, consider the example of number of publications of professors (see Allison, 1999: 188). Let the logit coefficient for "number of articles published" be +.0737, where the dependent variable is "being promoted". The odds ratio corresponding to a logit of +.0737 is approximately 1.08 (e to the .0737 power). Therefore one may say, "each additional article published increases the odds of promotion by about 8%, controlling for other variables in the model." (Equivalently, the new odds are 108% of the original odds: one multiplies the original odds by 1.08. Note, however, that this is not the same as saying that the probability of promotion increases by 8%.) To take another example, let income be a continuous explanatory variable measured in tens of thousands of dollars, with a logit coefficient of 1.5 in a model predicting home ownership=1, no home ownership=0. A 1-unit (one $10,000) increase in income is then associated with a 1.5 increase in the log odds of home ownership. However, it is more intuitive to convert to an odds ratio: exp(1.5) = 4.48, allowing one to say that a unit ($10,000) increase in income multiplies the odds of the event ownership=1 by about 4.5.
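The two conversions above can be checked directly, exponentiating each logit coefficient to obtain its odds ratio:

```python
import math

# Logit coefficients from the two examples above.
b_articles = 0.0737  # per additional article published
b_income = 1.5       # per $10,000 of income

# Odds ratio = e raised to the logit coefficient.
or_articles = math.exp(b_articles)
or_income = math.exp(b_income)

print(round(or_articles, 3))  # 1.076 -> roughly an 8% increase in odds
print(round(or_income, 2))    # 4.48 -> odds multiplied about 4.5 times
```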