Dear stats experts:
My name is Cortney Warren and I am a doctoral student in Clinical Psychology at A&M, currently finishing my dissertation. I am conducting multivariate analyses and trying to determine whether there is a way to get group centroid values on a canonical variate/discriminant function (with 3 DVs) by group (categorical IV: participant ethnicity, 3 levels) after adjusting for covariates (pre-test scores). In SPSS, the DISCRIMINANT procedure doesn't seem to allow covariates, and the MANOVA/GLM procedure doesn't give adjusted centroid values (it will give group centroid values WITHOUT the covariate, but not WITH covariates, as far as I can tell).
Does anyone know how to get group centroid values after controlling for a covariate using SPSS? If not, is there another program that will calculate them for me?
Thank you for your time,
Cortney S. Warren
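One workable route outside SPSS, sketched below in Python. Everything in the sketch (data, variable names) is hypothetical, and the approach, residualizing the DVs on the pre-test covariate and then running the discriminant analysis on the residuals, is one common way to get covariate-adjusted centroids, not necessarily the only defensible one:

```python
import numpy as np
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LinearRegression

# hypothetical data: 3 DVs, one pre-test covariate, ethnicity with 3 levels
rng = np.random.default_rng(0)
n = 150
pretest = rng.normal(size=(n, 1))
ethnicity = rng.integers(0, 3, size=n)
dvs = pretest + rng.normal(size=(n, 3))  # three dependent variables

# Step 1: partial the covariate out of each DV (residualize on pre-test)
resid = dvs - LinearRegression().fit(pretest, dvs).predict(pretest)

# Step 2: discriminant analysis on the covariate-adjusted DVs
lda = LinearDiscriminantAnalysis(n_components=2).fit(resid, ethnicity)
scores = lda.transform(resid)  # canonical discriminant scores

# Step 3: adjusted group centroids = mean discriminant score per group
centroids = pd.DataFrame(scores, columns=["func1", "func2"]).groupby(ethnicity).mean()
print(centroids)
```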
Hi,
I am using two psychological well-being tests. One test has 5 questions
(scored from 0 to 5) and the other has 12 questions (scored from 0 to 3).
I have two objectives.
1) Compare the two tests for similarity (so that if they are similar, we
can use the test with 5 questions, which will be easier to administer).
2) To get the factors (dimensions) for both tests.
Analysis:
Correlation of total scores for both tests, and of all items.
Generally, Spearman's correlation is used for categorical data, but can we use
Pearson as well? (A sketch of both appears after these questions.)
Can I use ROC to get sensitivity and specificity? (A limitation of the study is
that we do not have psychological diagnoses from doctors.)
What if one of the tests is validated?
What if neither test is validated?
To get the factors, is principal component analysis better or confirmatory factor
analysis? How different are the two results?
Would it be wrong to do principal component analysis (i.e., exploratory)?
Is there a facility in SPSS for confirmatory factor analysis?
How can one use it?
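A minimal sketch of the correlation and ROC questions in Python (all data and the cutoff are hypothetical; note that ROC is only defensible if one test is already validated and can serve as the reference standard):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import roc_auc_score, roc_curve

# hypothetical total scores for the two tests
rng = np.random.default_rng(1)
total5 = rng.integers(0, 26, size=200)                        # 5 items x 0-5 -> 0-25
total12 = np.clip(total5 + rng.integers(-4, 5, 200), 0, 36)   # 12 items x 0-3 -> 0-36

# both coefficients are easy to report side by side; with ordinal totals
# Spearman is the safer default, while Pearson adds a linearity assumption
print("Pearson: ", pearsonr(total5, total12))
print("Spearman:", spearmanr(total5, total12))

# ROC needs a reference standard; here we pretend the 12-item test is the
# validated one and dichotomize it at a (hypothetical) published cutoff
reference = (total12 >= 12).astype(int)
print("AUC:", roc_auc_score(reference, total5))
fpr, tpr, thresholds = roc_curve(reference, total5)  # sensitivity = tpr, specificity = 1 - fpr
```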
Dr Veena,
I will address one of your queries below: is principal component analysis better or worse than confirmatory factor analysis?
Firstly, I think you are confusing the type of extraction, namely principal components analysis (PCA) versus principal axis factoring (PAF), with confirmatory versus exploratory factor analysis (the purpose for which you undertake the factor analysis). When comparing principal components analysis with other types of extraction, which one is best depends on the purpose for which you are running the analysis. Factor analysis (PAF in SPSS) and PCA differ in the values they place on the diagonal of the correlation matrix. PCA assigns 1s to the diagonal (in other words, it considers the total variance of the items), whereas PAF assigns squared multiple correlations to the diagonal, so it analyses common variance and leaves out unexplained variance. In other words, PAF extracts only the variance the items share, while PCA extracts both shared and unique variance.
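To make the diagonal distinction concrete, here is a small contrast in Python with simulated data. Note that sklearn's FactorAnalysis is a maximum-likelihood common-factor model rather than PAF itself, but it illustrates the same common-versus-total-variance split:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# simulated responses: 300 people, 6 items driven by two latent factors
rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 2))
pattern = np.array([[1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]])
items = latent @ pattern + rng.normal(scale=0.5, size=(300, 6))

# components model: unities on the diagonal, so total variance is analysed
pca = PCA(n_components=2).fit(items)
print(np.round(pca.components_, 2))

# common-factor model: unique (noise) variance is estimated separately,
# so the factors carry only the variance the items share
fa = FactorAnalysis(n_components=2).fit(items)
print(np.round(fa.components_, 2))
print(np.round(fa.noise_variance_, 2))  # item-specific variance left out
```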
While you can attempt to force the data into a set number of factors, you do not have the flexibility (unless I am mistaken here) to assign items to a particular factor and then test how well they load. You have limited control over what you can and cannot do with traditional exploratory factor analysis.
I think that Amos (sold by SPSS) is much better suited to your needs than SPSS factor analysis.
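For readers without Amos, here is a minimal confirmatory sketch using the third-party Python package semopy. The model string, the file name, and the item names q1-q5 are all hypothetical:

```python
import pandas as pd
import semopy  # third-party SEM package: pip install semopy

# CFA specification: five items assumed to load on one well-being factor
desc = "wellbeing =~ q1 + q2 + q3 + q4 + q5"

data = pd.read_csv("responses.csv")  # hypothetical file with columns q1..q5
model = semopy.Model(desc)
model.fit(data)

print(model.inspect())             # loadings, variances, standard errors
print(semopy.calc_stats(model).T)  # fit indices such as CFI and RMSEA
```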
I am running a factor analysis in SPSS. Now, since some of the variables
I am interested in are dichotomous, I am trying to get around the
limitations on what sort of data is suitable for FA by creating a
Spearman's correlation matrix and using that as input for the FA procedure.
Can anyone tell me whether that can be considered acceptable practice? After
all, if I'm working from correlations, and if the correlations are
nonparametric and therefore appropriate for the data, the type of variable
should not have an effect on the results, right?
Many thanks for your help, it is very much appreciated.
Nicola Knight
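The matrix-input idea can be prototyped directly: extract principal components from the Spearman matrix by eigendecomposition. A sketch with simulated dichotomous data follows (in SPSS itself, the equivalent route is feeding the correlation matrix to FACTOR through its matrix-input facility):

```python
import numpy as np
import pandas as pd

# hypothetical dichotomous item data (8 items, 200 respondents)
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.integers(0, 2, size=(200, 8)),
                  columns=[f"v{i}" for i in range(1, 9)])

# Spearman correlation matrix as the input to the factoring step
R = df.corr(method="spearman")

# principal components straight from the matrix: the eigenvectors of R,
# scaled by the square roots of the eigenvalues, are the loadings
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
loadings = eigvecs * np.sqrt(eigvals)

print(pd.DataFrame(loadings[:, :2], index=R.columns,
                   columns=["comp1", "comp2"]).round(2))
```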
Hi
For dichotomous variables, I'd rather use Kendall's tau correlation
coefficient, or even tetrachoric correlation coefficients.
Regards,
Marta
biostatistics@terra.es
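Exact tetrachoric estimation requires the bivariate-normal likelihood (e.g. psych::tetrachoric in R). As a rough illustration only, the classic cosine approximation can be computed from the 2x2 table; the sketch below uses made-up data and breaks down if any cell count is zero:

```python
import numpy as np
import pandas as pd

def tetrachoric_approx(x, y):
    """Cosine approximation to the tetrachoric correlation of two
    dichotomous variables; assumes no empty cells in the 2x2 table."""
    t = pd.crosstab(x, y).to_numpy().astype(float)
    a, b, c, d = t[0, 0], t[0, 1], t[1, 0], t[1, 1]
    return np.cos(np.pi / (1.0 + np.sqrt((a * d) / (b * c))))

# hypothetical usage on two related binary items
rng = np.random.default_rng(4)
x = rng.integers(0, 2, 200)
y = np.where(rng.random(200) < 0.7, x, 1 - x)  # agrees with x ~70% of the time
print(round(tetrachoric_approx(x, y), 3))
```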
Thanks for all the replies. I guess I should have provided more detail to make my need clear; sorry I didn't last time.
I had to use two different scales with various subscales (behavioural dimensions) to cover all the variables I need to test my hypotheses; however, these scales have overlapping dimensions as well.
What I need to do is reduce all the scales to a reasonable number of factors/variables so that I can avoid collinearity problems in the later steps of the analysis. The scales are Likert-style; some consist of more questions than others, and each scale has its own answer format (0-2, 1-3, 1-5). Therefore I have numeric intervals with different ranges.
I have other independent variables which might be correlated with the subscales. I need to test them as well.
I have been trying factor analysis to reduce these subscales, but I am not sure whether it is meaningful to include all these scales in raw form without any normalization.
My variables do not lack independence, but those subscales have high correlations. I had to use these scales because I couldn't find a single scale that covers all the dimensions I need. So I feel obliged to reduce the number of variables derived from those subscales in order to obtain a more meaningful model for the analysis.
For example, two of the scales have a subscale measuring hyperactivity; one ranges from 0 to 10 and the other from 3 to 12. I am looking for a statistically meaningful way to arrive at one numeric variable for hyperactivity (and for related items, if any). Do I need to normalize these subscales before entering them into Data Reduction, and if factor analysis is the method, what combination of options do you suggest?
The behaviours are expected to affect peer status, and including the subscales as-is would cause problems due to the situation explained above.
So this is the problem I am facing now.
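One common route, sketched below in Python with made-up numbers: z-score each subscale so the different response ranges sit on a common metric, then either average within a construct or run the factor analysis. Note that factor analysis of the correlation matrix standardizes implicitly, so normalization mainly matters if you combine scores directly:

```python
import pandas as pd
from scipy.stats import zscore

# hypothetical hyperactivity subscale totals on different ranges
df = pd.DataFrame({
    "hyper_a": [2, 7, 5, 9, 1, 6],    # scale A: possible range 0-10
    "hyper_b": [4, 11, 8, 12, 3, 9],  # scale B: possible range 3-12
})

# z-scoring removes the range/metric differences between the scales
z = df.apply(zscore)

# one composite hyperactivity score per participant; a factor score from
# FA on the standardized items would be the model-based alternative
df["hyper_combined"] = z.mean(axis=1)
print(df.round(2))
```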