
[ODP] - Parents’ Income is a Poor Predictor of SAT Score

#21
That sounds reasonable to me. A few words of caution about the parts of the article dealing with interpretation of the MR results.
#22
For example, the main claim of the paper is that parent income is a poor "predictor" of SAT scores. Now, if a null direct effect hides a non-zero effect inside the common/indirect effect, then the total effect of parent income is unknown. But the author can make some assumptions, as I said. Marks (2013) gives a good illustration. On page 355 he gives the sibling correlations for income (.13-.21), education (.53-.55), occupation (.31-.37) and test scores (.47-.48). Even if income is less reliable (the author never says whether these correlations are corrected for attenuation or not), I doubt it makes much difference. Whatever the case, if family background has a big impact on income, a large sibling correlation is expected. I infer from this that if family SES has little impact on children's future income, then family income as a proxy for family SES will probably have little impact on SAT, education or occupation, because "family income" is a poor measure of family background. In that case, it is unlikely that parent income has a big impact on SAT through an indirect effect on, say, the child's educational attainment. Thus it would not contradict the main idea of the article.

Marks, G. (2013). Reproduction of economic inequalities: Are the figures for the United States and United Kingdom too high? Comparative Social Research, 30, 341-363.

EDIT :

See here.

Herrnstein's Syllogism: Genetic and Shared Environmental Influences on IQ, Education, and Income.

Quote:The heritability estimates from model 7 are 0.64 for IQ, 0.68 for education, and 0.42 for income. The majority of the total genetic variation in IQ (55%) and education (75%) came from the common genetic variable, with the rest coming from the specific genetic factors. In contrast, the majority of the total genetic variance for income came from the specific genetic factor (71%). The common shared environmental factor accounted for 0.08 of the variance in income, 0.18 in education, and 0.23 in IQ.

This, once again, shows that income is a poor reflection of family (environmental) background, as seen in the low shared-environment effect (although you need to take the square root to get the correct effect size).
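To see why the square root matters, here is a quick sketch converting the variance components quoted above into standardized effect sizes (the 0.08 and 0.23 figures are the shared-environment variances for income and IQ from the quote):

```r
# Shared-environment variance components from the quoted study:
# 0.08 for income, 0.23 for IQ. Taking the square root converts a
# variance component into a standardized effect (path coefficient).
c_income <- sqrt(0.08)
c_iq     <- sqrt(0.23)
round(c(income = c_income, iq = c_iq), 2)  # income ~0.28, IQ ~0.48
```

So the gap between income and IQ is smaller on the effect-size scale than on the variance scale, but income still comes out clearly lower.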
#23
Speaking of Herrnstein's Syllogism: has anyone been able to find his 1971 newspaper article? I cannot find it online. If anyone has it, please upload it here or email it to me.
#24
@menghu1001

Thanks for showing me all that and for your patience.

I decided to add extensive commonality analysis to address multicollinearity more formally, and since the common effects of income are non-zero, I did acknowledge the possibility that income could dominate.

We are talking about multicollinearity, which is not the same as variable interactions, as the source you provided explained:

Quote:There are important differences between “common effects” (explained variance associated with combinations of two or more of the predictors) and “interaction effects.” Common effects, or nonunique contributions to explained variance in the dependent variable, are those which arise because the two independent variables are correlated, so that their effects on the dependent variable cannot be told apart or separated statistically. Interaction effects, on the other hand, occur due to particular patterns of the values of independent variables (correlated or not) operating in combination. Specifically, interaction effects reflect a contrast that combines different levels or values of at least two independent variables, and they indicate there is a significant effect in the data which is not a main effect. Interaction effects make no contribution to explained variance in the typical regression equation (not having been specified by the researcher and therefore mathematically excluded), while common effects are nearly always present. The function of commonality analysis is to ferret out these common effects so that they may be interpreted.


Since interacting variables can correlate, I’d say there is overlap, but what I previously said about multicollinearity still holds. You cited a paper co-written by Kim Nimon, and she co-wrote another study (http://journal.frontiersin.org/Journal/10.3389/fpsyg.2012.00044/full) that states more explicitly that commonality analysis addresses multicollinearity.

One copy of my amended study shows the changes in red.


Attached Files
.pdf   Parents Income is a Poor Predictor of SAT Scores - ODP new.pdf (Size: 1.33 MB / Downloads: 537)
.pdf   Parents Income is a Poor Predictor of SAT Scores - ODP - changes.pdf (Size: 1.33 MB / Downloads: 433)
#25
Any comments from Meng Hu on the new edition?
#26
Sorry it's so late; I was busy with some nasty work, not to mention the 2-3 additional days I spent reading about dominance analysis (DA) and learning R. I'll try to be faster next time.

So, I looked at your modified version labelled "changes". That's much easier for me. I took some hours/days of reflection before responding, because you made an effort to use tests for multicollinearity and commonality even though I think they add nothing relevant here. I believe they were not necessary, at least not for the question of attributing the indirect effects to particular variables.

One thing strikes me. You have written:

Quote:Of course, one could argue for the unlikely possibility that parents’ income constitutes the true root cause of most of its effects found in common with the other variables and, thereby, would be the most important influence. However, some statisticians do not even consider a variance inflation factor less than ten to be a concerning amount of multicollinearity (Neter et al, 1983).

It is not clear to me that there is a consensus. Most of the time, of course, you see VIF = 10 as the cutoff, but sometimes it is VIF = 5. So, I don't know. Given that uncertainty about the maximum acceptable VIF, the simple fact that "Bachelor" has a VIF of 8.52 looks somewhat dangerous to me.

Regardless, even if we accept VIF = 10, my previous comment makes clear that I don't believe the absence of multicollinearity would rule out the "unlikely possibility that parents’ income constitutes the true root cause of most of its effects". So, I see you have added a discussion of the indirect causal paths, but I disagree with your argument for dismissing it.
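To make the VIF discussion concrete, here is a minimal sketch of how VIF is computed, on simulated data (not the SAT dataset): each predictor is regressed on the others, and VIF_j = 1 / (1 - R²_j).

```r
# Minimal VIF sketch on simulated data (not the SAT dataset):
# regress each predictor on the remaining ones, then VIF_j = 1/(1 - R2_j).
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- 0.9 * x1 + sqrt(1 - 0.9^2) * rnorm(n)  # deliberately collinear with x1
x3 <- rnorm(n)                               # roughly independent predictor
X  <- data.frame(x1, x2, x3)

vif <- sapply(names(X), function(v) {
  r2 <- summary(lm(reformulate(setdiff(names(X), v), v), data = X))$r.squared
  1 / (1 - r2)
})
round(vif, 2)  # x1 and x2 show inflated VIFs; x3 stays near 1
```

Against either cutoff (5 or 10), x1 and x2 here land in the grey zone, which is exactly the kind of ambiguity the "Bachelor" variable presents.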

Concerning the passage you quoted about the difference between common effects and interaction effects, I don't understand your point. It seems to say that interaction effects cannot add to the explained variance for one simple reason: unless they are specified by the researcher, they are mathematically excluded from the model. Think about the purpose of an interaction term. Suppose the dependent variable is a (dichotomous) test item (so we use logistic regression), the main effects of A (race) and B (total score) are strong, but when you add the interaction A*B, A drops to zero and A*B is strong. That means the "race" effect is non-linear: if the interaction is positive, the race effect magnifies as the level of B increases. The conclusion is that if you apply a linear specification instead of allowing for that pattern, you produce misleading analyses. In general, when my interaction term has a meaningful coefficient, I usually see the R² increase. But that is not related to what we were talking about, i.e., the similarity between MR and SEM: I was arguing that SEM computes the total effect by adding the direct to the indirect effects. SEM deals with (multi)collinearity by making causal inferences about the correlations among the independent variables, which is not what DA does.
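A small simulated illustration of that point: when the slope of A genuinely depends on the level of B, adding the A*B term raises the R² (I use linear regression and made-up coefficients here for brevity, rather than the logistic item example):

```r
# Simulated illustration: the slope of A depends on the level of B,
# so a main-effects-only model understates the explained variance.
set.seed(2)
n <- 300
A <- rnorm(n)
B <- rnorm(n)
y <- 0.2 * A + 0.3 * B + 0.5 * A * B + rnorm(n)

fit_main <- lm(y ~ A + B)   # main effects only
fit_int  <- lm(y ~ A * B)   # adds the A:B interaction term
c(main = summary(fit_main)$r.squared,
  interaction = summary(fit_int)$r.squared)
```

The interaction model explains noticeably more variance, which is the sense in which a meaningful interaction coefficient "increases R²" once it is actually specified.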

Now, considering multicollinearity: in the past, I thought it was just a warning that MR coefficients will be biased to the extent that the independent variables in the model are highly redundant, so that their effects are difficult to separate from each other; that is, if VIF and tolerance both showed reasonable values, there was nothing to worry about. I was wrong. The problem is that the purpose of MR practically requires the variables to be correlated; otherwise there would be nothing to "partial out" in the first place. And that is why it is self-defeating: the very fact that your independent variables must be correlated means that the total effect is not estimated by the MR coefficient. The gist of multicollinearity diagnostics is only that the independent variables should not be correlated too much. Even when VIF and tolerance show good values, that merely reduces the problem of indirect effects somewhat; if the indirect effect is non-zero, the direct and total effects need not be equal. It is almost impossible to avoid correlation among predictors; the best-case scenario is when the predictors have very, very small correlations with each other.

In your situation, there is a lot of common effect, which is expected. Most MR analyses will have large common effects because, as I said, the purpose of MR is to "partial out" variables that are correlated with each other, and researchers choose predictors they believe to be correlated. So yes, in nearly 100% of cases you have collinearity; you can't avoid it. But the absence of multicollinearity does not mean there is no substantial indirect effect. In your case, the common effects are very large compared to the unique effects, whereas ideally you would want them to be small. You wrote: "Total common effects of independent variables tended to be much greater than unique effects. Unique effects of parents’ income were usually negligible, but the total common effects of income were generally between those of parents’ education and race. Otherwise, the order of variables is comparable to that of Figures 1 and 3, and parents’ education still seems to dominate." That is fine with me, but I don't see where you discuss the consequences of this finding for the main idea of the paper, namely that "parents income is a poor predictor of SAT scores".
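The unique/common split under discussion can be shown in miniature with two simulated correlated predictors; the formulas below are the standard two-predictor commonality decomposition, not your actual data:

```r
# Two-predictor commonality decomposition on simulated data:
# unique(x1) = R2(x1,x2) - R2(x2); unique(x2) = R2(x1,x2) - R2(x1);
# common = R2(x1,x2) - unique(x1) - unique(x2).
set.seed(3)
n  <- 500
x1 <- rnorm(n)
x2 <- 0.7 * x1 + sqrt(1 - 0.7^2) * rnorm(n)  # correlated with x1
y  <- 0.5 * x1 + 0.5 * x2 + rnorm(n)

r2   <- function(f) summary(f)$r.squared
full <- r2(lm(y ~ x1 + x2))
u1   <- full - r2(lm(y ~ x2))
u2   <- full - r2(lm(y ~ x1))
common <- full - u1 - u2
round(c(unique_x1 = u1, unique_x2 = u2, common = common, total = full), 3)
```

With correlated predictors the common component dominates the unique ones, which is exactly the pattern in your Table: the decomposition identifies the overlap, but it cannot by itself say which variable the overlap "belongs" to.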

In general, I'm not convinced by your application of DA. It's not what Azen & Budescu (2003) did. Furthermore, I'm not even sure about the purpose of the DA you applied. Showing that education has more common variance than income is good, but you haven't discussed the implications of this finding. Even so, it is only the first step: you should have continued and computed the dominance coefficients as well.

Anyway, here's my attempt. Using your data, of course.

The R syntax below applies to your analysis, row 3 in Table 3. Note that some portions are redundant: apparently several different pieces of code can give you the same output. I don't know R programming well, but it means you can use either of these alternatives for the same results. However, I have spent (i.e., wasted) several hours on the web trying to find how to compute confidence intervals and bootstrap in R under the dominance analysis framework; I can't help you with that (not yet).

install.packages('MBESS')
install.packages('yhat')
install.packages('miscTools')
install.packages('plotrix')

require(MBESS)
require(yhat)
require(miscTools)
require(plotrix)
require(boot)  # boot ships with R, no install needed

MENGHUshowsyouthepathtothedarkness<-read.csv("c:\\Users\\MENGHUisnothuman\\SPSS\\Parents Income is a Poor Predictor of SAT Scores - (nooffensebut 2014) supplement 3 - states continuous variables data.csv", header=TRUE)
cor(MENGHUshowsyouthepathtothedarkness)
lm.out=lm(satm~size+participation+year+native+Over100K+bachelors,data=MENGHUshowsyouthepathtothedarkness)
summary(lm.out)
summary(lm.out)$r.squared
rSquared(MENGHUshowsyouthepathtothedarkness$satm, lm.out$residuals)
stdEr(lm.out)
confint(lm.out)
effect.size(lm.out)
plot(lm.out)

calc.yhat(lm.out,prec=3)
regrOut<-calc.yhat(lm.out)
regrOut

boot.out<-boot(MENGHUshowsyouthepathtothedarkness,boot.yhat,100,lmOut=lm.out,regrout0=regrOut)
result<-booteval.yhat(regrOut,boot.out,bty="perc")

MENGHUthenastyboy<-regr(lm.out)
MENGHUthenastyboy$Beta_Weights
MENGHUthenastyboy$Structure_Coefficients
MENGHUthenastyboy$Commonality_Data

MENGHUthewickedgod<-commonalityCoefficients(MENGHUshowsyouthepathtothedarkness, "satm", list("size", "participation", "year", "native", "Over100K", "bachelors"), "F")
print (MENGHUthewickedgod)

rlw(MENGHUshowsyouthepathtothedarkness, "satm", c("size", "participation", "year", "native", "Over100K", "bachelors"))

MENGHUthebreaker<-aps(MENGHUshowsyouthepathtothedarkness, "satm", c("size", "participation", "year", "native", "Over100K", "bachelors"))
dominance(MENGHUthebreaker)


> MENGHUthenastyboy$Beta_Weights
size participation year native Over100K
0.15536052 -0.24416636 -0.03389326 0.20922148 0.04671277
bachelors
0.68415602

> MENGHUthenastyboy$Structure_Coefficients
size participation year native Over100K bachelors
[1,] -0.4478937 -0.9056144 0.04181908 0.5405896 0.5342355 0.9683267

> MENGHUthenastyboy$Commonality_Data
$CC
Coefficient
Unique to size 0.0154
Unique to participation 0.0120
Unique to year 0.0003
Unique to native 0.0286
Unique to Over100K 0.0003
Unique to bachelors 0.0426
Common to size, and participation 0.0013
Common to size, and year 0.0008
Common to participation, and year -0.0002
Common to size, and native -0.0084
Common to participation, and native -0.0016
Common to year, and native -0.0003
Common to size, and Over100K 0.0009
Common to participation, and Over100K 0.0009
Common to year, and Over100K -0.0003
Common to native, and Over100K 0.0008
Common to size, and bachelors -0.0097
Common to participation, and bachelors 0.1968
Common to year, and bachelors 0.0247
Common to native, and bachelors 0.0250
Common to Over100K, and bachelors 0.0804
Common to size, participation, and year -0.0009
Common to size, participation, and native -0.0002
Common to size, year, and native -0.0007
Common to participation, year, and native 0.0012
Common to size, participation, and Over100K -0.0017
Common to size, year, and Over100K -0.0007
Common to participation, year, and Over100K 0.0009
Common to size, native, and Over100K -0.0018
Common to participation, native, and Over100K 0.0059
Common to year, native, and Over100K 0.0020
Common to size, participation, and bachelors -0.0051
Common to size, year, and bachelors -0.0012
Common to participation, year, and bachelors 0.0871
Common to size, native, and bachelors 0.0027
Common to participation, native, and bachelors 0.0906
Common to year, native, and bachelors 0.0014
Common to size, Over100K, and bachelors -0.0043
Common to participation, Over100K, and bachelors 0.2503
Common to year, Over100K, and bachelors -0.0183
Common to native, Over100K, and bachelors -0.0176
Common to size, participation, year, and native 0.0002
Common to size, participation, year, and Over100K 0.0010
Common to size, participation, native, and Over100K -0.0003
Common to size, year, native, and Over100K 0.0004
Common to participation, year, native, and Over100K 0.0011
Common to size, participation, year, and bachelors 0.0065
Common to size, participation, native, and bachelors 0.0619
Common to size, year, native, and bachelors 0.0012
Common to participation, year, native, and bachelors 0.0336
Common to size, participation, Over100K, and bachelors 0.0446
Common to size, year, Over100K, and bachelors 0.0006
Common to participation, year, Over100K, and bachelors -0.0840
Common to size, native, Over100K, and bachelors 0.0055
Common to participation, native, Over100K, and bachelors 0.0064
Common to year, native, Over100K, and bachelors -0.0075
Common to size, participation, year, native, and Over100K -0.0005
Common to size, participation, year, native, and bachelors 0.0403
Common to size, participation, year, Over100K, and bachelors -0.0013
Common to size, participation, native, Over100K, and bachelors 0.0801
Common to size, year, native, Over100K, and bachelors -0.0003
Common to participation, year, native, Over100K, and bachelors -0.0408
Common to size, participation, year, native, Over100K, and bachelors -0.0448
Total 0.9038
% Total
Unique to size 1.71
Unique to participation 1.33
Unique to year 0.04
Unique to native 3.16
Unique to Over100K 0.03
Unique to bachelors 4.71
Common to size, and participation 0.15
Common to size, and year 0.09
Common to participation, and year -0.02
Common to size, and native -0.93
Common to participation, and native -0.17
Common to year, and native -0.04
Common to size, and Over100K 0.10
Common to participation, and Over100K 0.09
Common to year, and Over100K -0.03
Common to native, and Over100K 0.09
Common to size, and bachelors -1.07
Common to participation, and bachelors 21.77
Common to year, and bachelors 2.74
Common to native, and bachelors 2.76
Common to Over100K, and bachelors 8.90
Common to size, participation, and year -0.10
Common to size, participation, and native -0.03
Common to size, year, and native -0.08
Common to participation, year, and native 0.14
Common to size, participation, and Over100K -0.19
Common to size, year, and Over100K -0.08
Common to participation, year, and Over100K 0.10
Common to size, native, and Over100K -0.19
Common to participation, native, and Over100K 0.65
Common to year, native, and Over100K 0.22
Common to size, participation, and bachelors -0.56
Common to size, year, and bachelors -0.14
Common to participation, year, and bachelors 9.64
Common to size, native, and bachelors 0.30
Common to participation, native, and bachelors 10.02
Common to year, native, and bachelors 0.15
Common to size, Over100K, and bachelors -0.47
Common to participation, Over100K, and bachelors 27.70
Common to year, Over100K, and bachelors -2.03
Common to native, Over100K, and bachelors -1.94
Common to size, participation, year, and native 0.02
Common to size, participation, year, and Over100K 0.11
Common to size, participation, native, and Over100K -0.03
Common to size, year, native, and Over100K 0.05
Common to participation, year, native, and Over100K 0.13
Common to size, participation, year, and bachelors 0.72
Common to size, participation, native, and bachelors 6.85
Common to size, year, native, and bachelors 0.13
Common to participation, year, native, and bachelors 3.72
Common to size, participation, Over100K, and bachelors 4.93
Common to size, year, Over100K, and bachelors 0.07
Common to participation, year, Over100K, and bachelors -9.30
Common to size, native, Over100K, and bachelors 0.60
Common to participation, native, Over100K, and bachelors 0.71
Common to year, native, Over100K, and bachelors -0.83
Common to size, participation, year, native, and Over100K -0.05
Common to size, participation, year, native, and bachelors 4.46
Common to size, participation, year, Over100K, and bachelors -0.14
Common to size, participation, native, Over100K, and bachelors 8.86
Common to size, year, native, Over100K, and bachelors -0.03
Common to participation, year, native, Over100K, and bachelors -4.51
Common to size, participation, year, native, Over100K, and bachelors -4.96
Total 100.00

$CCTotalbyVar
Unique Common Total
size 0.0154 0.1659 0.1813
participation 0.0120 0.7292 0.7412
year 0.0003 0.0013 0.0016
native 0.0286 0.2355 0.2641
Over100K 0.0003 0.2576 0.2579
bachelors 0.0426 0.8048 0.8474

> rlw(MENGHUshowsyouthepathtothedarkness, "satm", c("size", "participation", "year", "native", "Over100K", "bachelors"))
[,1]
[1,] 0.04487804
[2,] 0.29797719
[3,] 0.01431806
[4,] 0.11342812
[5,] 0.11224995
[6,] 0.32090938

There are 6 (independent) variables, corresponding to the ones in my R code above, in the same order. In other words, [5,] is the relative weight for Over100K, which is 0.11224995.

Try typing this in Excel:

=0.04487804+0.29797719+0.01431806+0.11342812+0.11224995+0.32090938

The result should be:

0.903761

... which is exactly the model R² in row 3 of your Table 3. The relative weights add up to the model R² with no problem whatsoever.

Alternatively, you can use the following:

calc.yhat(lm.out,prec=3)
regrOut<-calc.yhat(lm.out)
regrOut

It will give you more output, e.g., b, Beta, r, rs, rs2, GenDom, Pratt, RLW, as well as the conditional dominance weights for each predictor (look for columns CD:0 to CD:5). See Table 1 in the documentation below.
http://www.dspace.rice.edu/bitstream/han...sequence=1

Same thing with respect to the dominance weights. See the syntax above (remember you need the yhat package), which should give you ...

> dominance(MENGHUthebreaker)
$DA

[it's a long list of numbers and submodels, so I skipped that]

$CD
size participation year native Over100K bachelors
CD:0 0.181302330 0.74120808 0.0015805287 0.26411243 0.2579401698 0.84741718
CD:1 0.067176029 0.47051315 0.0433006482 0.14784159 0.1898492816 0.57087167
CD:2 0.014284768 0.26580135 0.0317080406 0.07317284 0.1111116379 0.35551591
CD:3 0.007956338 0.13365199 0.0173035205 0.04312372 0.0548524230 0.20806838
CD:4 0.012381898 0.05140923 0.0052843464 0.03167920 0.0168449442 0.10605547
CD:5 0.015414170 0.01198023 0.0003424187 0.02858989 0.0003088299 0.04260971

$GD
size participation year native Over100K
0.04975259 0.27909400 0.01658658 0.09808661 0.10515121
bachelors
0.35508972

$CD stands for conditional dominance, $GD for general dominance. The latter gives your importance weights. You'll see the dominance weights do not equal the relative weights, although they are roughly similar. According to Tonidandel & LeBreton (2011), this is not surprising:

Quote:In our view, these two procedures are largely interchangeable with one another mainly because, in our experiences, the differences in the estimates of relative importance produced by these two approaches are miniscule (see Tables 1, 2). Although one might observe some differences in a particular sample, the actual population weights are likely highly similar. Evidence for this comes from some large scale simulation studies were the average difference between the dominance weights and relative weights was often in the third decimal place when collapsing across thousands of simulation runs (LeBreton et al. 2004; LeBreton and Tonidandel 2008; Tonidandel and LeBreton 2010). The similarity in results produced by these two estimates of relative importance is actually an important strength of both procedures. As it is impossible to actually determine the true relative importance of predictors, the fact that both of these analyses arrive at virtually identical answers, while based on different approaches to estimating relative importance, should lend more confidence regarding their ability to accurately partition variance.

They say that with fewer than 10 predictors it is better to use dominance weights. And notice that when you add up all of the GD coefficients, they sum to 0.903761, the same R² as above.

If you haven't noticed, I added plot(lm.out) to the code above. The function gives you some useful diagnostics, e.g., normal Q-Q plots, which you always need in multiple regression; I mean, you must check the residuals (even though 99.999999% of social scientists don't care about it) and make sure they look normal and don't deviate too much from the fitted line.

If you want my results for the 4th row of your Table 3...

MENGHUshowsyouthepathtothedarkness<-read.csv("c:\\Users\\MENGHUisnothuman\\SPSS\\Parents Income is a Poor Predictor of SAT Scores - (nooffensebut 2014) supplement 3 - states continuous variables data.csv", header=TRUE)
cor(MENGHUshowsyouthepathtothedarkness)
lm.out2=lm(satv~size+participation+year+native+Over60K+bachelors,data=MENGHUshowsyouthepathtothedarkness)
summary(lm.out2)
summary(lm.out2)$r.squared
rSquared(MENGHUshowsyouthepathtothedarkness$satv, lm.out2$residuals)
stdEr(lm.out2)
confint(lm.out2)
effect.size(lm.out2)
plot(lm.out2)

calc.yhat(lm.out2,prec=3)
regrOut2<-calc.yhat(lm.out2)
regrOut2

> regrOut2<-calc.yhat(lm.out2)
> regrOut2
$PredictorMetrics
b Beta r rs rs2 Unique Common CD:0 CD:1
size 0.000 0.033 -0.517 -0.533 0.284 0.001 0.267 0.268 0.101
participation -0.314 -0.252 -0.898 -0.925 0.856 0.014 0.793 0.807 0.494
year -0.164 -0.020 0.004 0.004 0.000 0.000 0.000 0.000 0.042
native 37.586 0.133 0.504 0.520 0.270 0.012 0.242 0.254 0.108
Over60K -19.351 -0.060 0.575 0.592 0.350 0.001 0.330 0.330 0.196
bachelors 227.407 0.740 0.947 0.976 0.952 0.064 0.833 0.898 0.580
Total NA NA NA NA 2.712 0.092 2.465 2.557 1.521
CD:2 CD:3 CD:4 CD:5 GenDom Pratt RLW
size 0.023 0.004 0.001 0.001 0.066 -0.017 0.070
participation 0.280 0.151 0.066 0.014 0.302 0.226 0.319
year 0.029 0.015 0.004 0.000 0.015 0.000 0.012
native 0.034 0.015 0.012 0.012 0.073 0.067 0.081
Over60K 0.095 0.040 0.011 0.001 0.112 -0.034 0.115
bachelors 0.358 0.220 0.126 0.064 0.374 0.701 0.345
Total 0.819 0.445 0.220 0.092 0.942 0.943 0.942

Relative weight analysis does not appear in your analysis, nor in the Nimon and colleagues (2008) paper on which you based it. But here you have it, page 24 (look for "rlw"):
http://cran.r-project.org/web/packages/yhat/yhat.pdf

See also:
http://cran.r-project.org/web/packages/p...lotrix.pdf
http://cran.r-project.org/web/packages/MBESS/MBESS.pdf
http://cran.r-project.org/web/packages/m...cTools.pdf

For the above, I also followed the instructions in this video:
https://www.youtube.com/watch?v=MC5-KGkK6PU

You can find more here:
Understanding the Results of Multiple Linear Regression: Beyond Standardized Regression Coefficients (Nimon, Oswald, 2013, see appendix)

More of Nimon's papers are at her Google Scholar page:
http://scholar.google.com/citations?user...AAAJ&hl=en

Apparently, you can even conduct this analysis in SPSS, for those interested:
Regression commonality analysis: demonstration of an SPSS solution (K Nimon). Multiple Linear Regression Viewpoints 36 (1), 10-17.

More on this at James LeBreton's website (I highly recommend it):
http://www1.psych.purdue.edu/~jlebreto/downloads.html

I found a lot of articles on DA; some of them seem to be mere replications of older ones, although you can still find applications of DA to multivariate MR or logistic regression. My recommendation is to read Azen & Budescu (2003) first; it's probably the best by far.

[OFFTOPIC//]

On my computer, I have thousands upon thousands of xls data files. I don't want to convert all of them into csv to work with R. I found the R package for xlsx, but I don't understand how to apply it. I followed the procedure but, as usual with scientists, they don't know how to explain things clearly.
http://cran.r-project.org/web/packages/xlsx/xlsx.pdf

So, if someone here can help me understand this, send me an email at mh19870410 (gmail) or post here, if you want. Thanks.

install.packages('xlsx')
require(xlsx)
# read.xlsx also needs to be told which sheet to read (sheetIndex or sheetName)
XLSX1<-read.xlsx("c:\\Users\\SPSS\\Parents Income is a Poor Predictor of SAT Scores - (nooffensebut 2014) supplement 3 - states continuous variables data.xlsx", sheetIndex=1, header=TRUE)
cor(XLSX1)

[//OFFTOPIC]

[OFFTOPIC 2//]

I wanted to get the bootstrap and CIs, but the code below did not give me what I wanted; perhaps I misunderstood some of the terms in the syntax. If you can help me with this, I'll redo it.

regrOut
result$tauDS
result$domBoot
plotCI.yhat(regrOut$PredictorMetrics[-nrow(regrOut$PredictorMetrics),],
            result$upperCIpm, result$lowerCIpm,
            pid=which(colnames(regrOut$PredictorMetrics) %in%
              c("Beta","rs","CD:0","CD:1","CD:2","CD:3","CD:4","CD:5","GenDom","Pratt","RLW") ==TRUE),
            nr=3, nc=3)
result$combCIpmDiff[,c("Beta","rs","CD:0","CD:1","CD:2","CD:3","CD:4","CD:5","GenDom","Pratt","RLW")]
plotCI.yhat(regrOut$APSRelatedMetrics[-nrow(regrOut$APSRelatedMetrics),-2],result$upperCIaps,result$lowerCIaps,nr=3,nc=2)
result$combCIapsDiff
result$combCIincDiff

plotCI.yhat(lm.out, upperCI, lowerCI, pid=1:ncol(lm.out), nr=2, nc=2)

[//OFFTOPIC 2]

Overall, DA, as fascinating as it looks, does not address the question of causality. It seems a nice tool if you want to look at the pattern of common effects, but adding the common to the unique effect is another, more difficult story. Making assumptions about the causal paths is necessary if you want to add the indirect to the direct effect. I have thought a lot about commonality analysis, but I don't see the point: since you can't disentangle direct and indirect effects with SEM + cross-sectional data, I don't see how you can do it in MR.

In short, DA, according to Budescu (1993), states that "One variable is more important than its competitor if its predictive ability exceeds the other's in all subset regressions". Note the "all subset". What it means is that the regression coefficients may depend on the subset. Table 3 in Azen & Budescu (2003) best illustrates the situation. They also wrote:

Quote:Although a predictor whose coefficient is relatively large will presumably have a relatively larger effect on the prediction of the criterion, in the presence of correlated predictors this effect can only be interpreted when the effects of all other predictors are kept constant. Therefore, if the predictors are correlated, it may not be meaningful to think of the change in predictor Xi while all other predictors remain constant because a change in one predictor will most likely result in a change in all predictors correlated with it (e.g., Mosteller & Tukey, 1977).

Thus, DA aims to get regression coefficients corrected for subset dependence. The same Budescu also said that "Most of the approaches reviewed earlier lead to conclusions that are model dependent and are not invariant to subset selection". There must be no ambiguity here. To make sure everyone understands what a subset is: given a regression model with 4 independent variables, X1, X2, X3, X4, the possible regression models are:

X1,
X2,
X3,
X4,
X1 and X2,
X1 and X3,
X1 and X4,
X2 and X3,
X2 and X4,
X3 and X4,
X1 and X2 and X3,
X1 and X3 and X4,
X1 and X2 and X4,
X2 and X3 and X4,
X1 and X2 and X3 and X4.

Thus, we have here 15 subset models. Sometimes a given predictor does not outperform the other (in terms of additional explained variance) in all of these subsets, and in that case there is no "complete dominance". But inference is still possible. Conditional dominance holds when a given variable outperforms the other within each model size (where "k" denotes the number of independent variables in the subset). The last type, general dominance, holds when a variable's average additional explained variance outperforms the other's; it is calculated by averaging the variable's incremental R² contributions over the subsets at each level of k, and then across the k levels. That is more or less what subset invariance means.
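Under those definitions, general dominance can be computed by brute force. Here is a sketch on simulated data (yhat's aps()/dominance() automate exactly this; the data and coefficients below are made up):

```r
# Brute-force general dominance on simulated data: for each predictor,
# take its incremental R^2 when added to every subset of the others,
# average within each subset size (conditional dominance), then average
# across sizes (general dominance).
set.seed(4)
n <- 400
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
d$y <- 0.6 * d$x1 + 0.3 * d$x2 + 0.1 * d$x3 + rnorm(n)

preds <- c("x1", "x2", "x3")
r2 <- function(vars) {
  if (length(vars) == 0) return(0)
  summary(lm(reformulate(vars, "y"), data = d))$r.squared
}

gen_dom <- sapply(preds, function(v) {
  others <- setdiff(preds, v)
  subsets <- list(character(0))  # start with the empty subset
  for (k in seq_along(others))
    subsets <- c(subsets, combn(others, k, simplify = FALSE))
  increments <- sapply(subsets, function(s) r2(c(s, v)) - r2(s))
  sizes <- sapply(subsets, length)
  mean(tapply(increments, sizes, mean))  # average within, then across, sizes
})
round(gen_dom, 3)
sum(gen_dom)  # the general dominance weights sum to the full-model R^2
```

Note that the general dominance weights always sum to the full-model R², which is why they can be read as a partition of explained variance.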

Another measure is the so-called relative weight, which produces similar estimates. The method is described as follows:

Quote:http://pareonline.net/pdf/v17n9.pdf

In contrast, when independent variables are correlated, relative weights address this problem by using principal components analysis to transform the original independent variables into a set of uncorrelated principal components that are most highly correlated with the original independent variables (Tonidandel & LeBreton, 2010). These components are then submitted to two regression analyses. The first analysis is a regression that predicts the dependent variable from these uncorrelated principal components. Next, the original independent variables are regressed onto the uncorrelated principal components. Finally, relative weights are computed by multiplying squared regression weights from the first analysis (regression of dependent variables on components) with squared regression weights from the second analysis (regression of independent variables on components). Each weight can be divided by R2 and multiplied by 100 so that the new weights add to 100%, with each weight reflecting the percentage of predictable variance. Relative weights are unique as a measure of total effect in that they provide rank orderings of individual independent variables‘ contributions to a MR effect in the presence of all other predictors based on a computational method that addresses associations between independent variables by creating their uncorrelated "counterparts" (Johnson & LeBreton, 2004).
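The procedure quoted above can be sketched directly. This is one standard implementation (Johnson's eigendecomposition shortcut) on simulated data; yhat's rlw() gives the same kind of output:

```r
# Relative weights via the eigendecomposition of the predictor
# correlation matrix: orthogonalize the predictors, then combine the
# two sets of squared weights. Simulated data, made-up coefficients.
set.seed(5)
n  <- 500
x1 <- rnorm(n)
x2 <- 0.6 * x1 + 0.8 * rnorm(n)   # correlated with x1
x3 <- rnorm(n)
X  <- cbind(x1, x2, x3)
y  <- 0.5 * x1 + 0.3 * x2 + 0.2 * x3 + rnorm(n)

Rxx <- cor(X)
ev  <- eigen(Rxx)
# Lambda holds the correlations between the original predictors and
# their uncorrelated "counterparts"
Lambda <- ev$vectors %*% diag(sqrt(ev$values)) %*% t(ev$vectors)
beta   <- solve(Lambda) %*% cor(X, y)  # weights of the orthogonal variables
rw     <- drop((Lambda^2) %*% (beta^2))
names(rw) <- colnames(X)
round(rw, 3)
sum(rw)  # relative weights sum to the model R^2
```

As with the general dominance weights, the relative weights sum to the model R², so both can be reported as percentages of explained variance.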

Quote:A multidimensional approach for evaluating variables in organizational research and practice (LeBreton 2007)

Although these two indices tend to yield identical estimates of importance, there are instances when one index might be preferred over the other. For example, relative weights will probably be preferred for a large number of predictors because estimating general dominance weights can be computationally prohibitive (e.g., for 10 variables, the number of all-subsets regression models is 1,023), although the full regression model may threaten the desire to have a large N/p ratio. In contrast, dominance analysis may be preferred when the psychologist is interested in understanding complete and conditional forms of dominance, in addition to general dominance weights.

Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114, 542–551.

Azen, R., & Budescu, D. V. (2003). The dominance analysis approach for comparing predictors in multiple regression. Psychological Methods, 8, 129–148.

In the DA framework, it is commonly held that some of the credit a predictor contributes to the dependent variable may be represented in another variable when there is (multi)collinearity (see, for instance, Nathans et al. 2012, p. 3). Imagine that a portion of the beta of a motivation variable is hidden in the beta of IQ, so that motivation is under-estimated. In the SEM framework, on the other hand, you may have concluded that if IQ causes motivation rather than the reverse, you must attribute the indirect effect (the correlation of motivation and IQ) to IQ, not to motivation, and the impact of IQ will increase relative to motivation. The same applies to your income variable, which becomes stronger under dominance weight analysis. In this situation, practitioners would conclude that some of the effect of income had been wrongly attributed to the other variables, e.g., education.

So, the difference between the SEM and dominance analysis (DA) frameworks is that the former explicitly allows the researcher to add the indirect effect to the direct effect, whereas DA assumes no indirect effect, only a common effect. This makes a huge difference here. In SEM, the total effect of an independent variable is obtained after determining the causal pathways between the predictors, theoretically, by removing the implausible but equivalent models (see MacCallum, 1993). In DA, the total effect is obtained after correcting for model dependence. Thus, with cross-sectional data, the difference between SEM and DA is that SEM is theoretically driven, whereas DA gives you the total effect of the independent variable not theoretically but purely analytically. Neither seems optimal: in SEM (without longitudinal data) you do not have a true value, you simply speculate about which causal pathway is the most plausible; in DA, you have a true, unbiased value of the MR coefficients, but without ever answering the question of why the predictors are correlated (i.e., the likelihood of competing causal pathways).

In this case, my opinion is that the DA approach is correct for the purpose of determining which predictor is stronger, but only under a very strong assumption: that we assume no specific causal pathways. Another problem I have with DA is that it seems to treat the fact that regression coefficients differ across subset models as an anomaly in itself, which is why DA attempts to correct for subset non-invariance. I'm not sure it should be considered an anomaly; I suspect there is a reason why the coefficients depend on the subset models. I haven't thought about that much, anyway.

In general, I'm OK with you using the DA approach, but you have to make explicit what DA does and does not assume. Besides, you must remember that dominance and relative weights, according to Tonidandel & LeBreton (2011, p. 5), cannot circumvent severe collinearity; they become misleading in its presence. In your case, one variable has VIF = 8.52, which is borderline. It's not necessarily invalid, but it's a possible limitation.

Tonidandel, S., & LeBreton, J. M. (2011). Relative importance analysis—A useful supplement to regression analyses. Journal of Business and Psychology, 26, 1-9.

Nathans, L. L., Oswald, F. L., & Nimon, K. (2012). Interpreting multiple regression: A guidebook of variable importance. Practical Assessment, Research & Evaluation, 17(9), 1–19.

Here's a reminder of DA, if you really want to apply it, even though I'm not a big fan.

Quote:http://journal.frontiersin.org/Journal/1...00044/full

Like correlation coefficients, structure coefficients are also simply bivariate Pearson rs, but they are not zero-order correlations between two observed variables. Instead, a structure coefficient is a correlation between an observed predictor variable and the predicted criterion scores, often called “Y-hat” (ŷ) scores (Henson, 2002; Thompson, 2006). These ŷ scores are the predicted estimate of the outcome variable based on the synthesis of all the predictors in the regression equation; they are also the primary focus of the analysis. The variance of these predicted scores represents the portion of the total variance of the criterion scores that can be explained by the predictors. Because a structure coefficient represents a correlation between a predictor and the ŷ scores, a squared structure coefficient informs us as to how much variance the predictor can explain of the R2 effect observed (not of the total dependent variable), and therefore provides a sense of how much each predictor could contribute to the explanation of the entire model (Thompson, 2006).

Structure coefficients add to the information provided by β weights. Betas inform us as to the credit given to a predictor in the regression equation, while structure coefficients inform us as to the bivariate relationship between a predictor and the effect observed without the influence of the other predictors in the model. As such, structure coefficients are useful in the presence of multicollinearity. If the predictors are perfectly uncorrelated, the sum of all squared structure coefficients will equal 1.00 because each predictor will explain its own portion of the total effect (R2). When there is shared explained variance of the outcome, this sum will necessarily be larger than 1.00. Structure coefficients also allow us to recognize the presence of suppressor predictor variables, such as when a predictor has a large β weight but a disproportionately small structure coefficient that is close to zero (Courville and Thompson, 2001; Thompson, 2006; Nimon et al., 2010).
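The identity behind this is that a structure coefficient equals the predictor's zero-order validity divided by the multiple correlation, r(x, ŷ) = r(x, y) / R. A minimal sketch, with made-up correlations (not from the paper under review):

```python
from math import sqrt

# Hypothetical zero-order validities and model R^2 (illustrative only).
r_xy = {"x1": 0.50, "x2": 0.40}
R2 = 0.30

# Structure coefficient: correlation between a predictor and the y-hat scores.
structure = {p: r / sqrt(R2) for p, r in r_xy.items()}
squared = {p: s ** 2 for p, s in structure.items()}

print(squared)                 # share of the R^2 effect each predictor can explain
print(sum(squared.values()))   # exceeds 1.0 when predictors share explained variance
```

Here the squared structure coefficients sum to about 1.37, illustrating the point in the quote: the sum exceeds 1.00 exactly when the predictors share explained variance of the outcome.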

http://pareonline.net/pdf/v17n9.pdf

The results of a commonality analysis can aid in identifying where suppressor effects occur and also how much of the regression effect is due to suppression. Negative values of commonalities generally indicate the presence of suppressor effects (Amado, 1999). In the suppression case, a variable in a particular common effect coefficient that does not directly share variance with the dependent variable suppresses variance in at least one of the other independent variables in that coefficient. The suppressor removes the irrelevant variance in the other variable or variables in the common effect to increase the other variable(s)‘ variance contributions to the regression effect (DeVito, 1976; Zientek & Thompson, 2006). Commonality analysis is uniquely able to both identify which variables are in a suppressor relationship and the specific nature of that relationship.

Summing all negative common effects for a regression equation can quantify the amount of suppression present in the regression equation.

Overall, these findings supported how both numeric was the most significant direct contributor and arithmetic was the second most important direct contributor to predicting variance in reasoning, as reflected across different measures of direct, total, and partial effects. It is important to note that this is not always the case: One independent variable may be deemed the most important through one lens, and another independent variable may achieve that status through another lens. Results also supported from multiple lenses how addition functioned as a suppressor in this regression equation. Reliance on beta weights alone would not have pointed out the nature of the suppressor effect.
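To make the suppression diagnostic concrete, here is a minimal two-predictor commonality decomposition in a classic suppression setup. The correlations are fabricated for illustration: x2 is uncorrelated with y but correlated with x1:

```python
# Hypothetical correlations (illustrative only).
r1, r2, r12 = 0.50, 0.00, 0.50   # r(x1,y), r(x2,y), r(x1,x2)

# R^2 for two standardized predictors.
R2 = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)

# Commonality decomposition: unique effects plus the common effect equal R^2.
U1 = R2 - r2**2          # unique to x1
U2 = R2 - r1**2          # unique to x2
C = r1**2 + r2**2 - R2   # common effect; negative => suppression

print(round(R2, 4), round(U1, 4), round(U2, 4), round(C, 4))
```

Even though x2 explains none of y on its own, R² rises from 0.25 to about 0.33 when it is added, and the common effect comes out negative, which is precisely the signature of suppression that commonality analysis detects.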

The Dominance Analysis Approach for Comparing Predictors in Multiple Regression (Azen, Budescu, 2003)

For example, consider a situation in which Y is to be modeled by three predictor variables, X1, X2, and X3. Suppose that X1 and X3 are each very highly correlated with Y, while X2 is only moderately correlated with Y. Furthermore, suppose X1 and X3 are highly correlated with each other but are not highly correlated with X2. In this case, once X1 is included in the regression model, X3 will not make any additional contribution to the prediction of Y. Therefore, once X1 is in the model, X3 will be “less important” than X2, even though the correlation between Y and X3 is higher than the correlation between Y and X2. Similarly, once X3 is in the model, X1 will appear to be less important than X2. To further complicate matters, it has been shown by several researchers (e.g., Shieh, 2001) that in a regression with two correlated predictors ρ²(Y,Ŷ) may exceed the sum of the squared correlations ρ²(Y,X1) + ρ²(Y,X2). Therefore, measures that take the predictors’ intercorrelations into account have been suggested to address importance in the case of correlated predictors.

However, the contributions of a suppressor variable may deviate from this monotonic pattern - paradoxically, the predictor makes a more substantial contribution in more complex models. This result is shown in Table 6; while the average additional contributions (i.e., the conditional dominance measures) of X1 and X3 decrease with model size, k (from 0.250 to 0.239 to 0.234 in the case of X1, and from 0.062 to 0.042 to 0.026 in the case of X3), these measures actually increase in the case of X2, the classic suppressor (from 0.000 to 0.014 to 0.034). Although not shown, similar patterns of results are observed in examples of negative and reciprocal suppression (as defined by Tzelgov & Henik, 1991).

Some of the key features of DA are not restricted to the ρ²(Y,Ŷ) (or the sample R²) metric. In fact, it is easy to show that any measure of model fit that is a monotone (increasing or decreasing) function of the model’s error sum of squares (SSE) would yield the same dominance pattern. In other words, if DA based on the squared multiple correlations shows that in a certain sample or population Xi dominates Xj (generally, conditionally, or completely), it must also be the case that DA of the same data set based on any other measure that is monotonically related to SSE would similarly show Xi to dominate Xj (generally, conditionally, or completely, respectively). A formal proof of this claim for three such measures (adjusted R², Akaike’s Information Criterion, and Cp) is given in Azen (2000).

We believe that DA is superior to most other approaches of determining relative importance because of its more natural and intuitive interpretation, which allows a general approach to predictor comparisons. Unlike most other approaches, which start with a particular statistic (e.g., correlation coefficient) and try to “extract” its interpretation, DA starts with a clear definition of importance and identifies the measures that address the key question: Is variable Xi more or less (or equally) important than variable Xj in predicting Y in the context of the predictors included in the selected model? The definition of importance is straightforward - Xi is considered more important than Xj if it contributes more to the prediction of the response than does Xj; in other words, if one had to choose only one of the two predictors, Xi is judged more important if it would be chosen over Xj.

As an analogy, consider the question: Is City A warmer than city B? The question can be answered in the affirmative fashion at three distinct levels that tolerate various amounts of inconsistency: (a) A is warmer than B if its temperature is higher every day of the year (complete); (b) A is warmer than B if its average monthly temperature is higher every month of the year but not necessarily every day (conditional); and (c) A is warmer than B if its average annual temperature is higher (general). We recommend that dominance be reported at the strongest level (complete) when it can be established, because it implies that the weaker levels of dominance have also been achieved. If dominance cannot be established at the complete level, we suggest reporting this; the fact that dominance is undetermined is useful in its own right.

Considering the last paragraph, if you use dominance instead of relative weights, you can also report the pattern of dominance (conditional, general, or complete) to get the best overview of your predictors. But that is just a suggestion; even the proponents of DA don't say you must report it. It's just a bonus. In your situation, your six independent variables make the operation fairly intensive (2^6 − 1 = 63 subset models).

....
....

Anyway, all this stuff is not necessary. I prefer to control for "education" by using regression in a different way. Say your education variable takes values 1-20. You can collapse these values into group levels, e.g., 1-5, 6-9, 10-12, 13-15, 16-20, and run the regression with the income variable within each education group. That's a crude way to divide the original variable into group levels, but if you have a large enough sample and if each group level does not span values that are too wide, you can use it. Obviously, the more group levels you have, the more regressions you must run, and the more time-consuming the method becomes. Among the item bias techniques, Mantel-Haenszel, the Simultaneous Item Bias test, and the lesser-known P-DIF standardization method are based on this approach. However, looking at your dataset and considering the values of the variables, I'm not sure how I would do it if I were you. So you can probably disregard this alternative, as it also requires quite a large sample.
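A minimal sketch of this stratified approach, using fabricated (education, income, SAT) rows and the crude bands above; the rows are constructed so that the within-band slope of SAT on income is exactly 2.0, purely for illustration:

```python
from collections import defaultdict

# Fabricated rows: (education level 1-20, income, sat). Constructed so the
# within-band slope of sat on income is exactly 2.0 for every band.
rows = [(edu, inc, 2 * inc + 10 * ((edu - 1) // 4))
        for edu in range(1, 21) for inc in (30, 50, 70, 90)]

def band(edu):
    """Collapse education 1-20 into five crude group levels."""
    for i, hi in enumerate((5, 9, 12, 15, 20)):
        if edu <= hi:
            return i

def slope(pairs):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    return sxy / sxx

groups = defaultdict(list)
for edu, inc, sat in rows:
    groups[band(edu)].append((inc, sat))

for b in sorted(groups):
    print(b, round(slope(groups[b]), 4))  # income slope within each education band
```

Comparing the within-band slopes to the pooled slope is the point of the exercise: if income matters mostly through education, the within-band slopes shrink relative to the pooled one.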

Finally, considering the following :

Quote:Race is not an ordinal variable because races cannot be ranked, but, in order to apply a consistent approach to this categorical variable akin to the approach taken for parents’ income and education, the study required that racial groups be ordered according to national SAT-score gaps, which persisted for all study years...

I know your race variables were continuous and this can be seen in the data, but the passage quoted above is not entirely clear to me.

Also, in your next version(s), will it be possible to add the page number(s) either at the top or bottom?
 Reply
#27
@menghu1001

Okay, let me recap, from my perspective, the trajectory of this exchange.

1- I submit a paper of MR, in which I acknowledge the serious issue of multicollinearity.
2- You object, basically saying that only longitudinal SEM is acceptable because you had read a study about multicollinearity and commonality analysis, even though you didn’t know that it was about multicollinearity.
3- I reiterate that “I agree [multicollinearity] is an important issue,” but I point out that the explained variance of my model is very high, and the model achieved this “using the variables specified without additional interaction variables.” Therefore, the interactions specified by a longitudinal SEM can’t provide much additional value. (This is the point that you have never legitimately addressed, but I still remember it.)
4- You disagree, maintaining your original view, which is that “the total effect” has “4 possibilities” for its composition: “A. x1 causes x2. B. x2 causes x1. C. each variable causes the other … D. x1 and x2 are caused by another, omitted variable.” To support your view about explained variance including such interaction effects (A, B, and C), you reveal your study about multicollinearity and commonality analysis, which says, “Interaction effects make no contribution to explained variance in the typical regression equation (not having been specified by the researcher and therefore mathematically excluded), while common effects are nearly always present.” (I swear that I did not know about this study until you showed it to me, so I could not be guilty of plagiarism.)
5- After realizing that we have been talking about multicollinearity all this time, I calculate some VIFs. In the case of my maximum standard error, 96.8 (since a high standard error is evidence of possibly problematic multicollinearity), none of my variables fail a strict standard of VIF (VIF < 5). In another MR iteration, my education variable has a standard error of only 16.4, but its VIF is 8.52. This still meets an acceptable definition of an acceptable amount of multicollinearity (VIF < 10), but, in the spirit of generosity, I add extensive commonality analysis in the form of a couple of graphs. I also go a little overboard with the generosity by saying, “one could argue for the unlikely possibility that parents’ income constitutes the true root cause of most of its effects found in common with the other variables and, thereby, would be the most important influence.” This statement does seem to contradict your source. If multicollinearity is “nearly always present,” when exactly are interaction effects not “mathematically excluded” from the explained variance? (I actually posted a question about this on the Talk Stats forum, but no one replied. I guess I stumped them.)
6- You are still not happy. You try to claim that interaction effects can still be a part of the explained variance “because it’s a decomposition of an effect that has been already here.” In other words, you disagree with your own source, but you still haven’t provided an alternative source that defends your view, and, therefore, you still haven’t addressed my original point. For some reason, you redo one iteration of my commonality analysis (out of 80, thank you very much). For some reason, you add new demands for dominance analysis and residuals.

<blockquote>I don't see where you have discussed the consequence of this finding concerning the main idea of the document that is "parents income is a poor predictor of SAT scores"</blockquote>

You need to reconcile your understanding of VIF with your beliefs about commonality analysis. From Professor <a href="http://www.statisticalhorizons.com/multicollinearity">Paul Allison</a>:

<blockquote>It’s called the variance inflation factor because it estimates how much the variance of a coefficient is “inflated” because of linear dependence with other predictors. Thus, a VIF of 1.8 tells us that the variance (the square of the standard error) of a particular coefficient is 80% larger than it would be if that predictor was completely uncorrelated with all the other predictors.</blockquote>
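The relationship Allison describes is mechanical: VIF for predictor j is 1 / (1 − R²_j), where R²_j comes from regressing predictor j on the other predictors. A minimal sketch with a made-up R²_j:

```python
from math import sqrt

def vif(r2_j):
    """VIF for predictor j, where r2_j is the R^2 from regressing
    predictor j on all the other predictors."""
    return 1.0 / (1.0 - r2_j)

# Hypothetical value: predictor j shares 4/9 of its variance with the others.
v = vif(4 / 9)
print(v)        # 1.8 -> coefficient variance is 80% larger
print(sqrt(v))  # ~1.34 -> standard error is ~34% larger
```

The square root of the VIF gives the inflation of the standard error rather than the variance, which is why a VIF of 1.8 corresponds to only a modest widening of the confidence interval.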

So, if you saw the results of a commonality analysis that showed a total effect of an independent variable that was twice as big as its unique effect, you would pull your hair out and demand an explanation. However, I seriously doubt that there is one, single statistician on the entire surface of planet Earth who is worried about a VIF of 2. I admitted that the bachelor’s degree’s VIF was moderately high, and I offered the omitted variable of “genetic and developmental effects on cognitive ability” as a possible explanation. My prior discussion of GCTA already admitted and supported this.

Look, we seem to be talking at cross-purposes, and I think we’re reaching a point of diminishing returns on this whole peer-review thing. I’m kind of wondering if you’re trying to change the subject (with the dominance analysis) after the whole multicollinearity thing. If this is how your journal is going to address MR studies in the future, I really don’t agree, but it’s not my journal. I can post the study on my own blog. That’s totally fine. Since I’m not using my real name, anyway, it’s not for my CV. So, there’s no problem.

If you’re still interested in the study, here it is with some very minor improvements (and changes in red).


Attached Files
.pdf   Parents Income is a Poor Predictor of SAT Scores - ODP - changes.pdf (Size: 1.33 MB / Downloads: 489)
.pdf   Parents Income is a Poor Predictor of SAT Scores - ODP.pdf (Size: 1.33 MB / Downloads: 511)
 Reply
#28
(2014-Jul-01, 03:47:55)nooffensebut Wrote: @menghu1001
If this is how your journal is going to address MR studies in the future, I really don’t agree, but it’s not my journal. I can post the study on my own blog. That’s totally fine. Since I’m not using my real name, anyway, it’s not for my CV. So, there’s no problem.

If you’re still interested in the study, here it is with some very minor improvements (and changes in red).


Hi,
you can publish your paper in this journal, provided that it's approved by 3 reviewers. In this journal, reviewers have NO veto power, so even if one reviewer is against publication, the paper can still be published as long as 3 other reviewers approve it. I see that the discussion here has come to a standstill. So far 2 reviewers (Philbrick and I) have approved publication; if another reviewer (Emil, Chuck?) approves, this paper can be published. Of course, the discussion between Meng Hu and the author can continue in the post-publication review.
 Reply
#29
(2014-Jul-01, 03:47:55)nooffensebut Wrote: I submit a paper of MR, in which I acknowledge the serious issue of multicollinearity.


My problem is that you are too quick to conclude that "Parents’ Income is a Poor Predictor of SAT Score". Any researcher who fails to address such issues and questions must at least acknowledge the possibility that the result will not hold under some specific conditions (e.g., income is a causal factor behind education and/or other variables). I repeat for the second time: you don't need to agree with me. But when you acknowledge the possibility that income can drive the other factors, your argument for rejecting it is wrong. If you disagree, we have to discuss it; you must show me where I am wrong. Otherwise, I doubt any other journal will accept a publication whose author refuses to respond to the objections.

(2014-Jul-01, 03:47:55)nooffensebut Wrote: You object, basically saying that only longitudinal SEM is acceptable because you had read a study about multicollinearity and commonality analysis, even though you didn’t know that it was about multicollinearity.


You get it wrong again. As I already said, the absence of multicollinearity does not imply that the predictors are uncorrelated. That being said, the word multicollinearity is ambiguous: as I usually encounter it, it refers to predictors being "too highly" correlated, while some people, for instance the proponents of DA, use collinearity and multicollinearity interchangeably. But if multicollinearity denotes a situation where the correlations between independent variables are too high, then your whole point about multicollinearity misses the entire discussion, which was initially about what happens when the independent variables happen to be correlated at all. My conclusion is that we must be cautious.

Quote:After realizing that we have been talking about multicollinearity all this time, I calculate some VIFs.

Our problem is not really about multicollinearity. To recall, I said that the mere fact that the independent variables are correlated forces you to speculate about the most plausible causal pathways. If you think income drives education and thereby affects SAT, then do SEM: multiply the path income->educ by the path educ->SAT, and add the result to the direct path income->SAT. For SAT-M, my path analysis model (in AMOS) tells me that bachelor and income correlate at 0.63. Obviously, if the correlation were small, attributing the indirect path to income would not make much of a difference. But here, if bachelor correlates (directly) with SATm at 0.68, multiplying these two paths gives an indirect effect for income of about 0.43. This assumes income causes bachelor and that there is no mutual causality, which is itself a strong assumption. Since your study does not answer this question, I think you should at least recognize the possibility that income can have a big impact under some very specific assumptions; whether they are likely or not, you are the one who must say so and provide the explanation. In any case, it's perfectly normal to acknowledge the limitations of a study.
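The path arithmetic above is just a product and a sum. A minimal sketch with the figures quoted; note the direct income->SATm path is a placeholder, since its value is not given in this discussion:

```python
# Path coefficients from the discussion (standardized).
income_to_bachelor = 0.63   # income -> bachelor
bachelor_to_satm = 0.68     # bachelor -> SATm (direct path)

# Indirect effect of income on SATm through bachelor (product of paths).
indirect = income_to_bachelor * bachelor_to_satm
print(round(indirect, 4))   # ~0.43, as stated above

# Total effect = direct + indirect; the direct value here is hypothetical.
direct = 0.05               # placeholder, not from the discussion
total = direct + indirect
print(round(total, 4))
```

The whole calculation rests on the assumed direction income -> bachelor; reversing that arrow would assign the same product to bachelor instead, which is exactly the assumption SEM forces you to make explicit.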

I already responded to points 3 and 4, and I don't appreciate how you avoid my answer. Your citation says this: "Specifically, interaction effects reflect a contrast that combines different levels or values of at least two independent variables, and they indicate there is a significant effect in the data which is not a main effect." By that, the authors (Seibold & McPhee 1979) were thinking of interaction terms. In my message above I describe what an interaction term is, and it has nothing to do with the SEM decomposition of the total effect into indirect and direct effects. The SEM approach, to repeat, only asks you to attribute the indirect effect to the predictor you think is the most likely cause. That's an assumption of SEM that DA does not make.

Quote:If multicollinearity is “nearly always present,” when exactly are interaction effects not “mathematically excluded” from the explained variance???

Concerning the "interaction effect" in SEM, which is not really one, the link below shows you what an interaction is. Look for "Figure A.2 A path model with a residualized product term that represents an interaction effect".
http://www.nyu.edu/classes/shrout/G89-22...iables.htm

As I said in my previous comment, when you add an additional variable in the form of an interaction term, if it has an effect, the R² should increase.

Since the issue has nothing to do with interaction effects, to answer your question, it's just as I said before. You have the possibility of conducting longitudinal SEM, but if you have only cross-sectional data, then you must either make assumptions about the plausible causal pathways (you can provide and cite studies supporting your views) or, if you can't, say that the weak effect of income may not hold under other conditions and that more research is needed. That's usually how a section discussing the limitations of a study is written. A lot of researchers admit the limitations of their study because (1) they don't want to rush to conclusions, (2) research is not easy, and (3) it gives other people a hint about directions for future research, tests of robustness, etc.

For instance, you should have said in your paper that your regressions violate the assumption of normal residuals. When you try this...

REGRESSION
/DESCRIPTIVES MEAN STDDEV CORR SIG N
/MISSING LISTWISE
/STATISTICS COEFF OUTS CI(95) BCOV R ANOVA COLLIN TOL CHANGE ZPP
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT satm
/METHOD=ENTER bachelors Over100K size participation year native
/SCATTERPLOT=(*ZRESID ,*ZPRED)
/RESIDUALS HISTOGRAM(ZRESID) NORMPROB(ZRESID).

REGRESSION
/DESCRIPTIVES MEAN STDDEV CORR SIG N
/MISSING LISTWISE
/STATISTICS COEFF OUTS CI(95) BCOV R ANOVA COLLIN TOL CHANGE ZPP
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT satv
/METHOD=ENTER bachelors Over60K size participation year native
/SCATTERPLOT=(*ZRESID ,*ZPRED)
/RESIDUALS HISTOGRAM(ZRESID) NORMPROB(ZRESID).

You'll see the regression residual plot shows some pattern of heteroscedasticity, especially for SATM. That does not invalidate the entire analysis, but it tells you to be careful about its generalizability. You may think I request a lot; yes. But science must be treated with caution. Nothing should be hastened. When someone simply refuses to add words of caution despite evidence calling for them, it becomes suspect.
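As a crude, Goldfeld-Quandt-style sketch of what those residual plots reveal, here the residuals are fabricated so that their spread grows with the fitted values; comparing residual variance across the two halves of the fitted range flags the pattern numerically:

```python
# Fabricated (fitted value, residual) pairs whose residual spread grows
# with the fitted value -- the pattern the SPSS plots above would show.
pairs = [(x, 0.05 * x * (-1) ** i) for i, x in enumerate(range(1, 41))]
pairs.sort(key=lambda p: p[0])  # order by fitted value

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

half = len(pairs) // 2
low = [r for _, r in pairs[:half]]    # residuals at small fitted values
high = [r for _, r in pairs[half:]]   # residuals at large fitted values

ratio = variance(high) / variance(low)
print(round(ratio, 2))  # a ratio well above 1 signals heteroscedasticity
```

Under homoscedasticity the ratio hovers near 1; with these fabricated residuals it comes out several times larger, mirroring the fan shape one sees in the ZRESID-by-ZPRED scatterplot.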

Quote:I’m kind of wondering if you’re trying to change the subject (with the dominance analysis) after the whole multicollinearity thing.

I answered your questions. But you don't seem to like the idea of being wrong, and you go even further, saying the discussion here will probably soon become useless; I never thought of it that way. I spent a large amount of my free time reviewing your article, and I doubt many people would go that far. (I have indeed heard that a lot of reviewers do sloppy jobs; I'm not like that, which is why I'm hard to deal with.) I can easily perceive a tone of irritation in your comments, and it began a while ago. I never said anything before, but if you continue to speak to me like this, I'm not sure I will continue. When I think about the huge number of days I spent demonstrating that your application of DA is incomplete and wrong, reading "you’re trying to change the subject with the dominance analysis" or "For some reason, you redo one iteration of my commonality analysis" is not acceptable to me. You don't even bother to answer my objections. I can keep going if you want, but I'm sure someone else would have already stopped. I don't like your manner.

Anyway, I did point out that your DA approach is wrongly applied. It's not how it must be done, as I demonstrated above. To recall, if you try the DA approach, you must also compute dominance and/or relative weights; that's the most relevant part of DA. You can see above that income is not 0.04 or -0.05 but instead something like 0.11 under the DA approach. If you don't think DA adds anything useful, you can remove that part, but when you use DA, you must do it well. And you must also present the DA approach in the method section, because this paragraph...

Quote:Due to the high probability of variable intercorrelation, variance inflation factors were examined in STATA for the composite SAT iteration with the highest coefficient of determination and the iteration with the highest standard error of one of the independent variables, and extensive commonality analysis in R was added, as well.

... is not a satisfying introduction of DA.

I also don't understand your use of the citation of Paul Allison. I'm sure you misinterpret it, since you reacted to the citation with this sentence: "So, if you saw the results of a commonality analysis that showed a total effect of an independent variable that was twice as big as its unique effect, you would pull your hair out and demand an explanation". Actually, that's wrong. What he merely says is that the standard errors of correlated variables will be inflated. See Wikipedia for another statement of this kind:

Quote:http://en.wikipedia.org/wiki/Variance_inflation_factor

The square root of the variance inflation factor tells you how much larger the standard error is, compared with what it would be if that variable were uncorrelated with the other predictor variables in the model.

If the variance inflation factor of a predictor variable were 5.27 (√5.27 = 2.3) this means that the standard error for the coefficient of that predictor variable is 2.3 times as large as it would be if that predictor variable were uncorrelated with the other predictor variables.

And here too :
http://www.how2stats.net/2011/09/varianc...r-vif.html

Quote:My prior discussion of GCTA already admitted and supported this.

You mean the reference to the Marioni (2014) study? It merely says that the genetic correlation of SIMD (an index of deprivation) with IQ is modest (compared to that of education with IQ), whereas the high VIF of bachelor says only that bachelor is too highly correlated with the other independent variables. SAT score is your dependent variable, not an independent variable.
 Reply
#30
@menghu1001

<blockquote><cite><span> (07-02-2014 08:45 PM)</span>menghu1001 Wrote: <a href="http://www.openpsych.net/forum/showthread.php?pid=660#pid660" class="quick_jump">&nbsp;</a></cite>My problem is that you are too prompt to conclude that "Parents’ Income is a Poor Predictor of SAT Score" …. the word multicollinearity is ambiguous …. Our problem is not really about multicollinearity…. i think you should at least recognize the possibility that income can have big impact under some very specific assumptions, unlikely or not you're the one to say and provide the explanation…. an interaction term … has nothing to do with the SEM decomposition of total effect into indirect and direct effect.</blockquote>


I did state the possibility that income can have a big impact in order to find agreement with you, which failed, and I was wrong to do so, because income <i>cannot</i> have a big impact. I am taking that statement back out of my paper. Of course parents’ income correlates with parents’ education; that is obvious and was in my paper from the beginning. Multicollinearity is present in my models, as it usually is in social science research. I appreciate that you are willing to admit that you don’t know what multicollinearity is, but you need to learn about it, because it is <i>precisely</i> the subject of discussion. What I don’t appreciate is that you obfuscate by borrowing the language of path analysis (“total effect,” “indirect effect,” and “direct effect”) when you clearly mean interaction effects, in order to keep the income variable alive. My models specified the income variable. Income did not have a large impact on the coefficients of determination, which were large, and interaction effects are mathematically excluded. Professor Allison agrees with Seibold, McPhee, and me:

<blockquote>I don’t think unspecified interactions are likely to explain high R^2 and multicollinearity.</blockquote>


<blockquote><cite><span> (07-02-2014 08:45 PM)</span>menghu1001 Wrote: <a href="http://www.openpsych.net/forum/showthread.php?pid=660#pid660" class="quick_jump">&nbsp;</a></cite>SAT score is your dependent var, not the independent var. </blockquote>

The GCTA bivariate genetic correlation between education (my independent variable) and cognitive ability is nearly perfect.

Fundamentally, you cannot legitimately conflate multicollinearity with interaction effects in order to keep the income variable. If you could, you would be able to reject the core of the paper and reel your favorite approach, SEM, back in. You insist on doing so without providing any evidence for your unorthodox view, which makes you unreasonable. My advice is to convince your colleagues to let your journal state up front that this type of MR research will not be published and that SEM is strongly encouraged. If that is a good thing, then you will have distinguished your journal from every other through a unique, principled stance.


Attached Files
.pdf   Parents Income is a Poor Predictor of SAT Scores - ODP.pdf (Size: 1.33 MB / Downloads: 586)
 Reply
 