I said before that I had a paper on R². It took me an eternity to find it, but I think I have it now. The reason I failed to locate it in time is not only that I have a lot of documents, but also that, because of their number, I sort them into multiple folders: one for CFA/SEM, one for DIF, one for multiple regression, one for basic statistics, and so on. The paper presented a tool distributed as an R package, so I had filed it under my "R packages" folder, while I kept searching my "regressions" and "CFA/SEM" folders... in vain.

You can find a free copy here:

http://www.tarleton.edu/institutionalres...ethods.pdf
The proof is given directly in Table 6, where R² is the sum of the unique and common effects. They also say:

Quote:Also called element analysis, commonality analysis was developed in the 1960s as a method of partitioning variance (R2) into unique and nonunique parts (Mayeske et al., 1969; Mood, 1969, 1971; Newton & Spurrell, 1967).
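To illustrate this partition, here is a minimal sketch in Python (NumPy only, on made-up data; the coefficients and variable names are my own invention, not from the paper). For two predictors, the unique effects are computed by subtracting the one-predictor R² values from the full-model R², and the common effect is what remains; by construction the three components sum exactly to the full R²:

```python
import numpy as np

def r2(y, *xs):
    """R-squared from an OLS fit of y on the given predictors (plus intercept)."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.standard_normal(n)
x2 = 0.5 * x1 + rng.standard_normal(n)    # predictors are correlated
y = 0.3 * x1 + 0.4 * x2 + rng.standard_normal(n)

r2_full = r2(y, x1, x2)
r2_x1, r2_x2 = r2(y, x1), r2(y, x2)

unique1 = r2_full - r2_x2           # unique effect of x1
unique2 = r2_full - r2_x1           # unique effect of x2
common  = r2_x1 + r2_x2 - r2_full   # common (shared) effect

# unique1 + unique2 + common reproduces r2_full, as in Table 6
```

The identity holds algebraically, so it is exact up to floating-point error; the interesting part is that `common` is nonzero whenever the predictors overlap.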

And because I'm a little curious, I also searched Google Scholar and found this one:

Seibold DR, McPhee RD. Commonality analysis: A method for decomposing explained variance in multiple regression analyses. Human Communication Research. 1979;5:355–365.

Quote:Whether in the “elements analysis” of Newton and Spurrell (1967), the “components analysis” of Wisler (1969), or the “commonality analysis” of Mood (1969, 1971), it was also noted that the unique effects of all predictors, when added, rarely summed to the total explained variance.

So with even five predictors, R² may be decomposed into 31 elements. Since 26 of these are commonalities, the difficulty of interpreting higher-order common effects grows quickly as the number of predictors increases.
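The counting behind those figures is easy to verify: each non-empty subset of the k predictors contributes one component, so there are 2^k − 1 in total, of which k are unique effects and the rest are commonalities. A quick Python check (the function name is mine, for illustration):

```python
def commonality_counts(k):
    """Number of commonality-analysis components for k predictors."""
    total = 2**k - 1         # one component per non-empty subset of predictors
    unique = k               # singleton subsets: the unique effects
    common = total - unique  # everything else: the commonalities
    return total, unique, common

print(commonality_counts(5))  # → (31, 5, 26), matching the figures above
```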

And by the same token, I also found this :

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2930209/
It says the same thing; I have only skimmed it rather than read it carefully, but the first paper is more than sufficient.

(2014-Jun-16, 01:51:23)nooffensebut Wrote: Okay, so beta coefficients can’t see the total effect, but coefficients of determination can because they can in an SEM model. I assume you are referring to some other research project you have, but if it is not MR, how does that prove that the coefficient of determination in an MR model can see the same thing?

See above. But regardless of that, it's common sense: MR and SEM are both multiple regression, the same machinery. It's just that SEM offers more possibilities and is therefore more complex. Look here: the same numbers appear everywhere.

http://humanvarietiesdotorg.files.wordpr...ession.png
http://humanvarietiesdotorg.files.wordpr...os-sem.png
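To make the direct-versus-total distinction concrete, here is a small simulation sketch in Python (NumPy only; the path coefficients and variable names are invented for illustration, not taken from the linked figures). When x1 affects y both directly and indirectly through x2, the multiple-regression coefficient on x1 recovers only the direct path, while a simple regression of y on x1 recovers the total effect (direct plus indirect), exactly the decomposition an SEM path diagram makes explicit:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Assumed causal structure (illustrative): x1 -> x2 -> y, plus x1 -> y directly
a, b, c = 0.5, 0.3, 0.4               # paths: x1->x2, x1->y (direct), x2->y
x1 = rng.standard_normal(n)
x2 = a * x1 + rng.standard_normal(n)
y = b * x1 + c * x2 + rng.standard_normal(n)

def slopes(y, *xs):
    """OLS slope coefficients of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones_like(y)] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

direct = slopes(y, x1, x2)[0]  # MR coefficient on x1: estimates b, the direct path
total = slopes(y, x1)[0]       # simple regression: estimates b + a*c, the total effect

# direct ≈ 0.3 while total ≈ 0.3 + 0.5*0.4 = 0.5
```

The point is not which estimate is "right" but that they answer different questions: the MR output alone does not display the indirect component a·c.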
Quote:Therefore, any MR study that mentions a beta value is wrong because beta values assume that there is some purpose in standardizing a variable for the purpose of a comparison, which is wrong according to your research.

No. In my last comment I said you can make some assumptions, but these should be either theoretically grounded or supported/suggested by previous research. This relies on strong assumptions, of course, and it makes your interpretation heavily dependent on those assumptions. It is doable, but you cannot say there is no ambiguity in MR.

Emil Wrote:A small or zero b does not prove that a variable has a small total effect, sure. This does not seem to be news.

Considering that many, if not all, researchers interpret MR coefficients the wrong way, I have the impression this is news to them. If it were not, I don't understand why they keep saying such things.

In any case, I'm not arguing that the author must agree with me here. As I said, my opinion is not "recognized". However, if you want me to approve, the author has two options:

1. He beats me in the ongoing duel, in which case I will withdraw my argument and approve.

2. He makes explicit in the text that he acknowledges the "possibility" that MR coefficients estimate the direct effect (not the total effect) of the independent variables, and that in this case the total effects of these variables need not match what the MR coefficients display. For example, he could say (linking to my blog post) that the MR coefficient is the equivalent of the direct path in SEM models. I need only this modification, a sort of "caveat" for the readers, and then I will approve. Once again, I don't say he needs to endorse my views, only that he show he is open to this possibility.

Does that sound reasonable?

EDIT.
Emil Wrote:My understanding is that the point of the paper is to respond to the claim that SAT tests just measure parental income. The author shows that this does not pan out in regressions with other important variables.

I'm OK with that. I don't doubt the effects of these variables; I'm just thinking about their relative strength (e.g., which one is the strongest), or about the total effect of x1, x2, etc.

Some variables have a strong effect (e.g., participation), while others have an effect close to zero (e.g., income). And as I said before, I doubt income can cause education (unless education is measured at time 2 and income at time 1, in a repeated-measures analysis). But, as I also said, the more independent variables you have, the more you need to disentangle the possible indirect effects and spell out whether x1 can cause x2, and so on.

I never said he needs to redo the analysis; of course not. But he needs to make crystal clear how he interprets the output, and specifically the possible indirect effects.