(2014-Aug-19, 08:38:01)Emil Wrote: This ups the approval count to 3. Perhaps Meng Hu will agree to publication too?

I wanted to look at your study first because it came before Dalliard's, but what you're trying to do is becoming more and more complicated. Perhaps I should deal with Dalliard's first: do the simplest thing first. (Sorry about that; you'll have to wait a little for your paper...)

(2014-Aug-18, 09:50:28)Chuck Wrote: 1. Environmental effects such as schooling tend to be most pronounced on the least g-loaded sub-tests.

2. The B/W gap shows the reverse pattern.

Ergo: The B/W gap is not due to these types of effects.

Flynn dealt with that argument already. He computes the MCV correlation, then calculates a g-score by summing the subtests weighted by their g-loadings, and compares this g-score with the IQ score (the simple sum of the subtests). He concludes that the difference between the g-score and the IQ score is minor with regard to both Flynn-effect gains and the narrowing of the B-W IQ gap. You see this in Dickens & Flynn's (2006) study of the B-W IQ gap over time. That means the "g argument" is flawed.

And it is true that MCV correlations imply that as the g-loading of a test increases, the B-W gap grows, the educational gain shrinks, the Flynn gain shrinks, and so on. But we already know that IQ tests are very highly g-loaded, which means you can't increase their g-loadings by much anymore.

In Jensen's (1998) The g Factor there is a (bivariate) regression analysis that almost no one has ever cited. It's on page 377. The dependent variable is the B-W gap; the independent variable is the g-loadings. The intercept was -.163. Remember, the intercept is the value of the dependent variable when the independent variable(s) equal zero. In other words, the B-W gap is negative, i.e., in favor of blacks, when the g-loading is zero (assuming linearity holds, that is, no floor or ceiling effects). The regression slope appears to be 1.47, so 1.47 - 0.16 = 1.31. Jensen concludes that when the g-loading of an IQ test is at its maximum (100%), the expected B-W gap would be 1.31 SD, compared to what we see today, mostly around 1.0 or 1.1 SD.

What does that tell us? That 1.1 SD is less real than 1.3 SD? Of course not. Or that increasing the g-loading makes a lot of difference? Not even that. And that's what Flynn attempted to show in his book Where Have All the Liberals Gone? Race, Class, and Ideals in America, page 87, box 14. There is an apparent negative correlation between the g-loadings and the IQ gap between blacks in 2002 and whites in 1947-48.
The IQ of blacks was 104.42 and their GQ was 103.53, which is lower, thus confirming MCV but at the same time killing the "g argument" you both make: the difference between IQ and GQ is a trivial one point. This confusion, the idea that g gains and Flynn gains have different properties just because they load on different factors in a PCA, is similar to what I have pointed out elsewhere about the distinction we should make between correlations and means. Even if PCA "groups" the variables and shows a pattern in which education/FE gains do not load on the component carrying the g-loadings, while heritability and the B-W gap do, it cannot prove that education/FE gains are unreal gains. Back to Jensen's (1998) regression analysis: if the best we can do is widen the gap by a mere 0.2 SD, this is a pretty weak argument.
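To make the two computations above concrete, here is a minimal sketch. The subtest scores and g-loadings in part (a) are made-up illustrative numbers; only the regression coefficients in part (b) (intercept -0.163, slope 1.47) and the observed gap of roughly 1.1 SD come from the figures cited above.

```python
# (a) g-weighted score ("GQ") vs. unweighted score ("IQ") over hypothetical subtests
g_loadings = [0.80, 0.70, 0.60, 0.50]   # hypothetical subtest g-loadings
z_scores   = [0.30, 0.25, 0.35, 0.28]   # hypothetical standardized subtest scores

iq_score = sum(z_scores) / len(z_scores)            # simple (unweighted) mean
gq_score = (sum(g * z for g, z in zip(g_loadings, z_scores))
            / sum(g_loadings))                      # g-loading-weighted mean

# With a highly g-loaded battery the two scores barely differ, which is the
# point of the 104.42 (IQ) vs 103.53 (GQ) comparison above.
print(f"unweighted: {iq_score:.3f}, g-weighted: {gq_score:.3f}")

# (b) Jensen's (1998, p. 377) bivariate regression:
#     B-W gap = intercept + slope * g-loading
intercept, slope = -0.163, 1.47
gap_at_zero_g = intercept + slope * 0.0   # gap when g-loading is zero
gap_at_full_g = intercept + slope * 1.0   # extrapolated gap at 100% g-loading

observed_gap = 1.1                        # typical observed gap, per the text
print(f"predicted gap at g=1: {gap_at_full_g:.2f} SD; "
      f"widening over observed: {gap_at_full_g - observed_gap:.2f} SD")
```

The extrapolation only widens the gap by about 0.2 SD over what is already observed, which is the sense in which maximizing g-loadings "makes little difference" here.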

If you want to show that Flynn gains or educational gains are devoid of g, there is one and only one way to do it: by way of transfer effects, as in Nettelbeck & Wilson (2004) or Ritchie et al. (2012) on the non-transferability of educational gains to reaction times. Every other method is flawed for the purpose of showing whether a score change is on g or not. Even the MGCFA decomposition of g/non-g gains is irrelevant here.

As for the B-W gap, there is not much I can say. Since you're not looking for a pattern of score changes over time, transfer-effect studies clearly can't help. At the least, you can rely on the MGCFA g/non-g decomposition along with its model fit.

Quote:Also, I strongly disagree with his characterization of the evidential status of SH. I consider SH to be well supported.

Then I suppose you disagree with the principle that a g model ought to be preferred over non-g models only on the basis of better model fit. In the social sciences it is well known that a model should be preferred when, and only when, it fits better than the alternatives. If you're not familiar with that, I can tell you that a lot of scientists are not aware of it either. Or maybe they are, but they don't show it in their work. In economics in particular, I have read econometricians complain that a lot of studies attempt to test a particular model but never pit it against the alternatives; if other models can predict the data just as well, it's not obvious which one is the winner and which the loser.

Based on that, I stand by my argument: there is no clear winner or loser among the g and non-g models. You only see g as the winner because you put more weight on the weakest methodologies (MCV and PCA) rather than on the best and recommended method (CFA modeling). That's why I told Dalliard earlier that the evidence for g is positive but weak.

Quote:Also, I explained that different models in fact can be tested with MCV here.

Model testing should involve model fit indices, but that's not what you did.
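As a minimal sketch of what "model fit indices" means for comparing competing models, here is one common index, the BIC. The log-likelihoods, parameter counts, and sample size below are arbitrary placeholders, not values from any actual g vs non-g fit; in practice they would come from fitting both models (e.g. via MGCFA) to the same data.

```python
import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L). Lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

n_obs = 1000                            # hypothetical sample size
bic_g     = bic(-5230.0, 24, n_obs)     # hypothetical g model (fewer parameters)
bic_non_g = bic(-5225.0, 30, n_obs)     # hypothetical non-g model (more parameters)

# The point is the comparison itself: a model is preferred only if it fits
# better than the alternatives, not merely because it fits the data at all.
winner = "g model" if bic_g < bic_non_g else "non-g model"
print(f"BIC(g) = {bic_g:.1f}, BIC(non-g) = {bic_non_g:.1f} -> prefer the {winner}")
```

With real data one would also inspect indices such as CFI and RMSEA; the BIC is used here only because its formula is simple enough to show inline.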

-----

Concerning Piffer's method, I don't understand why some of you here reject it without giving any argument whatsoever. Just because it's not "vetted" does not mean the method is flawed. To prove it flawed, you should explain what is wrong with it. I have always disliked arguments from authority, and you know that.

By the way, you keep writing "Rietvald" but it's "Rietveld"! The name is spelled correctly in the reference list of the latest version of the paper (perhaps specify at the top of the first page that it's a DRAFT), but it is misspelled on page 35.

Also, when someone makes changes, especially to a lengthy article, please make explicit which pages have been modified, or mark the changes in color, or whatever. I don't want to re-read the entire article.

And I'm not even sure what has been changed here. From what Dalliard says, he has added several studies (though I don't see Ang et al. 2010). I found those passages already (CTRL+SHIFT+F is helpful sometimes), but what about my comments on measurement invariance and g models? I want to know what the author thinks of this issue, and whether he is planning to rewrite the relevant passages according to my comments, before I give my final opinion.