Emil, O.K., I think I will give you my vote, because I don't see any particular flaws in the article, just some points I don't recommend, which I explained in my earlier comment. I have replied to your comment below. Let's see whether you disagree, but whatever the case, I see no reason to disapprove the publication.

Quote:You were looking at the Spearman rho only. The Pearson r is .064. The Spearman rho has p=.072, so perhaps a fluke.

I would never trust the p-value if I were you. The result is not significant because the sample is not large enough, even though the size is not small. I recommend not being mistaken about what a significance test is; it seems to me that a lot of people don't know what it is. I have often seen people find a correlation of, say, between 0.1 and 0.3 but with p larger than 0.05, and conclude "no correlation". That is wrong. A large p means only that, whatever your correlation is, your N is not large enough to have much confidence in the result. The reason I dislike the p-value is that it cannot add much new information: the p-value (or X²) is a function of just two things, sample size and effect size. You already have those two pieces of information, so the p-value adds nothing worthy of consideration.
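To illustrate that the p-value carries no information beyond sample size and effect size, here is a minimal Python sketch with simulated data (the data and numbers are purely illustrative, not taken from the paper): the p-value that scipy reports for a Pearson correlation can be reconstructed from r and N alone.

```python
import numpy as np
from scipy import stats

# Simulated data, purely for illustration
rng = np.random.default_rng(0)
n = 90
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)

r, p = stats.pearsonr(x, y)

# Reconstruct p from r and n alone via the t statistic:
# t = r * sqrt(n - 2) / sqrt(1 - r^2), two-sided test with n - 2 df
t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
p_from_r_and_n = 2 * stats.t.sf(abs(t), df=n - 2)

print(p, p_from_r_and_n)  # the two values agree
```

Since p is a deterministic function of (r, n), reporting it alongside r and N adds nothing new.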

And yes, you note that there is a difference between r and rho. That may mean that an outlier is killing your Pearson correlation.
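A quick sketch of how a single outlier can produce such a gap, again with made-up data: Pearson's r collapses, while Spearman's rho, being rank-based, moves much less.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = x + rng.normal(scale=0.5, size=50)   # clear positive relationship

r_clean = stats.pearsonr(x, y)[0]
rho_clean = stats.spearmanr(x, y)[0]

# Add one extreme, discordant outlier
x_out = np.append(x, 10.0)
y_out = np.append(y, -10.0)

r_out = stats.pearsonr(x_out, y_out)[0]    # drops dramatically
rho_out = stats.spearmanr(x_out, y_out)[0] # only mildly reduced
```

The outlier can only occupy the extreme ranks, so its influence on rho is bounded, whereas its influence on r is not.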

Quote:I have written some more in the text about PC2, but kept it in the matrix to show that it is a nonsense factor.

I know, but that is not what I said. My point was that it is not necessary to correlate PC2 with the other variables.

Quote:It is because some variables measure good things and others negative things (in the context of well-doing on the group in Denmark). We can reverse variables so that positive values are always better and negative always worse, but it makes no difference for the math.

It will make a big difference for the interpretation; I am sure I am not the only one who finds your Table 11 difficult to read. Generally, a "g" factor has all positive loadings. Sometimes practitioners even remove the variables that have zero or negative loadings on PC1, because they want them all to be positive.
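Reversing a variable so its loading becomes positive changes only the sign of that one loading and nothing else in the math, which is exactly why it is safe and helpful for readability. A small numpy sketch with simulated data (variable structure and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
g = rng.normal(size=(n, 1))
# Three indicators of one factor; the third measures a "bad" outcome
X = g @ np.array([[0.8, 0.7, -0.6]]) + rng.normal(scale=0.5, size=(n, 3))

def pc1_loadings(X):
    """Loadings of the variables on the first principal component."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    v = vecs[:, -1]                      # eigenvector of the largest eigenvalue
    if v[np.argmax(np.abs(v))] < 0:      # fix the arbitrary sign of the PC
        v = -v
    return v * np.sqrt(vals[-1])

load_orig = pc1_loadings(X)

X_rev = X.copy()
X_rev[:, 2] *= -1                        # reverse-score the third variable
load_rev = pc1_loadings(X_rev)
# Same math: loadings identical, except the third one flips sign
```

After reversal, all loadings are positive and the table reads like an ordinary "g"-style factor solution.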

Quote:The reason to use R² in this case is that SPSS calculates the adjusted R² but not the adjusted R. In multiple regression, just adding a variable generally increases the R value, even when it is a nonsense, randomly distributed variable. This is because the regression abuses random fluctuation in the data.

Well, a lot of researchers interpret it like that. They add a new variable to the regression, and this variable correlates well with the dependent variable, and yet the increase in R² is small, so they conclude the new variable is not important. Given that, I don't see why R² should be trusted. To get the best picture of the effect of any given variable, the best way is to examine its regression coefficient, standardized or not. That is better than R² or R.
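A small sketch of why the coefficient is more informative than the R² movement, using simulated data (the names `x1` and `junk` are illustrative): adding a pure-noise predictor mechanically nudges R² upward, but its standardized coefficient stays near zero, telling you directly that it contributes nothing.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
junk = rng.normal(size=n)                # pure noise, unrelated to y
y = 0.5 * x1 + rng.normal(size=n)

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_one = r_squared(x1, y)
r2_two = r_squared(np.column_stack([x1, junk]), y)  # never lower than r2_one

# Standardized coefficients: regress z-scores on z-scores
X = np.column_stack([x1, junk])
Zx = (X - X.mean(axis=0)) / X.std(axis=0)
zy = (y - y.mean()) / y.std()
betas, *_ = np.linalg.lstsq(Zx, zy, rcond=None)     # betas[1] is near zero
```

Adjusted R² corrects for the mechanical increase from adding predictors, but the standardized coefficient answers the actual question about each variable's effect.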

P.S. Regarding the small size of the numbers in your tables: I was reading the 2nd version, not the 1st. Even in the 3rd draft, the numbers are all smaller than the letters in your text.