Thank you for reviewing again, Dailliard.


Quote: 1) "Recent studies show that criminality and other useful socioeconomic traits"

Removed "useful". The editor must have added it.

Quote: 2) Use the equal or greater than sign (≥) rather than >=.

Done.

Quote: 3) "Are some predictors just generally better at predicting than others, or is there an interaction effect between predictor and variables?"

Not sure what you mean by interaction here. The question is whether any of the predictors have unique predictive power.

An interaction would mean that a given predictor P1 is better at predicting variable V1 than P2 is, but that P2 is better at predicting V2 than P1 is: a predictor x outcome variable interaction. This would show up as correlations below 1 (in absolute value) between the vectors of prediction correlations. However, surprisingly, they were all close to 1.

For instance, one might have posited that Islam prevalence should be good at predicting crime due to incompatible religion/culture, but that it has little influence on male unemployment. This wasn't found: the IQ and Islam prediction vectors correlate |.99| (shown in Table 2). Surprising to me. I remember checking for this when I first got my hands on the Danish data. Having read a lot about intelligence and education, I would have expected IQ to be a better predictor than Islam, but perhaps the other way around for crime. However, it was pretty linear there too. Actually, I hadn't thought of this way of testing it before now. Perhaps I should add it to the reanalysis of the Danish data, which is much better suited for testing it since the number of outcome variables is much larger (9 vs. 25).
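To make the test concrete, here is a minimal R sketch with made-up data (variable names and effect sizes are invented for illustration, not from the paper): a real predictor x outcome interaction shows up as a correlation well below |1| between the two vectors of prediction correlations.

```r
#sketch with made-up data: correlate the prediction-correlation vectors
set.seed(1)
n = 100
IQ = rnorm(n)
Islam = rnorm(n)
#three hypothetical outcomes, built so Islam matters for crime
#but IQ matters for unemployment and income (an interaction)
outcomes = data.frame(crime        = -1 * IQ + 2.0 * Islam + rnorm(n),
                      unemployment = -2 * IQ + 0.1 * Islam + rnorm(n),
                      income       =  2 * IQ - 0.1 * Islam + rnorm(n))
r.IQ    = sapply(outcomes, function(y) cor(IQ, y))    #IQ's prediction correlations
r.Islam = sapply(outcomes, function(y) cor(Islam, y)) #Islam's prediction correlations
cor(r.IQ, r.Islam) #near |1| = no interaction; clearly below = interaction
```

With the interaction built in, the vector correlation lands well below |1|; in the actual data the corresponding value was |.99|, i.e. no interaction to speak of.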

How would you like me to change this?

Islam does not correlate highly with the others (data from the DF.cor object in R):

Islam x IQ = -.27

x Altinok = -.43

x GDP = -.14

x International S = -.33

so it can be combined fruitfully with the others in multiple regression. For instance, I tried IQ+Islam to predict S scores in Norway (imputation 3). Adjusted R2 is .59, i.e. R = .77, which is higher than that of any predictor alone (if one doesn't count S scores from Denmark as a predictor; that one has .78).
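The point can be sketched in R with simulated data (the variable names and numbers below are made up; the real analysis uses the Norwegian dataset): two weakly correlated predictors that each carry signal yield a higher multiple R than either alone.

```r
#sketch with made-up data: combining two near-orthogonal predictors
set.seed(1)
n = 30
IQ = rnorm(n)
Islam = rnorm(n)                          #nearly uncorrelated with IQ by construction
S = 0.6 * IQ - 0.5 * Islam + rnorm(n, sd = 0.6) #hypothetical S scores
fit = lm(S ~ IQ + Islam)
summary(fit)$adj.r.squared                #adjusted R2; multiple R is its square root
cor(S, IQ)^2                              #compare with each predictor alone
cor(S, Islam)^2
```

In the actual data the combined model reached adjusted R2 = .59 (R = .77), above either zero-order correlation.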

Quote: 4) "using multiple imputation8 to impute data to cases with 1 or fewer missing values"

How is it possible to have fewer than 1 missing values?

If there are no missing values for a case, i.e. 0 missing values. Check the relevant code:

Code:

```r
#count NA's
DF.norway.missing = apply(DF.norway, 1, is.na) #matrix with one column of T/F per case
DF.norway.missing = apply(DF.norway.missing, 2, sum) #sums the number of missing values per case
DF.norway.missing.table = table(DF.norway.missing) #tabulates them

#subsets
DF.norway.complete = DF.norway[DF.norway.missing == 0,] #complete cases only, reduces N to 15
DF.norway.miss.1 = DF.norway[DF.norway.missing <= 1,] #keep cases with 1 or fewer missing values, N=18
DF.norway.miss.2 = DF.norway[DF.norway.missing <= 2,] #keep cases with 2 or fewer missing values, N=26
DF.norway.miss.3 = DF.norway[DF.norway.missing <= 3,] #keep cases with 3 or fewer missing values, N=67
```

Quote: 5) "Table 4 shows description statistics"

Fixed.

Quote: 6) "the squared multiple correlation of regression the first factor on the original variables"

Word missing or something.

Fixed to: "the squared multiple correlation of regressing the first factor on the original variables".
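For reference, that quantity can be sketched in R with made-up data (the dataset and factor model below are invented for illustration): the squared multiple correlation of the first factor is computed from the loadings L and the correlation matrix R as L' R^-1 L, a standard measure of factor score determinacy.

```r
#sketch with made-up data: squared multiple correlation of regressing
#the first factor on the original variables, via L' R^-1 L
set.seed(1)
X = as.data.frame(matrix(rnorm(1000), 200, 5))
X[[2]] = X[[1]] + rnorm(200, sd = 0.5) #induce a shared factor
fa.fit = factanal(X, factors = 1)      #base R maximum-likelihood factor analysis
L = fa.fit$loadings[, 1]               #loadings on the first factor
R = cor(X)                             #correlation matrix of the original variables
smc = as.numeric(t(L) %*% solve(R) %*% L)
smc #between 0 and 1; high values mean the factor is well determined by the variables
```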