
[ODP] Crime, income and employment among immigrant groups in Norway and Finland

#41
"Specificity" is preferable to "specificness". Also, the predictive value of the variables differs quite a bit between studies, and it would be nice to address that. I am not competent to assess the statistical techniques used in this paper.
#42
I asked Wicherts to review the paper, but he declined due to time constraints. I will ask Meisenberg too, but I was looking to get a non-hereditarian on the review board.
#43
(2014-Sep-18, 15:26:09)Emil Wrote: P1 = predictor 1, P2 = predictor 2, etc.
V1 = outcome var 1, V2 = outcome var 2, etc.


I understand the labels, but I quoted the sentence "An interaction would be that a given predictor P1 is better at predicting variable V1 than P2, but that P2 is better at predicting V2 than P1" for another reason: I don't understand its meaning. By referring to V1 and then V2, you have two different regression models in mind.

Originally, the question was asked by Dalliard:

Quote:3) "Are some predictors just generally better at predicting than others, or is there an interaction effect between predictor and variables?"

Not sure what you mean by interaction here. The question is whether any of the predictors have unique predictive power.

What you describe is whether the inclusion of interaction terms affects the relative strength of your independent variables within the same regression, not across two different regressions.

Another thing I don't understand: an interaction between predictors is meant to answer the question of whether adding an interaction term such as P1*P2, with or without polynomial terms (e.g., P1+P2+P2^2+P2^3+P1*P2+P1*P2^2+P1*P2^3), changes your coefficients. If the slopes are curvilinear rather than linear, the addition of such terms will fit the data better. In general, when an interaction is meaningful, the main effects (i.e., P1 and P2) are attenuated. Even if one of the two predictors is more attenuated than the other, I don't think that is relevant here. The interpretation of the main effects becomes entirely different once you add interactions: with an interaction, P1 and P2 are the effects net of the interaction, but the interaction term itself includes and confounds the effects of both.

I remember several months ago when I attempted to perform a regression with Wordsum (dependent) and race + SES (independent) variables. With the race*SES interaction included, the coefficient of race was near zero. A plot of the predicted values from the model revealed that at very low SES levels the BW gap in Wordsum was negligible, but that it increases considerably as SES increases. In such a situation, how can we say that race has become less important?

You cannot say that SES is more important than race just because the interaction term nullifies the main effect of race, because the interaction term confounds the two effects. (when I say "more important", I am of course talking about the direct effects of the independent variables)
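To illustrate the attenuation point above, here is a minimal simulated sketch in Python (hypothetical data and made-up variable names, not the actual Wordsum analysis). When the true model contains a group*SES interaction and SES is not centered, the group's "main effect" in the interaction model shrinks toward the gap at SES = 0, even though the gap is large at observed SES levels:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n).astype(float)  # hypothetical binary group indicator
ses = rng.normal(5.0, 1.0, n)                # hypothetical SES score, not centered
# True model: no gap at ses = 0, but the gap widens as SES rises
y = 0.5 * ses + 0.8 * group * ses + rng.normal(0.0, 1.0, n)

def ols(cols, y):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_main = ols([group, ses], y)              # main effects only
b_int = ols([group, ses, group * ses], y)  # with the interaction term
# b_main[1] is large (about 0.8 * mean SES = 4), while b_int[1], the
# "main effect" of group net of the interaction, is near zero.
```

The near-zero coefficient does not mean the group variable stopped mattering; it is the (extrapolated) gap at SES = 0, while the interaction term carries the rest.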
#44
(2014-Sep-23, 21:08:42)menghu1001 Wrote: [quoted in full above]


Meng Hu,

Here's a very simple scenario:

Imagine we have three predictors and five outcome variables.

Suppose we obtain their prediction vectors, i.e. the correlations between each predictor and the five outcome variables.

Code:
vec.a = c(.1, .2, .3, .4, .5) #prediction vector a
vec.b = c(.15, .15, .4, .6, .8) #prediction vector b
vec.c = c(0.26, -1.12, -0.33, -0.34, 0.06) #prediction vector c
DF.vec = cbind(vec.a, vec.b, vec.c) #combine the vectors into a matrix
DF.vec.cor = cor(DF.vec) #correlation matrix of the vectors
round(DF.vec.cor, 2)


Which gives:

Code:
      vec.a vec.b vec.c
vec.a  1.00  0.97  0.11
vec.b  0.97  1.00  0.33
vec.c  0.11  0.33  1.00


So we see that the r between a and b is very high: they function in much the same way, though they may not be equally strong predictors. However, c is clearly very different and has low r's with the other two. This means there is a predictor x outcome variable interaction.
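For readers who prefer Python, the same toy calculation can be sketched as follows (a straight transcription of the R snippet above, using numpy):

```python
import numpy as np

# The three hypothetical prediction vectors from the R snippet above
vec_a = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
vec_b = np.array([0.15, 0.15, 0.4, 0.6, 0.8])
vec_c = np.array([0.26, -1.12, -0.33, -0.34, 0.06])

# Correlations between the prediction vectors themselves
# (not predictor x outcome correlations); rows are treated as variables
R = np.round(np.corrcoef([vec_a, vec_b, vec_c]), 2)
```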

I am only talking about correlations, not regression models. I am not talking about adding interaction variables (e.g. a*b) to regression models.

I am also not talking about predicting unique parts of the variance in multiple regression.

Apparently, the term has confused some readers. What term would you prefer me to use? Perhaps I should just talk about testing the generality vs. specificity of the predictors' predictive power?

---

Also, Dalliard's point made in the review of the International S factor paper about the use of variables that have not been reversed holds here as well. If one reverses them so that predictors always predict something better with a positive value, the correlations will get smaller.

I have used the data as given by the sources and not biased them in any way. Reversing them arguably makes the results less interpretable e.g. using Islam prevalence to predict low-crime as opposed to high crime.

However, suppose one really wants to minimize the correlations. If one reverses variables consistently so as to make as many correlations positive as possible, the new mean absolute correlations are .54 (Norwegian datasets) and .90 (Danish). The Danish dataset is much better suited since it has much less sampling error (all 25 variables have N near 70). So even arguably biasing the results against the hypothesis yields a strong positive outcome.
#45
Someone has created the Open Science Framework. It is possible to create projects there and have it host the files. This is faster than using the forum to upload stuff and potentially saves me from having to upgrade the server for more space.

I have created a repository for this project: https://osf.io/emfag/
#46
I asked Meisenberg to review the paper.

Quote:The data and results seem basically sound. The correlations between predictors look unusually high. I am more used to correlations of about 0.7 between “development indicators” such as IQ and lgGDP, but it seems you base these correlations only on those countries that have sent sufficiently large migrant groups, therefore sample sizes are small. Mainly, there are many little ways in which this paper can be improved. Especially, you have to make sure that your style is clear so that readers can follow it without irrelevant cognitive effort. I am attaching the file with lots of sticky notes that have specific suggestions how the writing can be improved.

Gerhard

His further comments are here: https://osf.io/7jaxm/
#47
Replying in the order of the notes in the PDF.

Quote:Do you mean variables pertaining to the country of origin, or the destination country? This should be made clear in this sentence.

Made it more clear.

Quote:These two sentences are linguistically suboptimal. Better: This is because part of the reason...is that the people living there possess behavioral traits that are unfavorable for the generation of wealth. When they move..., they will still possess these traits,.... Also, does the hypothesis specify the mechanism of behavioral transfer (genetic or cultural), or is it agnostic about mechanisms? But perhaps you put this in the discussion section.

Sentences seem fine to me. Added a note about the agnosticism of the spatial transferability hypothesis about the cause of the stability of traits.

Quote:Does this last paragraph refer only to tertiary educational attainment? In that case it should not be a separate paragraph. Also, does the website define what specifically landbakgrunn means? For example, how would it list someone with Norwegian citizenship and a Moroccan mother and Norwegian father? One of the central questions in migration research is in what way first-, second- and third-generation immigrants differ from each other and from the natives of the host country.

Joined it with the last paragraph.

Added a footnote about the meaning of "landbakgrunn". I could not find any clarification on their website. It is probably a legalistic definition akin to Denmark's. There is no information about immigrant generation. Presumably, they are mostly 1st generation immigrants.

Quote:For simple correlations we don't really have predictors, only correlates. "Predictor" implies causality, which we cannot infer from correlation alone. In multiple regression, the term predictor should be avoided as well ("independent variable" is the better term), because you can only say that the model predicts the outcome (the "dependent variable" in a statistical sense), again without proving causality. Check this throughout the following text.

"Predictor" does not imply causality; it is simply a shorter synonym for "independent variable". Added a footnote about this.

Compare: https://en.wikipedia.org/wiki/Dependent_...s_synonyms

Quote:This would be expressed more clearly if you state that you need 1. a sufficiently large sample of countries so that country comparisons can produce statistically significant results, and 2. a sufficiently large number of individuals representing each country to reduce random sampling errors, and that there is a tradeoff between these two requirements.

I think it is fine the way it is. I don't care that much about statistical significance. I am interested in effect sizes. Standard statistical significance tests are not suited for grouped data anyway.

Quote:Observations about the results in this table: The correlations of crime and education with IQ and lgGDP that you describe in Norway and Finland are very similar to those at the country level worldwide: Crime, and especially violent crime, correlates more with IQ than with lgGDP. In regression models, lgGDP is usually not a significant predictor of crime, but IQ and racial diversity are the important independent predictors. Education (measured as educational degrees or average years in school) correlates more with lgGDP than with IQ, most likely because rich countries can afford extensive school systems. You may want to discuss your results on this background, if you don't do it in your discussion section already.

I would rather not. The crime variables have small samples, which may affect their strength. The Danish datasets are better suited for this kind of comparison because they have a larger sample of countries and are both age-controlled (by age groups) and sex-controlled (men only). See the previous paper: http://openpsych.net/ODP/2014/05/educati...n-denmark/

Quote:Better: Correlations of 1 indicate perfect prediction (in the statistical sense), and 0 means no relationship.

This is not right. See the pedagogical example above: http://openpsych.net/forum/showthread.ph...04#pid1704

These correlations are not predictor x outcome variable correlations, they are predictor vector intercorrelations.

Quote:This could be expressed much clearer. What you seem to mean is collinearity, which means correlations among the independent variables in a regression model. Better write that you want to determine the correlations among those country-level variables that predict migrant outcomes in Norway and Finland. Also, in the table you should indicate the number of countries on which these correlations are based. You mention N = 9 in the text, but it is easier for readers when they don't have to search the text when trying to make sense of the table.

How would you rewrite it? It seems clear to me. I am waiting for the input of Dalliard and Meng Hu about what terminology they prefer me to use, since it seems that the word "interaction" causes misunderstandings.

The table does not concern collinearity as such (one could assess that with variance inflation factors). The paper does not include discussion of multiple regression results, as that seemed uninteresting to me.

Added information about sample size to table captions for both predictor vector analyses.
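For what it's worth, here is a minimal sketch of how one could compute variance inflation factors from a predictor correlation matrix (the numbers are hypothetical, not the paper's data). For standardized predictors, VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing predictor j on the others; equivalently, the VIFs are the diagonal of the inverse correlation matrix:

```python
import numpy as np

# Hypothetical correlation matrix: two highly correlated predictors
# plus one largely independent predictor
R = np.array([
    [1.0, 0.9, 0.1],
    [0.9, 1.0, 0.1],
    [0.1, 0.1, 1.0],
])

# VIFs are the diagonal of the inverse correlation matrix
vif = np.diag(np.linalg.inv(R))
# The two collinear predictors get VIFs above 5; the independent one stays near 1
```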

Quote:In this place, you can delete "or fewer".

Deleted.

Quote:Syntax of this sentence.

Syntax seems fine to me.

Quote:To reduce cognitive effort needed to read and understand this, it would be better to write something more like "I want to know how socioeconomic outcomes of migrants in Norway (measured as S factor scores) relate to country-level variables measured in the migrants' countries of origin."

Added "country-level" to the sentence. The term "S factor score" is used throughout the paper and does not need further explanation. However, I changed the abstract to use the non-abbreviated term, as well as section 4, where the international S scores are introduced. Note that section 5 also mentions the abbreviation.

Quote:There are similar differences for lgGDP, IQ and Altinok. But none of them are very large, and because of the limited number of countries they may be chance findings.

There are not. E.g. IQ x local S is .54 for the imputed Danish dataset and .59 for Norway. You must mean the smaller datasets. Note that the IQ predictor's strength decreases with increasing sample size, while the Islam one doesn't change much; the others tend to increase. The differences in the smaller samples are probably statistical artifacts.

Quote:Here, make clear whether you mean the S factor calculated for immigrant groups in the host country (calculated from income, crime rate etc), or for the countries of origin (calculated from lgGDP, national IQ etc).

Made it more clear.

----

An updated draft can be found in the paper repository. https://osf.io/g2fsr/
#48
"Are some predictors just generally better at predicting than others, or is there an interaction effect between predictor and variables? An example of an interaction effect would be that Islam is better than IQ at predicting crime, while IQ is better at predicting educational attainment."

Interaction in regression analysis means that the predictor variables interact non-additively. For example, if the relation between predictor A and outcome variable Y varies at different levels of predictor B, then there's A x B interaction. Interaction means that aside from the additive main effects of the predictors there are interactive effects between them. To say that there is an interaction between predictor and outcome variables is a misuse of terminology.

You should include also predictor intercorrelations in Table 2 (below the diagonal) because reporting just the correlations between the "prediction vectors" is confusing.
#49
Predictor intercorrelations can be seen in the supplementary material. E.g. here: https://osf.io/3752j/ Rownames are missing due to the way the export function works. They are in the same order as the colnames. So e.g. IQ x Altinok is .91. The 4 non-Islam predictors have high intercorrelations and so are not much use together in MR. Islam does not correlate highly, so it can be combined with one of them in MR.

I will work on a version that fixes the confusion with nonstandard use of "interaction".
#50
(2014-Sep-25, 18:10:20)Emil Wrote: [quoted in full above]


So add the predictor correlations to the paper. They are essential for interpreting the results.