
[ODP] And the next president of the United States...

#1
Journal:

Open Differential Psychology

Authors:
Bryan J. Pesta

Title:
And the Next President of the United States is: Predicted by Race-adjusted State IQ and Well-being


Abstract:
I report U.S. state-level relationships between measures of IQ, race, well-being (e.g., income, health), and the results of the 2016 U.S. presidential election. Based on prior research (Pesta & McDaniel, 2014), I predicted that IQ and race would be relatively unrelated to election results in bivariate analysis. Instead, a mutual suppression effect was expected, such that IQ would more strongly predict election outcomes when controlling for race, and vice versa. The predicted pattern appeared; so too did mutual suppression effects between racial composition and most but not all measures of state well-being (i.e., religiosity, crime, education, health, and income). The suppression patterns consistently revealed that after adjusting for state racial composition, blue states were smarter and more prosperous than red states. I conclude that state-level conservatism is negatively associated with IQ once racial composition is controlled.


Length:

~5465 words, ~26 pages.

I thought this paper was accepted at Intelligence, but it was rejected after the second revise and resubmit. If appropriate, I'm willing to share feedback from prior reviewers. At any rate, please consider this for publication here.

Sincerely,

Bryan


Attached Files
.docx   Trump_rev2.docx (Size: 58.08 KB / Downloads: 326)
.docx   sup_mat.docx (Size: 14.45 KB / Downloads: 307)
#2
Hi Bryan,

I recall this paper. Note that the result of this paper does not hold when one analyses counties (n=3100), so I would recommend rewriting this to be more weakly worded. Otherwise, it would get immediately disproved by the county paper.

Aside from that, I'll read your paper and get back to you with my thoughts.
#3
(2017-Jun-12, 22:27:20)Emil Wrote: Hi Bryan,

I recall this paper. Note that the result of this paper does not hold when one analyses counties (n=3100), so I would recommend rewriting this to be more weakly worded. Otherwise, it would get immediately disproved by the county paper.

Aside from that, I'll read your paper and get back to you with my thoughts.


Hi Emil,

Thanks for the comment. This wasn't something prior reviewers brought up, but I had thought about how "reliable" my results were, and therefore how strongly or weakly I could discuss them.

It occurred to me that I'm not trying to estimate population values from some sample. I have the population. There are no more cases to add, and so this study is completely descriptive rather than inferential. In fact, I'm not sure my use of significance testing and p-values is even appropriate (I included them as interpretational aids, in that these are what journal readers expect to see).

County-level analysis producing different results is interesting, but it does not invalidate the results I found when using U.S. states as the unit of analysis. In other words, the numbers are the numbers. The population parameters I reported are what they are. A remaining puzzle, then, is why the results differ across units of analysis.

I could be wrong about this, and am interested in what others here think.

Bryan
#4
Bryan,

Please find attached my commented version of your paper.

Note that your study files must be uploaded to a repository. Currently, the data are entirely missing. As before, I recommend that you create a user account at OSF, create a repository, and upload the files.

The paper has no figures. Perhaps you can find a way to visualize your main suppression finding?

https://osf.io/


Attached Files
.docx   Pesta paper Emil comments 2017-07-02.docx (Size: 37.32 KB / Downloads: 240)
#5
(2017-Jul-03, 04:55:07)Emil Wrote: Bryan,

Please find attached my commented version of your paper.

Note that your study files must be uploaded to a repository. Currently, the data are entirely missing. As before, I recommend that you create a user account at OSF, create a repository, and upload the files.

The paper has no figures. Perhaps you can find a way to visualize your main suppression finding?

https://osf.io/


Hi Emil,

Thanks for your review. I'm in somewhat of a dilemma here. I probably shouldn't have submitted this paper until mid-August, as then I'd have the proper time to deal with responses to reviews. I'm in the middle of coordinating and then running an event that ends August 18. It's now apparent that I'll have little time to deal with this paper till then.

Given that, should I retract, or would it be ok to leave it unattended for a month?

If I did retract, could I resubmit later?

Thank you for your consideration,

Bryan
#6
(2017-Jul-18, 16:10:41)bpesta22 Wrote:
Hi Emil,

Thanks for your review. I'm in somewhat of a dilemma here. I probably shouldn't have submitted this paper until mid-August, as then I'd have the proper time to deal with responses to reviews. I'm in the middle of coordinating and then running an event that ends August 18. It's now apparent that I'll have little time to deal with this paper till then.

Given that, should I retract, or would it be ok to leave it unattended for a month?

If I did retract, could I resubmit later?

Thank you for your consideration,

Bryan


You can leave it unattended for some time. I myself do this. We don't actually have any rule in place about submissions timing out. However, we did once move a paper to the 'abandoned' category after >1 year without any author replies despite multiple contact attempts.
#7
(2017-Jul-19, 04:25:39)Emil Wrote:
You can leave it unattended for some time. I myself do this. We don't actually have any rule in place about submissions timing out. However, we did once move a paper to the 'abandoned' category after >1 year without any author replies despite multiple contact attempts.


Emil,

I have some time now to work on this, and I hope to have a response to your review soon. For now, though, I've attached the data. I'm happy to host this somewhere else as well, if that's required.

Bryan


Attached Files
.xlsx   trump.xlsx (Size: 14 KB / Downloads: 232)
#8
Emil,

Here is my reply to your review.

Best,

Bryan


Attached Files
.docx   Response to Reviewer Kirkegaard.docx (Size: 18.09 KB / Downloads: 252)
#9
Hi Bryan,

I'll reply in thread since that's easier to follow. Anything not commented on was fine.


Quote:Notes. I started by replying to your Word comments in Word, but that was hard to follow. So, I decided to do my replies here as an outline. I will upload the entire revised paper once I know what changes other reviewers suggest. Also, I am now rewriting various sections of the paper (and changing the title) to sound less dramatic / conclusive about the results.


Sounds good.

Quote:2. Actually, it’s about how the states vote in the presidential elections cf. first past the post. The states don’t have to be very skewed politically for this to happen. I’m not sure I completely understand your first point here. I agree it’s the states—via how its residents vote—that are either blue or red. If it helps, here is the dictionary definition of a blue state: A US state that predominantly votes for or supports the Democratic Party. Please let me know if I’m not addressing your concern.

My point is that a state can be consistently blue/red while the actual voter margin is fairly close to 50%.

https://en.wikipedia.org/wiki/Red_states...lue_states

Example: Montana is listed as a consistent red state (4/4 last elections to R). However, the average voter margin is only 3-10%, meaning that results like 45-55% and 48.5-51.5% are seen. Quite small difference in actual voter behavior.

This red/blue state terminology is an artifact of the bizarre voting system (FPTP).

Quote:4. Results which were also found by Kirkegaard 2015a, b. Now cited.

You don't have to cite all my work; those were just suggestions. It's unethical for reviewers to require that submissions cite the reviewer's own work. Cite it if you think it is relevant.

Quote:5. Very old citations [on suppression]. Are there any newer reviews? When I originally submitted this to a different journal, I relied on Pesta and McDaniel (2014) as the background citation on suppression effects. A reviewer there wanted more details about these effects, including mention of some classic papers. Thus, my paper evolved to discussion of that. This is an example of the rabbit hole one goes down when resubmitting a manuscript to a different journal. I hope you won’t ask me to go back up it!

Haha. Yes. This is alright.

Quote:6. I think this section should clarify matters of unidimensional vs. multi-dimensional measures of politics vs. self-identification with labels/parties. Messy findings result from messy conceptualization. Noah’s studies used a 2-dimensional conceptualization. Good point. This was a hard section to write, and I will work on making this distinction in the revision.

The dimensionality of political preferences is not very well researched I'm afraid. Some references of interest:

- http://journals.sagepub.com/doi/10.1177/...6512436995
- http://journals.sagepub.com/doi/abs/10.1...6511434618

US research on the topic is particularly annoying in that it is almost always reduced to a single left-right/liberal-conservative dimension. Another artifact of the FPTP system I guess.

Quote:8. Is this with or without excluding proportions to third parties? It matters in some states. The non-perfect correlation here means that the non-main parties were not excluded. Does it change results if they are? In my data, (100% - %Trump - %Clinton) = %Third Party votes. Across the 50 states, this mean / residual value was 5.97 (SD = 3.54). But, it cannot be allocated to some unified “independent” vote. In California, for example, although 93.4% of residents voted for either Trump or Clinton, the remaining percentages were scattered across several other, distinct candidates / parties. Specifically, 3.4% voted Libertarian; 2.00% voted Green, 0.28% voted “independent,” and 1.00% of votes were coded as “others” (https://en.wikipedia.org/wiki/United_Sta...tion,_2016).


In my data, the third party vote correlated nominally with percent Clinton (r = -.28, p = .05). It also correlated strongly with percent Black (r = -.52), and with Health (r = .44). The percent Clinton correlation, however, was attenuated to .11 when controlling for percent Black. Thus, I don’t see much value in adding data about third party votes to my manuscript.

Maybe add a brief note to a robustness section. Good practice to have a robustness section that briefly summarizes what happens under various alternative method choices. Don't want to end up like this!
http://www.nature.com/news/crowdsourced-...rk-1.18508
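For what it's worth, the partial correlation you report takes only a few lines to reproduce, so a robustness note would be cheap to add. A minimal sketch in Python, assuming the data are in trump.xlsx; the column names (clinton, third_party, black) are my placeholders, not necessarily yours:

```python
# Sketch of the partial-correlation check: r(%Clinton, third party)
# controlling for %Black. Column names are placeholders.
import numpy as np
import pandas as pd

df = pd.read_excel("trump.xlsx")

def partial_r(x, y, z):
    """Correlate x and y after residualizing both on z (with intercept)."""
    Z = np.column_stack([np.ones(len(z)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Per your numbers above, this should come out around .11.
print(partial_r(df["clinton"].values, df["third_party"].values,
                df["black"].values))
```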

Quote:12. Unclear if adjusted for overfitting [page 13]. I don’t think the analyses I’m running are “inferential,” in that I’m not trying to estimate population values from some sample. Instead, I have data for an entire population—the 50 U.S. states. Statistical techniques used to make a sample more representative of a population don’t seem relevant / appropriate here.

Nonetheless, my understanding is overfitting is a concern with small sample sizes. I agree that N = 50 is small, but each case is quite “stable” in that it is an aggregate number based on the voting patterns of millions of people in each state. In sum, I don’t think overfitting is a concern here, but I will again defer to those with more knowledge of the topic.

The discussion of this is more philosophical. I lean towards the other view, but it's a judgment call.
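If you want an empirical handle on it rather than a purely philosophical one, leave-one-state-out cross-validation is cheap at n = 50. A sketch, again with placeholder column names:

```python
# Leave-one-state-out cross-validation as a rough overfitting check.
# Column names (trump, iq, white) are placeholders.
import numpy as np
import pandas as pd

df = pd.read_excel("trump.xlsx")
X = df[["iq", "white"]].values
y = df["trump"].values

n = len(y)
preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i                      # drop state i
    A = np.column_stack([np.ones(n - 1), X[mask]])
    b = np.linalg.lstsq(A, y[mask], rcond=None)[0]
    preds[i] = np.concatenate([[1.0], X[i]]) @ b  # predict the held-out state

# If this is far below the in-sample R^2, overfitting is a real concern.
print("LOO R^2:", 1 - np.sum((y - preds) ** 2) / np.sum((y - y.mean()) ** 2))
```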


Quote:14. Why [is well-being the common dimension]? What about reverse causation? What about common cause? Good question, and I now mention these other possibilities in the discussion.

In my thinking, we know that scores on seemingly distinct mental tests nonetheless correlate. We explain this by appeal to a latent trait common to scores on all mental tests. I’ve applied the same idea to the seemingly distinct social, political, and economic variables that nonetheless correlate by state. My explanation appeals to a general factor named well-being. It includes IQ; whereas your S factor does not.

I think of S as a formative factor, not a reflective one though. It's a useful index of social well-being, but it's not a cause or much of a thing at all.
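To make the distinction concrete: a formative index is simply a composite of its standardized indicators; it is defined by them rather than estimated as their common cause. A sketch; the indicator names and the reverse-scoring choices are my assumptions for illustration:

```python
# Formative S-style index: the composite *is* the construct.
# Indicator names and reverse-scoring are illustrative assumptions.
import pandas as pd

df = pd.read_excel("trump.xlsx")
indicators = ["income", "health", "education", "crime", "religiosity"]
z = (df[indicators] - df[indicators].mean()) / df[indicators].std()
z[["crime", "religiosity"]] *= -1  # assume these run opposite to well-being
df["s_index"] = z.mean(axis=1)
```

A reflective model would instead fit a factor model and interpret the latent variable as the cause of the indicators; that is the extra step I'm reluctant to take.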

Quote:15a. A finding that parallels the usual finding of more left-wing politics in cities, and cities are more prosperous. Maybe try a control for population density? Thank you. The correlations for state population density are .07 (IQ), .50 (% Trump), .32 (% White), and -.37 (% Black or Hispanic). However, in a regression with IQ, % White, and population density predicting % Trump, the IQ suppression effect still occurred (i.e., IQ’s Beta was still -.563).


I was going to add a section reporting all this, but when % Black or Hispanic is instead the race variable, the IQ suppression result goes away (IQ’s Beta was -.241, n/s). Given the lack of consistency (and the many additional analyses previous reviewers asked me to add), I chose not to present these data in the revision.

Seems like it is worth mentioning in a robustness section.
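For that section, the whole suppression pattern fits in a few lines, which also makes it easy to report both race codings side by side. A sketch with placeholder column names (trump, iq, white, black_hispanic, density):

```python
# Standardized betas with and without covariates, to display the
# suppression pattern. Column names are placeholders.
import numpy as np
import pandas as pd

df = pd.read_excel("trump.xlsx")
num = df.select_dtypes("number")
z = (num - num.mean()) / num.std()  # standardize so slopes are std. betas

def std_betas(y, *xs):
    A = np.column_stack([np.ones(len(z))] + [z[x].values for x in xs])
    return np.linalg.lstsq(A, z[y].values, rcond=None)[0][1:]

print(std_betas("trump", "iq"))                      # weak bivariate relation
print(std_betas("trump", "iq", "white"))             # IQ beta should be ~ -.65
print(std_betas("trump", "iq", "black_hispanic"))    # alternative race coding
print(std_betas("trump", "iq", "white", "density"))  # with density control
```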

Quote:15b. It would be wise to note the effect sizes here are quite tiny. To derive it, you need the standard deviation of state-level IQ, which is quite small: 2.7 IQ. The IQ / voting pattern effect sizes are small only initially, because they are being suppressed by race. When percent White is controlled, for example, the effect size for IQ now predicting percent Trump is -.65.

Standardized metrics are tricky. A std. beta of .65 at the state level is not the same, in individual-level IQ terms, as a std. beta of .65 at the individual level; it depends on the variance. As I mentioned, the state-level IQ SD is only 2.7 (18% of the individual-level one). So a std. beta of .65 at the state level means that a 1 SD difference in Trump voting corresponds to a difference of only .65*2.7 = ~1.8 IQ points. Very small effect size.
#10
Hi Emil,

Like you, I haven't re-pasted the points where I think we've reached consensus (or that I assume require no further action on my part).


Quote:2. Actually, it’s about how the states vote in the presidential elections cf. first past the post. The states don’t have to be very skewed politically for this to happen. I’m not sure I completely understand your first point here. I agree it’s the states—via how its residents vote—that are either blue or red. If it helps, here is the dictionary definition of a blue state: A US state that predominantly votes for or supports the Democratic Party. Please let me know if I’m not addressing your concern.

Emil: My point is that a state can be consistently blue/red while the actual voter margin is fairly close to 50%.

https://en.wikipedia.org/wiki/Red_states...lue_states

Example: Montana is listed as a consistent red state (4/4 last elections to R). However, the average voter margin is only 3-10%, meaning that results like 45-55% and 48.5-51.5% are seen. Quite small difference in actual voter behavior.

This red/blue state terminology is an artifact of the bizarre voting system (FPTP).


I agree that using "blue" or "red" implies an all-or-none situation. But, in some sense, the labels are just conveniences. One can say "red" versus "a state that had relatively higher percentages of votes cast for Trump" every time a red state is referenced. Note, though, that all the analyses here use the full range of percentages cast for Trump (i.e., I ran regressions on the continuous variable, rather than first categorizing states as red or blue and then running tests on the categories).

That said, I tried researching what constitutes a small versus a landslide win in a presidential election. I found very mixed results. For example:

"One generally agreed upon measure of a landslide election is when the winning candidate beats his opponent or opponents by at least 15 percentage points in a popular vote count. Under that scenario a landslide would occur when the winning candidate in a two-way election receives 58 percent of the vote, leaving his opponent with 42 percent.

There are variations of the 15-point landslide definition. The online political news source Politico has defined a landslide election as being one in which the winning candidate beats his opponent by at least 10 percentage points, for example. And the well-known political blogger Nate Silver, of The New York Times, has defined a landslide district as being one in which a presidential vote margin deviated by at least 20 percentage points from the national result."

So, there is at least one definition of "landslide" here that falls within your (high-end) definition of "average voter margin".

Finally, I compared the N = 14 states with 42% or less Trump votes to the 13 with 58% or more. This basically produced the same results as those reported for all 50 states.




Quote:Emil: The dimensionality of political preferences is not very well researched I'm afraid. Some references of interest:

- http://journals.sagepub.com/doi/10.1177/...6512436995
- http://journals.sagepub.com/doi/abs/10.1...6511434618

US research on the topic is particularly annoying in that it is almost always reduced to a single left-right/liberal-conservative dimension. Another artifact of the FPTP system I guess.


Thank you, I will check these out.




Quote:8. Is this with or without excluding proportions to third parties? It matters in some states. The non-perfect correlation here means that the non-main parties were not excluded. Does it change results if they are? In my data, (100% - %Trump - %Clinton) = %Third Party votes. Across the 50 states, this mean / residual value was 5.97 (SD = 3.54). But, it cannot be allocated to some unified “independent” vote. In California, for example, although 93.4% of residents voted for either Trump or Clinton, the remaining percentages were scattered across several other, distinct candidates / parties. Specifically, 3.4% voted Libertarian; 2.00% voted Green, 0.28% voted “independent,” and 1.00% of votes were coded as “others” (https://en.wikipedia.org/wiki/United_Sta...tion,_2016).

In my data, the third party vote correlated nominally with percent Clinton (r = -.28, p = .05). It also correlated strongly with percent Black (r = -.52), and with Health (r = .44). The percent Clinton correlation, however, was attenuated to .11 when controlling for percent Black. Thus, I don’t see much value in adding data about third party votes to my manuscript.



Emil: Maybe add a brief note to a robustness section. Good practice to have a robustness section that briefly summarizes what happens under various alternative method choices. Don't want to end up like this!
http://www.nature.com/news/crowdsourced-...rk-1.18508


Will do-- as mentioned below too.




Quote:14. Why [is well-being the common dimension]? What about reverse causation? What about common cause? Good question, and I now mention these other possibilities in the discussion.

In my thinking, we know that scores on seemingly distinct mental tests nonetheless correlate. We explain this by appeal to a latent trait common to scores on all mental tests. I’ve applied the same idea to the seemingly distinct social, political, and economic variables that nonetheless correlate by state. My explanation appeals to a general factor named well-being. It includes IQ; whereas your S factor does not.

Emil: I think of S as a formative factor, not a reflective one though. It's a useful index of social well-being, but it's not a cause or much of a thing at all.

I guess it's the fact that something must be causing the intercorrelations among all the different variables that leads me to my view of it.



Quote:15a. A finding that parallels the usual finding of more left-wing politics in cities, and cities are more prosperous. Maybe try a control for population density? Thank you. The correlations for state population density are .07 (IQ), .50 (% Trump), .32 (% White), and -.37 (% Black or Hispanic). However, in a regression with IQ, % White, and population density predicting % Trump, the IQ suppression effect still occurred (i.e., IQ’s Beta was still -.563).


I was going to add a section reporting all this, but when % Black or Hispanic is instead the race variable, the IQ suppression result goes away (IQ’s Beta was -.241, n/s). Given the lack of consistency (and the many additional analyses previous reviewers asked me to add), I chose not to present these data in the revision.

Emil: Seems like it is worth mentioning in a robustness section.


Will do.



Quote:15b. It would be wise to note the effect sizes here are quite tiny. To derive it, you need the standard deviation of state-level IQ, which is quite small: 2.7 IQ. The IQ / voting pattern effect sizes are small only initially, because they are being suppressed by race. When percent White is controlled, for example, the effect size for IQ now predicting percent Trump is -.65.

Emil: Standardized metrics are tricky. A std. beta of .65 at the state level is not the same, in individual-level IQ terms, as a std. beta of .65 at the individual level; it depends on the variance. As I mentioned, the state-level IQ SD is only 2.7 (18% of the individual-level one). So a std. beta of .65 at the state level means that a 1 SD difference in Trump voting corresponds to a difference of only .65*2.7 = ~1.8 IQ points. Very small effect size.


I agree 1.8 IQ points is small, assuming the typical SD of 15 for IQ tests.

But, I'm perhaps missing something: wouldn't moving 1.8 IQ points in an IQ distribution with a SD of 2.7 be a large effect? I tried researching whether a standardized beta is a measure of effect size, but got mixed results...
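To put numbers on my question, using only figures already quoted in this thread:

```python
# The same 1.8-point shift looks small or large depending on which
# SD it is measured against.
beta = 0.65
sd_state = 2.7        # state-level IQ SD (Emil's figure)
sd_individual = 15.0  # conventional individual-level IQ SD

shift = beta * sd_state       # ~1.8 IQ points per 1 SD of Trump voting
print(shift / sd_individual)  # ~0.12 individual-level SDs: small
print(shift / sd_state)       # 0.65 state-level SDs: large
```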