
[ODP] Discounting IQ’s Relevance to Organizational Behavior: The “Somebody Else’s Problem” in Management Education

#1
Title

Discounting IQ’s Relevance to Organizational Behavior: The “Somebody Else’s Problem” in Management Education

Authors

Bryan J. Pesta
Michael A. McDaniel
Peter J. Poznanski
Timothy DeGroot

Abstract

We hypothesize the existence of a “somebody else’s problem” in management education for the sub-discipline of organizational behavior (OB). The problem regards human intelligence, specifically, the general factor, g. Although g is arguably the most powerful variable in social science, OB educators largely ignore it. To demonstrate the former, we review a vast literature establishing g’s construct validity. To demonstrate the latter, we show that current OB textbooks place far less emphasis on g relative to a popular but less potent predictor of organizational success, emotional intelligence. We also show that when textbooks do reference g, it is often just to offer criticism. Misperceptions about empirical data on intelligence testing, denial that a general factor of intelligence exists, the reality of mean racial differences in mental ability, and the finding that genes play a non-trivial role in causing intelligence, seem to make OB’s treatment of this topic “somebody else’s problem.”

Files

https://www.dropbox.com/s/cicxskyfxvm4xt...l.odt?dl=0
#2
(2015-Apr-29, 23:14:54)bpesta22 Wrote: Title

Discounting IQ’s Relevance to Organizational Behavior: The “Somebody Else’s Problem” in Management Education

Authors

Bryan J. Pesta
Michael A. McDaniel
Peter J. Poznanski
Timothy DeGroot

Abstract

We hypothesize the existence of a “somebody else’s problem” in management education for the sub-discipline of organizational behavior (OB). The problem regards human intelligence, specifically, the general factor, g. Although g is arguably the most powerful variable in social science, OB educators largely ignore it. To demonstrate the former, we review a vast literature establishing g’s construct validity. To demonstrate the latter, we show that current OB textbooks place far less emphasis on g relative to a popular but less potent predictor of organizational success, emotional intelligence. We also show that when textbooks do reference g, it is often just to offer criticism. Misperceptions about empirical data on intelligence testing, denial that a general factor of intelligence exists, the reality of mean racial differences in mental ability, and the finding that genes play a non-trivial role in causing intelligence, seem to make OB’s treatment of this topic “somebody else’s problem.”

Files

https://www.dropbox.com/s/cicxskyfxvm4xt...l.odt?dl=0


I didn't see a line where I should recommend one reviewer. I suggest James Thompson.

Bryan
#3
You don't have to recommend one, but you can. I will email James for you.
#4
Generally, this is a well-written paper. I have no particularly large complaints. Some comments are below. Quotes are from the paper unless otherwise specified.

Quote:Organizational behavior (OB) textbooks contain perhaps the most interdisciplinary content in all of social science. They include lecture material from psychology, sociology, anthropology, political science, and economics. We argue here, however, that OB textbooks tend to ignore what is possibly social science’s most powerful variable: IQ (i.e., the general factor, termed “g”). OB textbooks discount the general factor to the point that when discussing intelligence, they often either criticize g, relegate its status as secondary to that of specific mental abilities, or focus more attention on other types of “intelligence” (e.g., emotional intelligence) that possess less predictive validity.

Be careful about blending together IQ, g, and general intelligence; much confusion arises from this. While IQ and extracted g usually correlate .99 or so, general mental ability is not the same thing as extracted g/IQ scores. To see this, think of IQ gains following retesting. Such gains can be substantial, especially when taking the same test twice within a short time span (e.g. a day). However, everybody will (hopefully!) agree that such IQ gains do not really represent gains in general mental ability. Studies using Jensen's method (the method of correlated vectors, MCV) show that such gains are hollow in terms of general mental ability, in that their MCV correlation is strongly negative, perhaps -1 (after measurement error is removed).

This is why e.g. Gottfredson talks about IQs as vehicles of measurement. Gottfredson, L. S. (2002). Where and why g matters: Not a mystery. Human Performance, 15(1/2), 25-46.

Papers about training gains in IQ: http://emilkirkegaard.dk/en/?p=4029
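For readers unfamiliar with it, the method of correlated vectors boils down to correlating the subtests' g-loadings with the corresponding vector of gains (or group differences). A minimal sketch in Python; the loadings and gains below are made-up illustrative numbers, not from any study:

```python
import numpy as np

# Hypothetical g-loadings for six subtests (as estimated by a factor
# analysis of the battery); illustrative numbers only.
g_loadings = np.array([0.75, 0.68, 0.60, 0.52, 0.45, 0.38])

# Hypothetical retest gains on the same six subtests, in SD units.
# If gains are "hollow" with respect to g, the less g-loaded subtests
# gain more, which produces a negative vector correlation.
gains = np.array([0.10, 0.15, 0.22, 0.28, 0.35, 0.40])

# The MCV statistic is just the Pearson correlation of the two vectors.
mcv_r = np.corrcoef(g_loadings, gains)[0, 1]
print(round(mcv_r, 2))
```

With vectors constructed like these, the MCV correlation comes out strongly negative, which is the pattern described above for hollow gains.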

“misperceptions”, perhaps better to use “misconception”.

Quote:Some (see, e.g., Gould 1996) believe that g is a statistical artifact, emerging from the arbitrary nature of factor analysis, and the intelligence researcher’s misguided attempts to interpret correlations among cognitive ability test scores. It is admittedly counterintuitive that single IQ scores meaningfully reflect the vast array of human mental ability. However, IQ test scores do this precisely because they measure the general factor. Although other specific types of intelligence exist, g accounts for roughly 50 percent of their variance (Jensen, 1998; McGrew, 2009). g then likely reflects the speed and efficiency with which brains process most any perceptual or cognitive information (Jensen, 2011).

As for percent of variance claims, there are various ways to measure this. There is both percent of all variance, and also percent of common variance (i.e. excluding variance specific to each subtest). I think the 50% value comes from the common variance data. I have often found that g typically explains about 30-40% of all variance.

It also depends on factor methodology. PCA gives higher numbers, since it mixes some error variance into the general factor, while hierarchical methods give lower results.

There are other methods for measuring factor strength than variance percentages, and they have their own problems.

I'm not aware of any meta-analysis of this question either, but one could be done without having to dig up studies, because Carroll's correlation matrices are publicly available. One would simply have to reanalyze all of them using a consistent methodology and calculate summary statistics.

Carroll's data
http://www.iapsych.com/wmfhcaarchive/map.htm
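Such a consistent-methodology reanalysis could be largely mechanical: for each correlation matrix, record the share of total variance captured by the first principal component (or a common factor), then summarize across matrices. A rough Python sketch; the two matrices below are illustrative stand-ins, not Carroll's data:

```python
import numpy as np

def first_pc_variance_share(R):
    """Proportion of total variance captured by the first principal
    component of correlation matrix R: largest eigenvalue / n variables."""
    eigvals = np.linalg.eigvalsh(R)  # returned in ascending order
    return eigvals[-1] / R.shape[0]

# Two small illustrative correlation matrices standing in for batteries,
# one with stronger and one with weaker positive manifold.
R_strong = np.array([[1.0, 0.5, 0.4],
                     [0.5, 1.0, 0.6],
                     [0.4, 0.6, 1.0]])
R_weak = np.array([[1.0, 0.3, 0.2],
                   [0.3, 1.0, 0.4],
                   [0.2, 0.4, 1.0]])

shares = [first_pc_variance_share(R) for R in (R_strong, R_weak)]
# With the same method applied to every matrix, summary statistics
# (mean, range) of these shares answer the "percent of variance" question.
print([round(s, 2) for s in shares])
```

Note this uses PCA-style extraction, which, per the point above, runs higher than hierarchical estimates; the same loop could substitute any one factor method, as long as it is applied uniformly.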

for factor strength methods, see
Revelle, W., & Wilt, J. (2013). The general factor of personality: A general critique. Journal of research in personality, 47(5), 493-504.

See also Dalliard's great paper on this topic of the 'statistical g factor myth': http://humanvarieties.org/2013/04/03/is-...-g-a-myth/

Quote:Consensus now exists regarding the factorial structure of human mental abilities: The Cattell-Horn-Carroll (CHC) theory of intelligence (Carroll, 1993; McGrew, 2009). In the model, cognitive abilities form a three-stratum hierarchy. The apex contains a single general factor, g, which correlates strongly with all factors below it. The middle of the hierarchy consists of nine broad abilities (e.g., processing speed; short-term memory). Fully 80 narrow mental abilities (e.g., perceptual speed; word fluency) comprise the lowest stratum.

I wouldn't say consensus, but perhaps a large majority. Oddly, I don't think the two surveys of intelligence researchers (1978, 2013) included a question about this.

The number of narrow abilities is open-ended, at least according to Jensen 1998.

As for the second-stratum abilities, there is also disagreement. See e.g. Johnson, W., & Bouchard, T. J. (2005). The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized. Intelligence, 33(4), 393-416.

But there is broad agreement about a hierarchical structure, no matter the details about the lower levels.

Quote:Relationships between g and health are ubiquitous and striking. Arden et al. (2009) postulated the existence of a general fitness factor to explain links between these constructs. We suspect that Jensen’s (1998) g nexus, Pesta et al.’s (2010) well-being nexus, and Arden et al.’s (2009) general fitness factor are converging on the same constructs.

There is also the general socioeconomic factor, which I have published extensively on lately. I don't know if you are familiar with it, but basically anything good tends to covary with everything else good, at least at aggregate levels. I have found it at country-level, immigrant origin-level and state-level many times. It is quite similar to your well-being nexus.

Some key publications (mine)
http://openpsych.net/ODP/2014/10/crime-i...d-finland/
http://openpsych.net/ODP/2014/09/the-int...-rankings/
https://thewinnower.com/papers/the-s-fac...ian-states

These S factors of course are strongly related to cognitive scores.

Quote:Data on race and IQ go back to at least World War I (Jensen, 1998). This century, Roth, Bevier, Bobko, Switzer, and Tyler (2001) conducted a large-scale meta-analytic summary of these effects (N = 6,246,729 people). Black / White differences in mean IQ scores are quite large, although the distributions overlap considerably across groups. Overlapping distributions indicate that millions of minorities score higher on IQ tests relative to tens of millions of Whites, just in the USA. Nonetheless, many of the effect sizes exceed 1.0 (Roth et al., 2001). Finally, debate exists about whether these gaps have recently narrowed, but no one has yet determined the cause of race differences in intelligence, nor how to meaningfully reduce them.

You could cite some of the papers about the recent debate:

Dickens, W. T., & Flynn, J. R. (2006). Black Americans reduce the racial IQ gap evidence from standardization samples. Psychological Science, 17(10), 913-920.

Rushton, J. P., & Jensen, A. R. (2006). The totality of available evidence shows the race IQ gap still remains. Psychological Science, 17(10), 921-922.

Murray, C. (2007). The magnitude and components of change in the black–white IQ difference from 1920 to 1991: A birth cohort analysis of the Woodcock–Johnson standardizations. Intelligence, 35(4), 305-318.

There are some more.

Quote:The problem is that race differences covary strongly with how well a mental test measures g. The digit span task is a classic example. Here, the tester vocalizes a random number string (e.g., 5, 4, 7, 1, 9) at the rate of one number per second. The respondent then immediately attempts to recall each number in either forward (i.e., 5, 4, 7, 1, 9) or backward (i.e., 9, 1, 7, 4, 5) serial order. The backward version is roughly twice as g-loaded as the forward version, and it produces roughly twice as large a difference across racial groups (Jensen, & Figueroa, 1976).

I'm not sure about this twice-as-g-loaded finding. I haven't seen any replications, and I'm cautious about accepting findings from a single old study (Jensen & Figueroa, 1976).

I found a study reporting a correlation matrix for forward and backwards digit span. Then I factor analyzed it. The loadings were .39 and .41, forwards/backwards. http://dx.doi.org/10.1016/j.intell.2004.07.007

R code:
Code:
library(pacman)
p_load(psych, lavaan)

# Upper triangle (including the diagonal) of the 5 x 5 correlation
# matrix, entered row by row
cor.matrix = c(1, .20, .21, .35, .25,
                   1, .72, .14, .17,
                        1, .14, .09,
                             1, .68,
                                  1)

# Rebuild the full symmetric matrix from the upper-triangular elements
cor.matrix = lav_matrix_upper2full(cor.matrix, diagonal = TRUE)
colnames(cor.matrix) = rownames(cor.matrix) = c("Raven", "Vocab WAIS", "Vocab Shipley", "D. backwards", "D. forwards")

# Single-factor solution; the digit span loadings are the ones of interest
fa(cor.matrix)


In a Vietnamese study, subtest x age correlations were correlated with subtest g-loadings, and the age correlations were about equal for forward and backward. http://emilkirkegaard.dk/en/?p=4612

This study found loadings of .64 and .63 forward/backward.
Ramsay, M. C., & Reynolds, C. R. (1995). Separate digits tests: A brief history, a literature review, and a reexamination of the factor structure of the Test of Memory and Learning (TOMAL). Neuropsychology Review, 5(3), 151-171.

I went through another 30 or so correlation matrices in Carroll's data but did not see one with forward and backward digit span reported separately. Perhaps you will find one if you look further (I looked at all the studies on the first two pages). http://www.iapsych.com/wmfhcaarchive/wmfhcaindex.html

Quote:Bouchard (2004) offered a sound summary of the literature in this area. For young children, environmental factors (e.g., parenting philosophy, socioeconomic status) strongly influence individual differences in intelligence. Surprisingly, the effects of “shared environment” gradually drop to approximately zero as children mature into adulthood. Multiple lines of evidence confirm this conclusion, including studies using twins and siblings (raised in either the same or different environments) or adoptees. By the time people reach working age and organizational behaviors become salient, approximately 85% of the variance in IQ test scores is caused by genes.

This is a more general pattern and has been called the Wilson effect. There's a meta-analysis:

Bergen, S. E., Gardner, C. O., & Kendler, K. S. (2007). Age-related changes in heritability of behavioral phenotypes over adolescence and young adulthood: a meta-analysis. Twin Research and Human Genetics, 10(03), 423-433.

Bouchard, T. J. (2013). The Wilson effect: the increase in heritability of IQ with age. Twin Research and Human Genetics, 16(05), 923-930.

Quote:Conventional wisdom wrongly suggests that IQ tests reflect only narrow academic skills of no real-world import. Instead, the presumed existence of multiple intelligences captures the public’s fascination. Consider the popularity of Gardner’s (1983) theory of “multiple intelligences.” Via Google Scholar, Frames of Mind (Gardner, 1983), has been cited 20,694 times in just the academic literature.

Google Scholar does not count only the academic literature; it is more inclusive.

Quote:Gardner hypothesized that unique, relatively uncorrelated, intelligences exist, and that they rival g in terms of importance. However, no one has produced data contradicting the “law” of positive manifold (for a review, see Wilson, 2012). Instead, scores on diverse sets of mental tests correlate positively.

See also:

Intelligence 34(5), an exchange testing multiple intelligences theory:

Visser, Beth A., Michael C. Ashton, and Philip A. Vernon. “Beyond g: Putting multiple intelligences theory to the test.” Intelligence 34.5 (2006): 487-502.
Gardner, Howard. “On failing to grasp the core of MI theory: A response to Visser et al.” Intelligence 34.5 (2006): 503-505.
Visser, Beth A., Michael C. Ashton, and Philip A. Vernon. “g and the measurement of Multiple Intelligences: A response to Gardner.” Intelligence 34.5 (2006): 507-510.

Educational Psychologist 41(4), the issue about multiple intelligence ‘theories’:

Waterhouse, Lynn. “Multiple intelligences, the Mozart effect, and emotional intelligence: A critical review.” Educational Psychologist 41.4 (2006): 207-225.
Gardner, Howard, and Seana Moran. “The science of multiple intelligences theory: A response to Lynn Waterhouse.” Educational Psychologist 41.4 (2006): 227-232.
Rauscher, Frances H., and Sean C. Hinton. “The Mozart effect: Music listening is not music instruction.” Educational Psychologist 41.4 (2006): 233-238.
Cherniss, Cary, et al. “Emotional intelligence: What does the research really indicate?” Educational Psychologist 41.4 (2006): 239-245.
Waterhouse, Lynn. “Inadequate evidence for multiple intelligences, Mozart effect, and emotional intelligence theories.” Educational Psychologist 41.4 (2006): 247-255.
Alexander, Patricia A. “Evolution of a learning theory: A case study.” Educational Psychologist 41.4 (2006): 257-264.
#5
This is a nice paper, but there are a couple of minor issues:

1) "The backward version is roughly twice as g-loaded as the forward version, and it produces roughly twice as large a difference across racial groups (Jensen, & Figueroa, 1976)."

That paper doesn't report g loadings for backward (BDS) and forward digit span (FDS). Rather, it reports correlations of BDS and FDS with full scale IQ (FSIQ), which is not exactly the same thing. The BDS-FSIQ correlation is about 25% (whites) or 50% (blacks) higher than the FDS-FSIQ correlation. The standardized black-white gap on BDS in this sample is more than 100% larger than the gap on FDS. The FDS-BDS discrepancy between whites and blacks is too large to be explained in terms of Spearman's hypothesis.

I don't think there's a single study where the g loading of BDS has been found to be anywhere near to being twice that of FDS. For example, on the DAS-II (data from Keith et al. 2010), the g loadings of FDS and BDS are 0.593 and 0.675, respectively.

While Spearman's hypothesis is strongly borne out by IQ data in general, I don't believe it can explain (most of) the FDS-BDS discrepancy. Blacks are relatively strong on FDS mainly because they are superior to whites on some narrow ability required in this test, not because FDS has a low g loading. See Jensen & Reynolds 1982, where it was shown that net of g, blacks were superior to whites on the memory factor. See also my analysis here: http://humanvarieties.org/2013/12/21/racial-differences-on-digit-span-tests/

The passage about digit span tests as an illustration of Spearman's hypothesis should be removed because it appears to be empirically wrong. Replace it with a more general description of Spearman's hypothesis and the data supporting it (e.g., Jensen's meta-analysis of 15 test batteries as reported in Jensen 1998).

2) "Also problematic is EQ’s construct validity, as some claim it may merely be a repackaged version of the Big Five personality traits (Conte, 2005; Robbins & Judge, 2015)."

MacCann et al. found that an emotional intelligence factor computed from the MSCEIT test had a g-loading of 0.80. This suggests that emotional intelligence, at least when measured with the MSCEIT, may derive its predictive validity from g rather than from any emotional ability. This should be discussed alongside the evidence that emotional intelligence may not be independent of traditional personality measures.

3) Explain how the five-star rating in the table is determined.

If these three issues are addressed adequately, I approve publication.

(Fixed formatting -Emil)
#6
To Emil:

I’ll upload a revision that incorporates the comments below, but I’m waiting to see if anyone else is going to peer review this. This paper is aimed at non-experts, so we tried not to delve into peripheral or specific technical issues in our generic lit review. We wanted to give readers a basic sense of what we know about g based on decades of data. Admittedly, there are more issues and effects in the field that we could have discussed, but again our goal was to dispel what we felt are the most persistent myths about g.

(2015-May-01, 05:38:57)Emil Wrote: …Careful with blending together IQ and g and general intelligence. Much confusion happens because of this. While IQ and extracted g usually correlate .99 or so, general mental ability is not the same as extracted g/IQ scores. To see this, think of IQ gains following retesting. Such gains can be substantial, especially if taking the same test twice within a short time-span (e.g. a day). However, everybody will (hopefully!) agree that such IQ gains do not really represent gains in general mental ability. Studies using Jensen's method (method of correlated vectors) show that such gains are hollow in terms of general mental ability, in that their MCV correlation is strongly negative, maybe -1 (after errors are removed).

1. We agree. We think a simple fix is “IQ (i.e., tests used as proxies for measures of the general factor of intelligence, g).” We don’t really want to get into the Flynn effect or the specifics of extracting g scores from IQ scores.
2. We will change misperception to misconception.

Quote: g accounts for roughly 50 percent of their variance (Jensen, 1998; McGrew, 2009). g then likely reflects the speed and efficiency with which brains process most any perceptual or cognitive information (Jensen, 2011)…
As for percent of variance claims, there are various ways to measure this. There is both percent of all variance, and also percent of common variance (i.e. excluding variance specific to each subtest). I think the 50% value comes from the common variance data. I have often found that g typically explains about 30-40% of all variance.
It also depends on factor methodology. PCA gives higher numbers since it includes some error variance into the general factor. Hierarchical methods give lower results too.
There are other methods for measuring factor strength than variance percents, and they have their own problems.
I'm not aware of any meta-analysis of this question either, but it is possible to do so without having to dig up studies because Carroll's correlation matrices are publicly available. One would simply have to reanalyze all these using consistent methodology and calculate summary statistics…

3. We were careful to use the word “roughly” and took the 50% value from Jensen, 1998. The point we were trying to make is not to precisely pinpoint the number but to suggest that it’s quite large. We can omit mentioning a number entirely (e.g., “g accounts for a large percentage of their variance”) or keep “roughly 50%,” whichever is your preference. We didn’t really want to get into the statistics behind deriving g, given our target audience.

Quote: I wouldn't say consensus, but perhaps large majority. Oddly, I don't think the two surveys of intelligence researchers (1978, 2013) included a question about this.
The number of narrow abilities is open-ended, at least according to Jensen 1998.
As for the second-stratum abilities, there is also disagreement. See e.g. Johnson, W., & Bouchard, T. J. (2005). The structure of human intelligence: It is verbal, perceptual, and image rotation (VPR), not fluid and crystallized. Intelligence, 33(4), 393-416.

But there is broad agreement about a hierarchical structure, no matter the details about the lower levels.

4. We will change the word consensus; and not assert a specific / fixed number of abilities (our intent was not to present a drawn out literature review, but just to emphasize the hierarchical structure).

Quote:Relationships between g and health are ubiquitous and striking. Arden et al. (2009) postulated the existence of a general fitness factor to explain links between these constructs. We suspect that Jensen’s (1998) g nexus, Pesta et al.’s (2010) well-being nexus, and Arden et al.’s (2009) general fitness factor are converging on the same constructs…

There is also the general socioeconomic factor, which I have published extensively on lately. I don't know if you are familiar with it, but basically anything good tends to covary with everything else good, at least at aggregate levels. I have found it at country-level, immigrant origin-level and state-level many times. It is quite similar to your well-being nexus.

Some key publications (mine)
http://openpsych.net/ODP/2014/10/crime-i...d-finland/
http://openpsych.net/ODP/2014/09/the-int...-rankings/
https://thewinnower.com/papers/the-s-fac...ian-states

These S factors of course are strongly related to cognitive scores.

5. We can incorporate the S factor and citations into this section.


Quote:Data on race and IQ go back to at least World War I (Jensen, 1998). This century, Roth, Bevier, Bobko, Switzer, and Tyler (2001) conducted a large-scale meta-analytic summary of these effects (N = 6,246,729 people). Black / White differences in mean IQ scores are quite large, although the distributions overlap considerably across groups. Overlapping distributions indicate that millions of minorities score higher on IQ tests relative to tens of millions of Whites, just in the USA. Nonetheless, many of the effect sizes exceed 1.0 (Roth et al., 2001). Finally, debate exists about whether these gaps have recently narrowed, but no one has yet determined the cause of race differences in intelligence, nor how to meaningfully reduce them.

You could cite some of the papers about the recent debate:

Dickens, W. T., & Flynn, J. R. (2006). Black Americans reduce the racial IQ gap evidence from standardization samples. Psychological Science, 17(10), 913-920.
Rushton, J. P., & Jensen, A. R. (2006). The totality of available evidence shows the race IQ gap still remains. Psychological Science, 17(10), 921-922.
Murray, C. (2007). The magnitude and components of change in the black–white IQ difference from 1920 to 1991: A birth cohort analysis of the Woodcock–Johnson standardizations. Intelligence, 35(4), 305-318.
There are some more.

6. We will add these citations. We worried that the paper was over-cited already—given how short it is—but again, adding more is fine.

Quote:The problem is that race differences covary strongly with how well a mental test measures g. The digit span task is a classic example. Here, the tester vocalizes a random number string (e.g., 5, 4, 7, 1, 9) at the rate of one number per second. The respondent then immediately attempts to recall each number in either forward (i.e., 5, 4, 7, 1, 9) or backward (i.e., 9, 1, 7, 4, 5) serial order. The backward version is roughly twice as g-loaded as the forward version, and it produces roughly twice as large a difference across racial groups (Jensen, & Figueroa, 1976).

7. The reviewer below also had an issue with this paragraph. We will delete it and just mention the problem is Spearman’s hypothesis, without providing the digit span example.



Quote:Bouchard (2004) offered a sound summary of the literature in this area. For young children, environmental factors (e.g., parenting philosophy, socioeconomic status) strongly influence individual differences in intelligence. Surprisingly, the effects of “shared environment” gradually drop to approximately zero as children mature into adulthood. Multiple lines of evidence confirm this conclusion, including studies using twins and siblings (raised in either the same or different environments) or adoptees. By the time people reach working age and organizational behaviors become salient, approximately 85% of the variance in IQ test scores is caused by genes.

This is a more general pattern and has been called the Wilson effect. There's a meta-analysis:

Bergen, S. E., Gardner, C. O., & Kendler, K. S. (2007). Age-related changes in heritability of behavioral phenotypes over adolescence and young adulthood: a meta-analysis. Twin Research and Human Genetics, 10(03), 423-433.

Bouchard, T. J. (2013). The Wilson effect: the increase in heritability of IQ with age. Twin Research and Human Genetics, 16(05), 923-930.

8. No problem incorporating the above into the revision.


Quote:Conventional wisdom wrongly suggests that IQ tests reflect only narrow academic skills of no real-world import. Instead, the presumed existence of multiple intelligences captures the public’s fascination. Consider the popularity of Gardner’s (1983) theory of “multiple intelligences.” Via Google Scholar, Frames of Mind (Gardner, 1983), has been cited 20,694 times in just the academic literature…

Scholar does not only count academic literature. It is more inclusive.
9. Noted and will change the wording here.

Quote:Gardner hypothesized that unique, relatively uncorrelated, intelligences exist, and that they rival g in terms of importance. However, no one has produced data contradicting the “law” of positive manifold (for a review, see Wilson, 2012). Instead, scores on diverse sets of mental tests correlate positively….
See also:
Intelligence 34(5): Testing multiple intelligence theory

Visser, Beth A., Michael C. Ashton, and Philip A. Vernon. “Beyond g: Putting multiple intelligences theory to the test.” Intelligence 34.5 (2006): 487-502.
Gardner, Howard. “On failing to grasp the core of MI theory: A response to Visser et al.” Intelligence 34.5 (2006): 503-505.
Visser, Beth A., Michael C. Ashton, and Philip A. Vernon. “g and the measurement of Multiple Intelligences: A response to Gardner.” Intelligence 34.5 (2006): 507-510.
Educational Psychologist 41(4), the issue about multiple intelligence ‘theories’:
Waterhouse, Lynn. “Multiple intelligences, the Mozart effect, and emotional intelligence: A critical review.” Educational Psychologist 41.4 (2006): 207-225.
Gardner, Howard, and Seana Moran. “The science of multiple intelligences theory: A response to Lynn Waterhouse.” Educational Psychologist 41.4 (2006): 227-232.
Rauscher, Frances H., and Sean C. Hinton. “The Mozart effect: Music listening is not music instruction.” Educational Psychologist 41.4 (2006): 233-238.
Cherniss, Cary, et al. “Emotional intelligence: What does the research really indicate?” Educational Psychologist 41.4 (2006): 239-245.
Waterhouse, Lynn. “Inadequate evidence for multiple intelligences, Mozart effect, and emotional intelligence theories.” Educational Psychologist 41.4 (2006): 247-255.
Alexander, Patricia A. “Evolution of a learning theory: A case study.” Educational Psychologist 41.4 (2006): 257-264.
10. We can incorporate some or all of these citations into the revision.
#7
Bryan,

I see the point with the target audience.

1) This is fine with me.
2) No requirement from me. Use the word you think best.
3) I prefer the vaguer phrasing, but no requirement.
4) Fine.
5) You don't have to. I would not require that authors cite my own papers. That would be unethical.
6) Cite them if you want, or don't. Your call.
7) Fine.
8) Fine.
9) Fine.
10) No requirement. Cite which you think is relevant.

--

Regarding the EQ issue raised by Dalliard above: it is somewhat complicated. There are a number of EQ tests. Some of them are ability-based and thus related to GCA, while others are personality-based, but both kinds are called measures of EQ. Very confusing.

Some time ago I read a bunch of review articles about this. One of them was this: http://www.jstor.org/discover/10.2307/20...6337968831
#8
(2015-May-07, 15:32:44)Dalliard Wrote: This is a nice paper, but there are a couple of minor issues:

1) "The backward version is roughly twice as g-loaded as the forward version, and it produces roughly twice as large a difference across racial groups (Jensen, & Figueroa, 1976)."

That paper (http://arthurjensen.net/wp-content/uploads/2014/06/Forward-and-Backward-Digit-Span-Interaction-with-Race-and-IQ-Predictions-from-Jensens-Theory-1975-by-Arthur-Robert-Jensen-Richard-A.-Figueroa.pdf) doesn't report g loadings for backward (BDS) and forward digit span (FDS). Rather, it reports correlations of BDS and FDS with full scale IQ (FSIQ), which is not exactly the same thing. The BDS-FSIQ correlation is about 25% (whites) or 50% (blacks) higher than the FDS-FSIQ correlation. The standardized black-white gap on BDS in this sample is more than 100% larger than the gap on FDS. The FDS-BDS discrepancy between whites and blacks is too large to be explained in terms of Spearman's hypothesis.

I don't think there's a single study where the g loading of BDS has been found to be anywhere near to being twice that of FDS. For example, on the DAS-II (https://humanvarietiesdotorg.files.wordpress.com/2013/12/das-ii-g-loadings.png; data from Keith et al. 2010, http://onlinelibrary.wiley.com/doi/10.1002/pits.20498/abstract), the g loadings of FDS and BDS are 0.593 and 0.675, respectively.

While Spearman's hypothesis is strongly borne out by IQ data in general, I don't believe it can explain (most of) the FDS-BDS discrepancy. Blacks are relatively strong on FDS mainly because they are superior to whites on some narrow ability required in this test, not because FDS has a low g loading. See Jensen & Reynolds 1982 (http://arthurjensen.net/wp-content/uploads/2014/06/Race-Social-Class-and-Ability-Patterns-on-the-WISC-R-1982-by-Arthur-Robert-Jensen-Cecil-Randy-Reynolds.pdf), where it was shown that net of g, blacks were superior to whites on the memory factor. See also my analysis here: http://humanvarieties.org/2013/12/21/racial-differences-on-digit-span-tests/

The passage about digit span tests as an illustration of Spearman's hypothesis should be removed because it appears to be empirically wrong. Replace it with a more general description of Spearman's hypothesis and the data supporting it (e.g., Jensen's meta-analysis of 15 test batteries as reported in Jensen 1998).


1. Agreed; consistent with Emil's comments on this, we will remove mention of digit span in the revision.

Quote:<b>2) "Also problematic is EQ’s construct validity, as some claim it may merely be a repackaged version of the Big Five personality traits (Conte, 2005; Robbins & Judge, 2015)."</b>

<a href="http://psycnet.apa.org/journals/emo/14/2/358/">MacCann et al.</a> found that an emotional intelligence factor computed from the MSCEIT test had a g-loading of 0.80. This suggests that emotional intelligence, at least when measured with the MSCEIT, may derive its predictive validity from g rather than from any distinct emotional ability. This should be discussed alongside the evidence that emotional intelligence may not be independent of traditional personality measures.

2. If you accept our "target audience" explanation above, we didn't want to get into the side issue of whether IQ and EQ overlap. Instead, we hoped to contrast EQ's popularity (but relative lack of validity) with IQ's in management education.


Quote:<b>3) Explain how the five-star rating in the table is determined.</b>

Quote:If these three issues are addressed adequately, I approve publication.

3. As mentioned in the paper (p. 12), the ratings were purely subjective, based on the authors' impressions of how well each text covered g.
#9
Here is the revision.

Quick note: The original was roughly 12 pages of text and 6 pages of references. We decided not to add many of the optional/suggested references in the reviews above. We did add mention of the S factor.

Thanks!
Bryan

https://www.dropbox.com/s/yozzdljqus8u2i...2.odt?dl=0
#10
Most problems I mentioned before have been fixed as far as I can see.

Redundant reference
On my skimming I found an additional problem: the following reference is now redundant. The in-text citation was removed in the revision, but the entry was left in the reference section. Perhaps it is a good idea to go through the references to make sure there are no more unnecessary ones.

Jensen, A., & Figueroa, R.A. (1976). Forward and backward digit span interaction with race and IQ: predictions from Jensen's theory. Journal of Educational Psychology, 67(6): 882-93.

Table 1
I looked at the data table at the end. Maybe there is something I don't understand, but I don't see how the total for George (et al.) can be greater than the sum of accurate and critical paragraphs. Sometimes the "total" is smaller than the sum; I take it that is because some paragraphs contain both accurate and critical parts (?).

Inter-rater reliability
You mention at one point an inter-rater reliability of .86. However, I do not see which data this value was calculated from. This journal has a mandatory open data policy, so any data used for this study must be published.
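To make the reported .86 concrete: with two raters and paired ratings per textbook, inter-rater reliability is often computed as the Pearson correlation between the two raters' scores. The sketch below illustrates that calculation on invented 1-5 star ratings; the numbers are hypothetical, not the authors' data.

```python
# Inter-rater reliability as a Pearson correlation between two raters'
# star ratings of the same textbooks. All ratings here are invented
# purely for illustration.

def pearson_r(x, y):
    """Pearson correlation between two equal-length rating lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 star ratings from two raters for eight textbooks.
rater_1 = [1, 2, 2, 3, 4, 5, 3, 1]
rater_2 = [1, 2, 3, 3, 4, 4, 3, 2]
print(pearson_r(rater_1, rater_2))
```

Publishing the paired ratings alongside the paper would let readers recompute the reliability coefficient exactly this way.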

Were the actual paragraphs of text from the textbooks saved? At least the exact text locations must have been recorded somewhere. They should be included in the supplementary material so that anyone wishing to reproduce the authors' judgment calls can do so.

Publishing short text excerpts for the purpose of criticism and similar uses is permitted under copyright law (fair use).

Statistical analyses
I do not see any mention of which statistical software was used for the analyses. Is there a code file? If the analyses were done with point-and-click software (e.g., SPSS), can the authors upload the output file so that others may verify the reported results against it?

It would be better if the authors could upload the source code file(s) (if any) and data files in a folder, so that anyone else may download the material and re-run all analyses with identical results. I recommend using the Open Science Framework for this. In order for science to be reproducible, a bare minimum step is that researchers with access to the same data should get the same results from running the same analyses. This has been called computational reproducibility, to distinguish it from empirical reproducibility (i.e. the results replicate when a new sample is gathered). http://edge.org/annual-question/2014/response/25340
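The point about computational reproducibility can be made concrete with a minimal sketch: a single script that reads the archived data and prints the summary numbers, so that re-running it always yields identical output. The CSV column names and counts below are hypothetical, assumed only for illustration of the workflow, not the authors' actual file.

```python
# Minimal sketch of a computationally reproducible analysis: the data
# (as CSV text) and the analysis code live together, and re-running the
# script on the same data always produces the same results.
import csv
import io

def summarize(csv_text):
    """Deterministic summary: same input text always yields the same output."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    accurate = [int(r["accurate"]) for r in rows]
    critical = [int(r["critical"]) for r in rows]
    return {
        "n_texts": len(rows),
        "total_accurate": sum(accurate),
        "total_critical": sum(critical),
    }

# Hypothetical archived data file contents (counts are invented).
DATA = """textbook,accurate,critical
George,2,3
Robbins,1,4
"""

print(summarize(DATA))
```

In practice the CSV would be a separate file in the same repository (e.g., on the Open Science Framework), so that anyone downloading the folder can re-run the script and verify the reported numbers.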

I contacted James Thompson to hear if he wants to review, but have not heard from him yet. Perhaps he was busy organizing the London Conference of Intelligence. I will try again.