
[ODP] Discounting IQ’s Relevance to Organizational Behavior: The “Somebody Else’s Problem” in Management Education

#11
(2015-May-19, 00:05:20)Emil Wrote: Most problems I mentioned before have been fixed as far as I can see.

Redundant reference
On my skimming I found an additional problem: the following reference is now redundant. It was cited before, but you forgot to remove it from the reference section. It may be a good idea to go through the references to make sure there are no other unnecessary ones.

Jensen, A., & Figueroa, R. A. (1976). Forward and backward digit span interaction with race and IQ: Predictions from Jensen's theory. Journal of Educational Psychology, 67(6), 882-893.

Table 1
I looked at the data table at the end. Maybe there is something I don't understand, but I don't see how the total for George (et al.) can be greater than the sum of accurate and critical paragraphs. Sometimes the "total" is smaller than the sum, but I take it that is because some paragraphs count as both accurate and critical (?).

Inter-rater reliability
You mention at some point an inter-rater reliability of .86. However, I do not see which data this value was calculated from. This journal has a mandatory open-data policy, so any data used for this study must be published.

The actual paragraphs of text from the textbooks, were they saved? At least the exact text locations must have been recorded somewhere. They should be included in the supplementary material so that anyone wishing to reproduce the authors' judgment calls can do so.

Publishing short text excerpts for the purpose of criticism and the like is within copyright law (fair use).

Statistical analyses
I do not see any mention of which statistical software was used for the analyses. Is there a code file? If the analyses were done with point-and-click software (e.g., SPSS), can the authors upload the output file so that others may verify the reported results against it?

It would be better if the authors could upload the source code file(s) (if any) and data files in a folder, so that anyone else may download the material and re-run all analyses with identical results. I recommend using the Open Science Framework for this. In order for science to be reproducible, a bare minimum step is that researchers with access to the same data should get the same results from running the same analyses. This has been called computational reproducibility, to distinguish it from empirical reproducibility (i.e. the results replicate when a new sample is gathered). http://edge.org/annual-question/2014/response/25340

I contacted James Thompson to hear if he wants to review, but have not heard from him yet. Perhaps he was busy organizing the London Conference of Intelligence. I will try again.


I fixed the references; thanks for catching that.

Regarding the table: accuracy and criticality are independent, so any paragraph can be accurate without being critical, critical without being accurate, both, or neither. For example, George et al. had 4 paragraphs on IQ. Two of the four were accurate about IQ; independently, one of the four was critical of IQ.

I have uploaded the data and attach it here. It shows each paragraph and gives page numbers. It also has the data we used to calculate reliability.

Note, I did the chi-squares by hand (they are easy to do), so I have no code to upload for these. All the data needed to conduct them, however, is in the table.
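For anyone who wants to redo the first of them by hand: it amounts to a goodness-of-fit chi-square against an even split. A minimal sketch (Python here purely for illustration; the counts are the EQ/IQ paragraph totals, 63 and 35):

```python
# By-hand chi-square for the paragraph counts: 63 EQ vs. 35 IQ,
# tested against the null of an even 49/49 split.
observed = [63, 35]
expected = sum(observed) / 2  # 49 under the null

chi_sq = sum((o - expected) ** 2 / expected for o in observed)
print(round(chi_sq, 2))  # 8.0
```

Each cell contributes (14)^2 / 49 = 4, giving the χ2 of 8.00 reported in the paper.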

I'll post the links in a fresh reply containing the newest version of the paper and the paragraph-coding data.

Thanks!
BP
#12
Newest version:

https://www.dropbox.com/s/bstw56hmdem53u...3.odt?dl=0


Data
https://www.dropbox.com/s/c6hvjt6ftw0x6n....xlsx?dl=0
#13
Bryan et al,

Re. count for George
I see. I had misunderstood. So for George there are 4 paragraphs in total, 2 of which were accurate, 1 of which was critical, and 1-2 of which were neither. I had failed to consider the option that paragraphs could be neither.

i.e. vs. e.g.
Quote:We then conducted searches in these books looking for coverage of either “IQ” (i.e., specific mention of g), or “EQ” as a comparison group.

I think you meant "e.g." here, not "i.e.": "e.g." (exempli gratia) means "for example", whereas "i.e." (id est) means "that is".

Analytic replication
I looked at the data but could not figure out where the numbers in columns I and J came from, so I decided to use the data in columns B-E.

I successfully recreated Table 1 from the original data, except for the data about the bonus material, which I don't see in the data file (?).

Statistical tests in detail
I found 4 tests in the paper, all of which are chi-square tests. I used R's prop.test() function.

1. “Nonetheless, when the authors of our nine, sampled textbooks referenced “intelligence,” they devoted nearly twice as many paragraphs to EQ (63) relative to IQ (35), χ2 (1) = 8.00, p = .005."

I guess the null hypothesis is that the proportion is .5 out of the 63 + 35 mentions. It looks like you did not use Yates's correction. I could not immediately determine whether one should use that correction here. http://stats.stackexchange.com/questions...ncy-tables

My result:
Code:
1-sample proportions test without continuity correction

data:  totals.sum[4] out of totals.sum[1] + totals.sum[4], null probability 0.5
X-squared = 8, df = 1, p-value = 0.004678
alternative hypothesis: true p is not equal to 0.5
95 percent confidence interval:
0.5442525 0.7306847
sample estimates:
        p
0.6428571


Exact match.
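Regarding the Yates question above, here is a minimal sketch (Python, just for illustration) of what the correction would do to this first statistic; the result stays significant either way:

```python
# 63 EQ vs. 35 IQ paragraphs, expected 49 each under the null.
# Yates's correction subtracts 0.5 from each |observed - expected|
# before squaring.
expected = 49.0

uncorrected = sum(abs(o - expected) ** 2 / expected for o in (63, 35))
corrected = sum((abs(o - expected) - 0.5) ** 2 / expected for o in (63, 35))

print(round(uncorrected, 2), round(corrected, 2))  # 8.0 7.44
```

So with the correction the statistic drops from 8.00 to about 7.44, which is still well past the .05 threshold for 1 df.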

2. "Likewise, the number of bonus-content items devoted to EQ (11) was over three times that devoted to IQ (3), χ2 (1) = 4.57, p = .0325."

Same test as above. I do not see the bonus data in the data file, but manually entering the counts gives:

Code:
1-sample proportions test without continuity correction

data:  11 out of (11 + 3), null probability 0.5
X-squared = 4.5714, df = 1, p-value = 0.03251
alternative hypothesis: true p is not equal to 0.5
95 percent confidence interval:
0.5241077 0.9242861
sample estimates:
        p
0.7857143


Exact match.

3. "Coverage of EQ was also considerably more accurate (98%, relative to 71% for IQ), χ2 (1) = 16.4, p = .0001."

This looks like a two-sample test of equal proportions.

Code:
2-sample test for equality of proportions without continuity correction

data:  c(totals.sum[2], totals.sum[5]) out of c(totals.sum[1], totals.sum[4])
X-squared = 16.4414, df = 1, p-value = 5.018e-05
alternative hypothesis: two.sided
95 percent confidence interval:
-0.4226538 -0.1170287
sample estimates:
   prop 1    prop 2
0.7142857 0.9841270


Exact match.
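The same value falls out of the pooled-proportion formula directly. A minimal sketch (Python, illustrative; the counts of accurate paragraphs implied by the data are 25/35 for IQ and 62/63 for EQ):

```python
# Two-sample test of equal proportions without continuity
# correction, computed from the pooled proportion.
x1, n1 = 25, 35  # IQ: accurate paragraphs / total (71%)
x2, n2 = 62, 63  # EQ: accurate paragraphs / total (98%)

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
chi_sq = (p1 - p2) ** 2 / (pooled * (1 - pooled) * (1 / n1 + 1 / n2))
print(round(chi_sq, 4))  # 16.4414
```

This reproduces the X-squared = 16.4414 from the prop.test output above.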

4. "We found only one example of an inaccurate claim about EQ (i.e., Hellriegel & Slocum, 2011, although the authors did provide citations supporting their claim). Critical coverage of IQ (46%) was far more frequent relative to that for EQ (13%), χ2 (1) = 13.3, p = .0003."

Same test as above.

Code:
2-sample test for equality of proportions without continuity correction

data:  c(totals.sum[3], totals.sum[6]) out of c(totals.sum[1], totals.sum[4])
X-squared = 13.2629, df = 1, p-value = 0.0002707
alternative hypothesis: two.sided
95 percent confidence interval:
0.1457757 0.5145417
sample estimates:
   prop 1    prop 2
0.4571429 0.1269841


Exact match.

Everything replicated for me, although it took me some time to tidy the data up again. My code and data file are attached to this post if anyone else is curious.


Attached Files
pesta_paper.csv (3.21 KB)
pesta_code.R (2.34 KB)
#14
Thanks for checking our stats.

Because there were so few bonus-content items, I didn't create numerical columns for them; I mentioned them in the "notes" column instead.

Also, I think we did intend "i.e." rather than "e.g.": we wanted to distinguish mention of the general factor specifically from mentions of, say, multiple intelligences.

If you want me to clarify this in a revision, let me know.

Thanks to all who commented here; what's the next step?

Bryan


#15
I have no more comments. I approve.

---

To get published, you need to get 3 approvals. You have 1 now, Dalliard will probably approve as well when he reads the revision. I have emailed James Thompson again, but have yet to hear back from him. If he does not want to review, then one of the other reviewers will do so (Meisenberg, Rindermann, Piffer, Meng Hu, or Fuerst).
#16
(2015-Apr-29, 23:14:54)bpesta22 Wrote: Title

Discounting IQ’s Relevance to Organizational Behavior: The “Somebody Else’s Problem” in Management Education


I apologize for not commenting earlier.

Generally, in analyzing textbook presentations, the article makes an important contribution. Such studies are not uncommon in other fields (e.g., the treatment of race in medical textbooks); they give insight into what students are being taught and, to a lesser extent, into the general opinion in the field. Jensen, in Miele (2002), expressed the view that most discussions of intelligence in psych textbooks are either error-filled or misleading. Unfortunately, as far as I am aware, no published research has formally investigated the matter. One wonders if intelligence is "someone else's problem" across psychology.

Regarding the general topic, there are three distinct issues. They are:
(a) Whether intelligence as a construct is under-researched in IO. This was discussed in a focus article by Scherbaum et al. (2012a) and the commentary thereon.

Hanges et al. (2012b) concluded:

Quote:Both we and the majority of the commentaries clearly agree that intelligence should be a hotter topic than it currently is in the I–O journals. What does the limited publication of this type of research say about our field and its contribution toward understanding one of our most important constructs?

(b) Whether intelligence as an assessment is being under-employed by practitioners.

(c) Whether intelligence as a concept is not being well discussed in classes and in textbooks.

Your concern is, of course, (c). It would be interesting to know how (c) relates to (a) and (b), but I appreciate that that question falls outside the scope of the paper. In general, I feel that this is a worthwhile paper which deserves publication.

I have a few quibbles and comments, however.

(P1) The discussion of FSIQ (a measure/manifest variable), g (a construct/latent variable), and intelligence (a concept which, in IO, either refers to stratum III in the hierarchy of cognitive abilities or to cognitive abilities in general) is less than perfectly clear. This is perhaps unavoidable given the condensed discussion.
This issue does not have bearing on the main point of the paper since the concern is whether FSIQ/GMA/GCA/g is given its due relative to other constructs. I just make note of the matter. I can discuss specific statements and transitions that I found to be less than crystal clear, if the author wishes. To restate, I am just offering my impression here, not requesting any changes.

(P2) The authors state:

Quote:Agreement now exists regarding the factorial structure of human mental abilities: The Cattell-Horn-Carroll (CHC) theory of intelligence (Carroll, 1993; McGrew, 2009).

I am not aware of a survey which shows this. I would rephrase the sentence along the lines of Lievens and Reeve (2012):

Quote:Most experts today accept some form of a hierarchal model, with a single general cognitive ability factor at the apex (referred to as ‘‘g’’ or ‘‘general mental ability’’) in large part due to the exhaustive work of John Carroll (1993).

This is a safer statement, since there are multiple models in which g is at the apex (e.g., VPR), and it is seemingly not established that "most" agree with the CHC one.

(P3) The authors state:

Quote:Consider the following three quotations, each from a different OB textbook:

--Thus, managers must decide which mental abilities are required to successfully perform each job. For example, a language interpreter…would especially need language fluency (Gibson, et al., 2012, p. 90).

--Different jobs require various blends of these [mental] abilities. As some obvious examples, writers have to be adept at word fluency, statisticians have to be good at numerical ability and numerical reasoning, and architects have to be skilled at spatial visualization (Greenberg, 2013, p. 131).

Though misleading, I am not sure that these first two statements are in fact incorrect. Imagine that I stated, "Above and beyond g, managers must decide which specific mental abilities are required to successfully perform each job." Would you say that this was incorrect, given that specific abilities do contribute something above and beyond general ability? If not, then statement (1) (as quoted) is not incorrect either; the same can be said for (2). (I do not have the full quotes, of course.) I agree that the third statement is incorrect because of the condition "as long as". Perhaps you can find more clearly erroneous statements.

(P4) You do not mention if the textbooks are at the undergraduate or graduate level. If the concern is with how IO practitioners -- who would have graduate degrees -- view IQ, this distinction is relevant. Please clarify.

(P5) Regarding the analysis, could you give a couple of examples of paragraphs/excerpts, at least in the supplementary/Excel file?

e.g., example excerpt -- reason coded thusly.

This gives readers/researchers a better sense of how you graded the paragraphs and what you deem to be accurate/critical, etc. This is important, as perspectives may differ. For example, I would not have considered statements (1-2) in (P3) strictly inaccurate (as opposed to incomplete).

Please address P2, P3, P4, P5. I have no other comments. Thank you for your submission.

--John

References:

Hanges, P. J., Scherbaum, C. A., Goldstein, H. W., Ryan, R., & Yusko, K. P. (2012). I–O Psychology and Intelligence: A Starting Point Established. Industrial and Organizational Psychology, 5(2), 189-195.
Lievens, F., & Reeve, C. L. (2012). Where I–O psychology should really (re)start its investigation of intelligence constructs and their measurement. Industrial and Organizational Psychology, 5(2), 153-158.
Miele, F. (2002). Intelligence, race, and genetics: Conversations with Arthur R. Jensen. Westview Press.
Scherbaum, C. A., Goldstein, H. W., Yusko, K. P., Ryan, R., & Hanges, P. J. (2012a). Intelligence 2.0: Reestablishing a research program on g in I–O psychology. Industrial and Organizational Psychology, 5(2), 128-148.
#17

Thanks John. I'll have a full reply later, but I wanted to make this point before heading off to work (regarding "accuracy" / your P3).

A much longer version of this paper was rejected at multiple management education journals. To argue against those quotes, we had the following:

"IQ test scores robustly predict job performance. Indeed, from a scientific perspective, one no longer needs job analysis when predicting job success from IQ test scores (Schmidt & Hunter, 1998). With meta-analysis, the stage was set to falsify the specificity doctrine. After roughly three decades of accumulated research data, Schmidt and Hunter (2004) concluded:

It has been found that specific aptitude tests measure GMA [i.e., g]. In addition to GMA, each measures something specific to that aptitude (e.g., specifically numerical aptitude, over and above GMA). The GMA component appears to be responsible for the prediction of job and training performance, whereas the factors specific to the aptitudes appear to contribute little or nothing to prediction. (p. 167)

Likewise, Gottfredson (2002) summarized this literature thusly:

(1) Specific mental abilities (such as spatial, mechanical or verbal ability) add very little beyond g to the prediction of job performance. g generally accounts for at least 85-95% of a full mental test battery’s (cross-validated) ability to predict performance in training or on the job. (p. 342)

(2) Tests of specific aptitudes…seldom add more than .01 to .03 to the prediction of job performance beyond g…The finding should not be surprising in view of the…“positive manifold.” (p. 350)

(3) The point is not that g is the only mental ability that matters. It is not. Rather, the point is that no other well-studied mental ability has average effect sizes that are more than a tiny fraction of g’s (net of their large g component), and none has such general effects across the world of work. (p. 351)"

In this context, we'd argue that the textbook quotes are indeed inaccurate.

I'm not sure it's worth putting this section back in though unless you find it helpful.


ETA: We also included generic examples of "accurate" and "critical" in the version here:

Examples of accurate / inaccurate coverage of IQ included mention that it is a strong predictor of job performance, versus mention that specific mental abilities predict better. Examples of accurate / inaccurate coverage of EQ included mention of its criterion validity, versus mention that it predicts job performance better than does g.

Given this, would you still prefer we get back into the texts and place specific quotes for some of the items?
#18
While non-GCA abilities generally do not have much predictive power for job performance, ability tilt seems to be a decent predictor of occupations and majors. This perhaps reflects interests more than abilities. Here's a recent paper on the matter:

Coyle, T. R., Snyder, A. C., & Richmond, M. C. (2015). Sex differences in ability tilt: Support for investment theory. Intelligence, 50, 209-220.

As an example where a non-GCA ability was found to have predictive power, see:
Deary, I. J., Strand, S., Smith, P., & Fernandes, C. (2007). Intelligence and educational achievement. Intelligence, 35(1), 13-21.

There, a residual verbal (non-GCA) factor had predictive power of .22 for English but .00 for math.

Still, the predictive power of the GCA factor was ~.69 there, versus .13 for the non-GCA residual/factor, a ratio of about 5.3 to 1.
#19

I don't deny the above, but we're talking specifically about job performance (as opposed to career entry or educational achievement), and the meta-analytic literature there seems clear.

Still, we can tone down the wording regarding "accurate" if you all feel it's appropriate.
#20
I am okay with the wording as it is. I have already approved the paper. In my opinion, peer review should concern itself less with these judgment calls and interpretations and more with the analytic methods. I have covered the analyses in detail and they seem to be correct. The scoring and interpretation of the text passages are more a matter of judgment.

One thing, though: the inter-rater reliability correlation. The raw data used to calculate it are not given. Do you still have them?
 