
[ODP]Putting Spearman’s Hypothesis to Work: Job IQ as a Predictor of Employee Racial

#11
My problem is that Hispanic is a category not mutually exclusive with Black, White, or Asian. The latter three categories, however, are mutually exclusive with each other.

Specifically, in the BLS data set, anyone in the White category, for example, is not also counted in either the Black or Asian category. In the Hispanic category, though, everyone is also counted as some other race (e.g., White and Hispanic).

So Black, White, and Asian (plus miscellaneous other races with small N) sum to 100% of the labor force. Hispanic labor force participation was then coded and reported separately as a subset of that original 100%.

I'm thus really not comfortable coding, reporting, and interpreting data from this fourth group, whose composition is an unknown blend of the other three (race) groups, and whose numbers are also represented as counts in other race categories.

I hope I'm explaining this well,

Bryan
#12
(2016-May-31, 12:49:31)bpesta22 Wrote: My problem is that Hispanic is a category not mutually exclusive with Black, White, or Asian. The latter three categories, however, are mutually exclusive with each other.

I'm thus really not comfortable coding, reporting, and interpreting data from this fourth group, whose composition is an unknown blend of the other three (race) groups, and whose numbers are also represented as counts in other race categories.


Hi Bryan,

My mind must be fuzzy. Maybe Emil can jump in and help explain.

As I noted, "Hispanics" are generally delineated that way. What is somewhat unusual is the delineation of the Black, White, and Asian groups, from which, in this case, Latin American origin people are NOT excluded. But, since you are using simple correlations, why would the atypical delineation of the latter groups matter?

Currently you are using:
e.g., White/(White + Asian + Black)
Which is:
(non-Hispanic White + Hispanic White)/(total population - "other races")

But ideally, if you had the data, you would use:
e.g., non-Hispanic White/(total population - "other races")


And I'm just asking for, additionally:
Hispanic/(White + Asian + Black)
Which is:
Hispanic/(total population - "other races")

The denominator is the same!
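
To make the arithmetic concrete, here is a toy example with made-up percentages (not actual BLS figures):

Code:
# Toy numbers, purely illustrative -- not BLS data.
white <- 79; black <- 12; asian <- 6   # race % of the labor force (the rest is "other races")
hispanic <- 16                         # Hispanic %, overlapping the race categories above

denom <- white + black + asian         # total population minus "other races"
white / denom                          # the share currently used (~0.81 with these toy numbers)
hispanic / denom                       # the requested addition; same denominator (~0.16 here)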
#13
(2016-Jun-01, 07:40:31)Chuck Wrote:
(2016-May-31, 12:49:31)bpesta22 Wrote: My problem is that Hispanic is a category not mutually exclusive with Black, White, or Asian. The latter three categories, however, are mutually exclusive with each other.

I'm thus really not comfortable coding, reporting, and interpreting data from this fourth group, whose composition is an unknown blend of the other three (race) groups, and whose numbers are also represented as counts in other race categories.


Hi Bryan,

My mind must be fuzzy. Maybe Emil can jump in and help explain.

As I noted, "Hispanics" are generally delineated that way. What is somewhat unusual is the delineation of the Black, White, and Asian groups, from which, in this case, Latin American origin people are NOT excluded. But, since you are using simple correlations, why would the atypical delineation of the latter groups matter?

Currently you are using:
e.g., White/(White + Asian + Black)
Which is:
(non-Hispanic White + Hispanic White)/(total population - "other races")

But ideally, if you had the data, you would use:
e.g., non-Hispanic White/(total population - "other races")


And I'm just asking for, additionally:
Hispanic/(White + Asian + Black)
Which is:
Hispanic/(total population - "other races")

The denominator is the same!


He is not using ratios; he is using the plain percentages.

"White%" not for instance "White%/Black%".
#14
(2016-Jun-01, 10:46:09)Emil Wrote: He is not using ratios; he is using the plain percentages.

"White%" not for instance "White%/Black%".


Emil,

The raw %s are out of the total population:

%Hispanic/total population
%White*/total population (*White = White_Hispanic + White_nonHispanic)

So one would still have the same denominator, allowing for statistical comparability.

Regardless, I won't push the matter. Doing so would be akin to requesting that the authors also look at, e.g., gender differences by job cognitive complexity and people orientedness. If someone wants to conduct a follow-up and expand the scope of analysis, they can.

Bryan,

Did you post an updated copy somewhere? I would like to peruse the rewritten sections.
#15
(2016-May-31, 00:49:59)Chuck Wrote:
(2016-May-30, 23:33:03)Emil Wrote: Asians, as a group, do have a genetic basis


No.

"Asian", in this context, is a class (one which includes: South Asians and East Asians) in a socio"racial" classification, one which has the following other divisions: Blacks and "Whites" (meaning: West Eurasians). By what set of genes could one arrange (classify) almost all of our "Asians" into their census class as opposed to that of Blacks and West Eurasians?

(One would have to reduce the class k to 2: (a) Blacks and (b) (East + South) + West Eurasians (i.e., out-of-Africans).)


After internal discussion, I now agree with John. Indians go into the West Eurasian cluster and thus cannot be grouped with the other Asians genetically without violating the rules of cladistics.
#16
Bryan,

Quotes are yours unless otherwise specified.

Quote:REPLY: We did not cross-check jobs, but instead relied on the Wonderlic’s ability and expertise at generating estimated IQs for different jobs. Also, we didn’t have older Wonderlic data, and suspect—as you point out—that matching jobs might be frustrating.

Older Wonderlic data are given in Gottfredson's classic 1997 paper. It was in my link.

[Attached figure: Old Wonderlic data from Gottfredson.]

Quote:REPLY: Would it be ok to include “(but, see, <your link>)” immediately after the word “version” in our quote above? If not, we can delete this entire section.

No. Please see the discussion about your previous submission. The BDS does not have twice the g-loading of FDS. Dalliard and I compiled several studies showing this, yet you made the exact same claim in your next submission. I find that odd.

Emil, previously Wrote:This (the slightly higher mean IQ among the occupations) corresponds to the low correlation found between being out of a job and IQ at the individual level. You could back-estimate this correlation using this mean. Just a minor check.


Quote:REPLY: Not sure how to do this. If desired for our paper, could you please advise?

One could use a simple cutoff-based model, which assumes that everybody above a certain cutoff is in work and everybody below it is not. One could also use a more gradual transition, but I didn't have a ready-made function for this.

These are aggregated data, so one should use the weighted mean IQ, weighted by occupation population. These weights are not given in your data file, so I could not do this.

I tried with the given number, but this produced a correlation of .67, which is much too strong. So my proposed method doesn't work. Just ignore this.

(code for this is found in my R code)
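
For illustration, a rough sketch of the cutoff idea (an illustration only, not necessarily the exact function in the replication code; it assumes IQ ~ N(100, 15) and treats in/out of work as a hard split at the cutoff):

Code:
# Sketch of a cutoff-based back-estimation; assumes IQ ~ N(100, 15).
back_estimate_r <- function(worker_mean, mu = 100, sigma = 15) {
  # Mean IQ of those above a given cutoff (truncated normal mean).
  trunc_mean <- function(cut) {
    z <- (cut - mu) / sigma
    mu + sigma * dnorm(z) / (1 - pnorm(z))
  }
  # Find the cutoff whose above-cutoff mean matches the observed worker mean.
  cut <- uniroot(function(x) trunc_mean(x) - worker_mean,
                 interval = c(mu - 4 * sigma, mu + 3 * sigma))$root
  z <- (cut - mu) / sigma
  p <- 1 - pnorm(z)                                   # proportion in work under the model
  nonworker_mean <- mu - sigma * dnorm(z) / pnorm(z)  # mean IQ below the cutoff
  # Point-biserial correlation between the in-work indicator and IQ.
  (worker_mean - nonworker_mean) / sigma * sqrt(p * (1 - p))
}

back_estimate_r(101.5)  # e.g., a worker mean 1.5 IQ points above the population mean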

Emil, previously Wrote:It would be better to just average the BLS values across years, no?


Quote:REPLY: Correlations seem more intuitive? The rank ordering of jobs by ethnic composition is pretty much the same across a two year span.

You misunderstood. I proposed that you use the average of the BLS 2014 and 2012 values to remove some of the slight 'measurement error'.

I did this in my replication.
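
In R, the averaging is just something like the sketch below (the year-specific column names are assumptions, not the ones in the actual files):

Code:
# Sketch only -- the year-specific column names here are illustrative.
races <- c("white", "black", "asian", "other")
for (r in races) {
  d[[r]] <- rowMeans(d[, paste0(r, c("_2012", "_2014"))])  # mean of the two BLS years
}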

Quote:REPLY: In addition to your hypotheses, we wonder about a potential floor effect on the percentage of Asians in each job. We’d prefer not to complicate the manuscript by addressing why the %Asian correlation was lower than might be expected (it was nonetheless non-trivial and significant). This might be an interesting question for an additional study, but the present study just predicted a significant/non-zero correlation between %Asian and IQ. Although the correlation was smallish, its existence was predicted by Spearman’s hypothesis (and testing this hypothesis was the purpose of our study). Also, since our data are just for the USA, wouldn’t a group six points above average IQ produce smaller over/under-representation relative to a group 15 points below average IQ?

1)
Not controlling for known confounds (interests in this case), for which you already have the data, is not really defensible. After all, you are interested in the effect of cognitive ability itself, not whatever it happens to be correlated with. If you use correlations, you will get a confounded estimate of the influence of cognitive ability itself.

2)
Merely looking for p < alpha vs. p > alpha results is not a good way of doing science. What matters is whether the numbers fit in size. Please see: http://www.phil.vt.edu/dmayo/personal_we...esting.pdf

3)
By your reasoning, White% should produce the weakest correlation, yet it does not.

Quote:REPLY: Our correlational values are effect sizes, we think. Schmidt and Hunter’s paper was a meta-analysis and so reported effect sizes. Beyond reporting correlations, I’m not sure that Gottfredson calculated effect sizes. If I’m misinterpreting what you’re asking for here, please let me know.

Your correlations are effect sizes, yes. However, I asked for "the effect sizes of the prior research so readers can see whether the effect sizes are similar".

You present some new results. What the readers need to know is whether they fit in size with the previous results. For instance, if you find r = .20 and previous studies have found r = .95, something is wrong somewhere.

The data variable (complexity) was correlated with mean IQ at .86 in your study. You cite:

Gottfredson, L. S. (1986). Occupational aptitude patterns map: Development and implications for a theory of job aptitude requirements (Monograph). Journal of Vocational Behavior, 29, 254-291.

Gottfredson, L. S. (2003). g, jobs, and life. In H. Nyborg (Ed.), The scientific study of general intelligence: Tribute to Arthur R. Jensen (pp. 293-342). New York: Pergamon.

However, I could not find any complexity x mean IQ correlation in these papers. She does give job mean IQs and presents factor analysis results of job attributes, but does not appear to actually correlate them. Maybe I missed the number somewhere?

--

Data analysis
I replicated the authors' analysis and also ran the required additional multiple regression analyses. I furthermore fit some path models, which allow for more precise causal modeling.

I used the average race% values from the BLS (2012 and 2014) to increase reliability. This results in the following correlations:

Code:
          iq white black asian other  data people things
iq      1.00  0.48 -0.61  0.23 -0.51 -0.86  -0.53   0.04
white   0.48  1.00 -0.85 -0.38 -0.49 -0.52  -0.37  -0.04
black  -0.61 -0.85  1.00 -0.14  0.47  0.66   0.30   0.09
asian   0.23 -0.38 -0.14  1.00 -0.14 -0.21   0.13  -0.07
other  -0.51 -0.49  0.47 -0.14  1.00  0.45   0.24  -0.04
data   -0.86 -0.52  0.66 -0.21  0.45  1.00   0.50  -0.01
people -0.53 -0.37  0.30  0.13  0.24  0.50   1.00  -0.24
things  0.04 -0.04  0.09 -0.07 -0.04 -0.01  -0.24   1.00


Note that I have included "other", which is the residual group. This is some mix of Native Americans, Hispanics, mixed-race persons, and so on. The mean IQ of this group is usually found to be somewhat below the population mean (around 95), so one would expect a negative correlation, which is also found.

John prefers the rationized group proportions (for lack of a better term). In this way the influence of the "other" group is removed. So, for Whites this is simply White% / (White% + Black% + Asian%), and the same for the others. These correlations are:

Code:
               iq white_ratio black_ratio asian_ratio  data people things
iq           1.00        0.44       -0.61        0.22 -0.86  -0.53   0.04
white_ratio  0.44        1.00       -0.84       -0.42 -0.49  -0.35  -0.04
black_ratio -0.61       -0.84        1.00       -0.14  0.66   0.30   0.09
asian_ratio  0.22       -0.42       -0.14        1.00 -0.20   0.13  -0.07
data        -0.86       -0.49        0.66       -0.20  1.00   0.50  -0.01
people      -0.53       -0.35        0.30        0.13  0.50   1.00  -0.24
things       0.04       -0.04        0.09       -0.07 -0.01  -0.24   1.00


So this correction made little difference and seems unnecessary.
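
For completeness, a minimal sketch of how these renormalized shares and their correlations can be computed (assuming a data frame d with the averaged race% columns; the full code is in the replication folder):

Code:
# Sketch only -- assumes d already holds the averaged race percentages.
race_total    <- d$white + d$black + d$asian
d$white_ratio <- d$white / race_total
d$black_ratio <- d$black / race_total
d$asian_ratio <- d$asian / race_total

# Correlation matrix as in the table above.
round(cor(d[, c("iq", "white_ratio", "black_ratio", "asian_ratio",
                "data", "people", "things")]), 2)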

Of more interest are multiple regressions where we take the other job data into account. I used cognitive ability, people, and things to predict each of the race%.
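
Each of these is, in essence, a standardized regression along the lines of the sketch below (column names assumed; the actual helper function also does the 10-fold cross-validation reported in the tables):

Code:
# Minimal sketch: standardized regression of White% on job characteristics.
d_std <- as.data.frame(scale(d[, c("white", "iq", "people", "things")]))
fit   <- lm(white ~ iq + people + things, data = d_std)
round(coef(summary(fit)), 2)   # standardized betas and their SEs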

For Whites:

Code:
$coefs
        Beta   SE CI.lower CI.upper
iq      0.39 0.09     0.20     0.57
people -0.19 0.10    -0.37     0.00
things -0.09 0.08    -0.26     0.07

$meta
            N            R2       R2 adj. R2 10-fold cv
       124.00          0.26          0.24          0.19


Blacks:

Code:
$coefs
        Beta   SE CI.lower CI.upper
iq     -0.60 0.09    -0.77    -0.43
people  0.02 0.09    -0.16     0.19
things  0.12 0.07    -0.03     0.26

$meta
            N            R2       R2 adj. R2 10-fold cv
       124.00          0.38          0.36          0.34


Asians:

Code:
$coefs
       Beta   SE CI.lower CI.upper
iq     0.41 0.10     0.21     0.61
people 0.34 0.10     0.14     0.55
things 0.00 0.09    -0.17     0.17

$meta
            N            R2       R2 adj. R2 10-fold cv
       124.00          0.14          0.12          0.03


Notice the very low cross-validated R2. This is suspicious.

Others:

Code:
$coefs
        Beta   SE CI.lower CI.upper
iq     -0.54 0.09    -0.72    -0.36
people -0.05 0.10    -0.24     0.14
things -0.03 0.08    -0.19     0.13

$meta
            N            R2       R2 adj. R2 10-fold cv
       124.00          0.27          0.25          0.24


So, to recap, the standardized cognitive ability betas for each race%, with the other job predictors controlled, were:
White: .39
Black: -.60
Asian: .41
Other: -.54

Thus, the Asian result is not really an outlier once one takes into account their higher preference for working with people (people beta = .34). Note that White has a people beta of -.19, but it's not a reliable finding (cf. the confidence interval). Still, this means that Whites and Asians are apparently quite different on this preference.

Path models
Here one can stipulate the causal order of variables (kind of). I specified that the interests are caused by cognitive ability and that the race% are caused jointly by cognitive ability requirement and interests.
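
As a rough sketch, the models are of the following form (shown here in lavaan with assumed variable names; the actual models are in the replication code):

Code:
# Sketch of the stipulated causal ordering, using lavaan; variable names assumed.
library(lavaan)

model <- "
  # interests regressed on cognitive ability
  people ~ iq
  things ~ iq
  # race% regressed on cognitive ability and interests
  white  ~ iq + people + things
"
fit <- sem(model, data = d, std.ov = TRUE)  # standardize observed variables
summary(fit, standardized = TRUE)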

Here are the plots:

[Path model plots for White, Black, Asian, and Other attached.]

Looking at these, we see that the cognitive ability paths are consistent with the other results. Direct paths: .38, -.51, .23, -.51. For Asian and White there are indirect paths in the opposite direction through the people interest, as seen before.

--

Code for my replication is in the OSF folder "Emil_replication".

https://osf.io/pxmjc/files/
#17
(2016-Jun-02, 00:54:25)Emil Wrote: Bryan,

Quotes are yours unless otherwise specified.

Quote:REPLY: We did not cross-check jobs, but instead relied on the Wonderlic’s ability and expertise at generating estimated IQs for different jobs. Also, we didn’t have older Wonderlic data, and suspect—as you point out—that matching jobs might be frustrating.

Older Wonderlic data are given in Gottfredson's classic 1997 paper. It was in my link.

[Attached figure: Old Wonderlic data from Gottfredson.]


I cannot get back to revising this until the weekend (Chuck: I was waiting to write the formal revision until more reviews were in).

To clarify just the above, are you asking me to use my fingernail on the chart to estimate the mean IQs for each job in the figure, and then attempt matching them to my data set of 100+ jobs, just to see what the correlation might be?

I see also the figure says "reprinted by permission of the publisher".
#18
(2016-Jun-02, 03:58:29)bpesta22 Wrote: I was waiting to write the formal revision until more reviews were in


Take your time.

Quote:[A]re you asking me to use my fingernail on the chart to estimate the mean IQs for each job in the figure, and then attempt matching them to my data set of 100+ jobs...

This image crossed my mind, too.

Bryan,

I have some suggestions for a possible future paper, which could build off of this and be published in another journal.

(1) Approach this from the reverse direction and try to explain the employment differentials (e.g., Asian/White) using cognitive complexity and work-type orientation. That would involve just a few extra computations.

(2) There is a table, "Median usual weekly earnings of full-time wage and salary workers by occupation". Spearman's hypothesis might also explain pay inequality within job types. I recall reading in Roth et al. (not sure which paper) that there are ability differences between groups within employment categories by cognitive complexity. Thus, one might expect cognitive complexity to explain variance in pay gaps (see: http://www.bls.gov/opub/ted/2011/ted_20110914.htm).

(3) "Labor Force Characteristics by Race and Ethnicity" seems to be yearly; thus one could incorporate more years; for ease, one could write a pithy r-code and use a reader program to scan PDFs and spit the data into an excel file. Emil and I did this for an unpublishable analysis of National Merit/Scholarship data. One could probably use a modified version of that code.

(4) Sex-by-occupation data seem to be available online, though I don't know if the jobs match. See here: http://www.bls.gov/cps/cpsaat11.htm ("Employed persons by detailed occupation, sex, race, and Hispanic or Latino ethnicity"). One could run a version of (1) for sex.

Just some thoughts.
#19
(2016-Jun-02, 03:58:29)bpesta22 Wrote:
(2016-Jun-02, 00:54:25)Emil Wrote: Bryan,

Quotes are yours unless otherwise specified.

Quote:REPLY: We did not cross-check jobs, but instead relied on the Wonderlic’s ability and expertise at generating estimated IQs for different jobs. Also, we didn’t have older Wonderlic data, and suspect—as you point out—that matching jobs might be frustrating.

Older Wonderlic data are given in Gottfredson's classic 1997 paper. It was in my link.

[Attached figure: Old Wonderlic data from Gottfredson.]


I cannot get back to revising this until the weekend (Chuck: I was waiting to write the formal revision until more reviews were in).

To clarify just the above, are you asking me to use my fingernail on the chart to estimate the mean IQs for each job in the figure, and then attempt matching them to my data set of 100+ jobs, just to see what the correlation might be?

I see also the figure says "reprinted by permission of the publisher".


Bryan,

Yes, that's the idea. It would be another validity check of your mean IQs by job. Of course, you may find that the jobs do not match very well and so this test is not possible. Gottfredson's list has a lot fewer jobs than your list, I think.

Due to the litigious nature of the US, many people ask for permission where none is needed. Reprinting a figure is clearly fair use.
#20
(2016-Jun-02, 04:30:19)Chuck Wrote:
(2016-Jun-02, 03:58:29)bpesta22 Wrote: I was waiting to write the formal revision until more reviews were in


Take your time.

Quote:[A]re you asking me to use my fingernail on the chart to estimate the mean IQs for each job in the figure, and then attempt matching them to my data set of 100+ jobs...

This image crossed my mind, too.


I guess my problem is that I don't see the value of finger-nailing Wonderlic's 1992 IQ scores by job to see how they correlate with Wonderlic's 2002 IQ scores by job.

Quote:Bryan,

I have some suggestions for a possible future paper, which could build off of this and be published in another journal.

(1) Approach this from the reverse direction and try to explain the employment differentials (e.g., Asian/White) using cognitive complexity and work-type orientation. That would involve just a few extra computations.

(2) There is a table, "Median usual weekly earnings of full-time wage and salary workers by occupation". Spearman's hypothesis might also explain pay inequality within job types. I recall reading in Roth et al. (not sure which paper) that there are ability differences between groups within employment categories by cognitive complexity. Thus, one might expect cognitive complexity to explain variance in pay gaps (see: http://www.bls.gov/opub/ted/2011/ted_20110914.htm).

(3) "Labor Force Characteristics by Race and Ethnicity" seems to be yearly; thus one could incorporate more years; for ease, one could write a pithy r-code and use a reader program to scan PDFs and spit the data into an excel file. Emil and I did this for an unpublishable analysis of National Merit/Scholarship data. One could probably use a modified version of that code.

(4) Sex-by-occupation data seem to be available online, though I don't know if the jobs match. See here: http://www.bls.gov/cps/cpsaat11.htm ("Employed persons by detailed occupation, sex, race, and Hispanic or Latino ethnicity"). One could run a version of (1) for sex.

Just some thoughts.

Thanks Chuck,

I think there's a possible research stream in these and related data. I'm especially interested in potential sex differences by job IQ, and then in pay, but I haven't had time to do anything with it.

I realize there’s many additional analyses that could be conducted, together with extra columns of data that could be coded. I appreciate the additional analyses Emil provided here, but including them would make this a paper I didn’t intend to write (unless a change is required because my method was suspect). So, I’m trying to determine where compliance with a reviewer comment is necessary from a basic science point of view, versus cosmetic and/or an incremental change to the paper's value (at the expense of its brevity, which I see as a strength).

Could we perhaps summarize (beyond the revisions I already agreed to make in my Word-file replies) what I must do next? I’m trying to avoid both blatant disregard for the scientific method and being the archeologist studying his shovel.
 