The Predictive (In)Validity of IQ

Whenever the concept of IQ comes up on the internet, you will inevitably witness an exchange like this:

Person 1: IQ is useless, it doesn’t mean anything!

Person 2: IQ is actually the most successful construct psychology has ever made: it predicts everything from income to crime.

On some level, both of these people are right. IQ is one of the most successful constructs that psychology has ever employed. That’s an indictment of psychology, not a vindication of IQ.

Income and Wealth

The most often cited outcomes that IQ is said to predict are income and wealth. It is purported that the relationship between income and IQ is not only causal, but quite strong in comparison to other social science predictors.

However, the relationship between income and IQ remains quite an open question, given publication bias (Ferguson & Brannick 2012) and improper statistical techniques (Berk 2004; Micceri 1989; Turvey 1960). Still, there are a few parts of the literature we can examine to get a good idea of the type of relationship that is actually out there in the world.

First, we should review the psychometric side of the literature. Strenze (2007) reports a meta-analysis of the relationship between intelligence and socioeconomic success, with correlations for income ranging from .15 to .25. Bowles et. al (2001) reported a mean coefficient of .15. Zagorsky (2007) reports fairly small regression coefficients after controlling for confounds, but larger correlations (r≅.30 for income, r≅.15 for wealth). However, when one looks at the cloud of data points, we would all be right to be skeptical that any information can be extracted.


We could also think that such a relationship may only be present in certain sections of the distribution (Taleb 2019).

However, in light of the statistical issues mentioned above, it is worth reviewing the critical literature. Some research has shown associations between IQ and income and wealth that are approximately zero. For instance, Heineck & Anger (2010) use German panel data and estimate regression coefficients of cognitive ability on wages of about 0.02 for males, and figures not different from 0 for females (Table 1). Hauser (2010) reports no effect of intelligence on income net of education, so any purported effects have to be mediated through education, a highly bureaucratic and credential-laden institution. Borghans et. al (2016) reports that IQ explains only ~2.5% of the variance in income, indicating a correlation of about .15 (fig 4). Finally, Taleb (2019) reports relationships between IQ and wealth and income with R^2 values of about 0.01 to 0.02.
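
Because the studies above report effect sizes on different scales (some correlations, some coefficients of determination), it helps to put them on a common footing: for a single predictor, r = √R². A minimal sketch using the figures cited above:

```python
import math

# Convert the reported coefficients of determination (R^2) to correlations.
# For a single predictor, r = sqrt(R^2).
reported_r2 = {
    "Borghans et. al 2016 (income)": 0.025,
    "Taleb 2019 (low end)": 0.01,
    "Taleb 2019 (high end)": 0.02,
}

for study, r2 in reported_r2.items():
    print(f"{study}: R^2 = {r2:.3f} -> r = {math.sqrt(r2):.2f}")
# Borghans et. al 2016 (income): R^2 = 0.025 -> r = 0.16
# Taleb 2019 (low end): R^2 = 0.010 -> r = 0.10
# Taleb 2019 (high end): R^2 = 0.020 -> r = 0.14
```

Even the largest of these, once converted, sits at the bottom of the range typically cited for social science predictors.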

Even in the research where associations are moderate, there are reasons to believe these are not sociologically-independent effects, but are highly contingent on the social organization of society. The use of tests similar to IQ tests such as SAT scores, ACT scores, GRE scores, etc, signals alleged information to employers who can then act upon that information and make decisions about wages. This induces a sociologically-mediated relationship between income/wealth and IQ (Byington & Felps 2010) that does not support the idea that IQ is a meaningful predictor of wealth, but a self-fulfilling one.

Grades/Academic Achievement

The history of IQ is long and sordid (Murdoch 2007), but its origins, surprisingly, arise in the context of educational psychology. Alfred Binet created IQ tests to help identify students struggling in school so that they could receive more attention and help; that his ideas have been twisted to justify the segregation of individuals into schooling ‘tracks’ based on their IQ scores is an ironic twist (Williamson 2018). Regardless, the relationship between grades and IQ (Roth et. al 2015) [1] is much less than it seems, because it is, in essence, tautological (Baird 1985; McClelland 1973; Richardson 2017). IQ tests and school tests (which inform grades) share the same kind of content, meaning that they correlate not because they are independent constructs with causal influences on each other, but because they both—in part—measure the same thing. Moreover, research using more controls has indicated that covariance with traits like personality and grit may spuriously inflate the variance attributable to IQ (Borghans et. al 2016; Duckworth & Seligman 2005; Duckworth et. al 2011; Duckworth et. al 2012; McIntosh & Munk 2014). Even more, it has been shown that the relationship between IQ and grades is dwarfed by that of teachers’ judgments of academic achievement [2] (Hodge & Coladarci 1989).

There are also reasons to believe that the relationship between IQ scores and grades may be socially and culturally mediated, given the fact that the relationship is significantly attenuated in other countries (Ogunlade 1978) [3].

Crime and Delinquency

The relationship between crime and IQ has been long debated and discussed in the sociology literature (Hirschi & Hindelang 1977). The literature generally shows a relationship, though there are reports here and there that do not (McCartan and Gunnison 2004; Umbrasas 2018; Wallinius et. al 2019).

Lynam et. al (1993) lists three possible explanations for the relationship that have been put forth by theorists throughout the literature:

  1. A spurious relationship caused by a third variable correlated with both IQ and delinquency/crime such as:
    1. Differential detection by the level of IQ; low IQ youth have higher reported rates of delinquency because they are more likely to be caught by the authorities (Feldman 1977; Stark 1975; Sutherland 1931).
    2. Confounding by socioeconomic status and other structural variables (Pfohl 1985).
    3. Or confounding by some other test-related variables like test motivation (Tarnopol 1970).
  2. A causal relationship from IQ to delinquency/crime such as:
    1. IQ causing academic performance, which is then related to delinquency through its associations with social bonds (Maguin & Loeber 1996; McGloin et. al 2004; Ward & Tittle 1994).
    2. Some sort of biological relationship between intelligence and crime, wherein smaller brains are both less intelligent and more prone to committing crime (Ellis & Walsh 2002), or a hormonal mediation (Ellis 2005), or r/K life history theory (Rushton & Whitney 2002).
    3. Some sort of evolutionary relationship (Kanazawa 2010).
  3. A causal relationship from delinquency/crime to IQ such as:
    1. The dangerous lifestyle delinquents engage in causes them to have lower IQs (Hare 1984; Shanok & Lewis 1981).

Differential Detection

Moffitt & Silva (1988) claim that the differential detection hypothesis is not supported by the data, but more recent and representative data show that it is supported to a certain extent (i.e., it can account for part of the relationship) (Yun & Lee 2013), a finding which has been replicated with more controls (Boccio et. al 2018; Yun et. al 2013). Moreover, in a reanalysis of the National Longitudinal Study of Youth in a review of Murray and Herrnstein’s The Bell Curve, Cullen et. al (1997) report that differential detection does occur in this cohort.

Third Variable Confounding

The relationships between lead and intelligence (Reyes 2012) and between lead and crime (Aizer & Currie 2017; Bellinger 2008; Marcus et. al 2010; Nevin 2007; Olympio et. al 2009; Reyes 2014; Reyes 2015; Stretesky & Lynch 2004) have both been well established (Brady 1993).

Other environmental exposures like pollution have also been posited to contribute to both IQ (Zhang et. al 2018) and crime (Herrnstadt & Muehlegger 2015), and there have been demonstrated moderators of the relationship (Bellair & McNulty 2009).

Other research has demonstrated that following the inclusion of a more robust set of structural controls, IQ contributes to less than 5% of the variation in delinquency (Menard & Morse 1984). Moreover, IQ is usually one of the smallest contributors to overall variance, explaining less than 1% in many meta-analyses (Cullen et. al 1997).

Even more, other research has also shown a lack of a longitudinal correlation following the adjustment of the relationship for confounding factors (Fergusson, Horwood & Ridder 2005) [4], indicating that more research should be done to more robustly test hypotheses of confounding.

Job Performance

Another variable commonly cited as evidence for the predictive validity of IQ is its relationship with job performance [5] [6] (Neisser et. al 1996). The relationship is typically cited in the range of 0.3 to 0.5 in most meta-analyses [7], indicating that about 9-25% of the variance in ‘job performance’ [8] would be explained by IQ scores [9] (Hunter & Hunter 1984; Neisser et. al 1996). However, there are several issues with the reporting of these correlations. The first is that most are upward adjustments from true correlations ranging from .10 to .20 [10] (Richardson & Norgate 2015). Psychometricians have built a Leaning Tower of Pisa of adjustments that allows them to boost correlations to absurd figures [11], such as corrections for reliability/measurement error [12] and restriction of range [13]. The true figures for the relationship between IQ tests and job performance range from r=0.04 (explaining 0.16% of the variance) to r=0.1 (explaining 1% of the variance), indicating very small predictive validity of IQ overall [14] [15] (Richardson & Norgate 2015). Research is often selectively interpreted and reported, meaning that the so-called ‘common knowledge’ or ‘consensus’ that there is a meaningful relationship between IQ tests and job performance should not be taken seriously [16]. Other research shows that the relationship is contingent upon other characteristics and can reverse in many circumstances [17] (Verbeke et. al 2008). Moreover, whatever correlations remain can be explained as the result of socially contingent institutional practices (Byington & Felps 2010) or of common confounds like education (Ritchie et. al 2018).
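
To see how the stacked adjustments work, here is a minimal sketch of the two standard psychometric corrections named above (Spearman’s correction for attenuation and the Thorndike Case II range-restriction correction). The reliability values and SD ratio below are hypothetical, chosen only to illustrate how an observed correlation of .15 can be roughly doubled:

```python
import math

def correct_attenuation(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation: divide out the unreliability of both measures."""
    return r_obs / math.sqrt(rel_x * rel_y)

def correct_range_restriction(r_obs, u):
    """Thorndike Case II correction; u = SD(unrestricted group) / SD(restricted group)."""
    return r_obs * u / math.sqrt(1 + r_obs ** 2 * (u ** 2 - 1))

# Hypothetical inputs: observed validity .15, test reliability .90,
# supervisor-rating interrater reliability .52, and an assumed SD ratio of 1.5.
r = 0.15
r = correct_attenuation(r, rel_x=0.90, rel_y=0.52)  # ~0.22
r = correct_range_restriction(r, u=1.5)             # ~0.32
print(round(r, 2))  # 0.32
```

Each step is mathematically defensible only if its assumptions hold (known reliabilities, direct selection on the predictor); the criticism above is precisely that these assumptions are rarely tested before the corrections are applied.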

Occupational Attainment

Another alleged relationship is between IQ scores and one’s occupational attainment (Gottfredson 1998; Hegelund et. al 2018). This, however, has been shown to operate entirely through the education system (Bajema 1968). Hauser (2010) has examined the claims of IQists like Jensen with respect to the relationship between occupational attainment and IQ and found their evidence lacking, their claims contradicted, and their predictions falsified. The similarity between IQ tests and grades (see above) produces an almost self-fulfilling relationship between the two; getting good grades is used as a means of stratifying individuals into social classes and occupations (Byington & Felps 2010; Richardson & Norgate 2015). Moreover, the relationships that do exist involve feedback effects that magnify the apparent effect of intelligence on occupational attainment far beyond its meager role: the fact that cognitive environments affect continual cognitive development has long been noted (Dickens & Flynn 2001; Flynn 2007).

Health

A burgeoning outgrowth of differential psychology and psychometrics into the health sciences, “cognitive epidemiology” has advocated the view that there is some association between IQ and general health and health behaviors (Deary & Batty 2007). However, there has not been much robust research showing that these relationships exist, that (to the extent they exist) they are the result of IQ rather than confounds, or how contingent they are upon certain social formations and institutions. For instance, Modig & Bergman (2012) failed to find an association between IQ scores at ages 10, 13 and 15 and health variables later in life. Hemmingsson et. al (2009) found the association between IQ and mortality disappears after controlling for attained socioeconomic status in younger males, Hemmingsson et. al (2007) replicated the attenuation for cardiovascular diseases, and Kilgour et. al (2010) found that using finer metrics of socioeconomic status attenuates or removes the relationship. Even advocates of cognitive epidemiology are forced to admit that the association is limited to specific subgroups, is moderated and mediated by many variables, and is not associated with many specific health behaviors (Hagger-Johnson et. al 2010). So far, two Mendelian randomization studies have failed to find associations between cognitive ability and physical health (Davies et. al 2019; Hagenaars et. al 2017) or smoking (Sanderson et. al 2018). For example, Davies et. al (2019) noted that “There was little evidence of substantial direct effects of intelligence on any outcome except negative effects on moderate and vigorous physical activity”. Moreover, other research has shown that the relationship is quantitatively small, overwhelmed by other factors, and driven by the left tail of the distribution (Öhman 2015), and that the relationships are inconsistent at best (Wallin et. al 2014).

However, most statistical epidemiological analyses suffer from insufficient control of covariates, poorly specified models, and other statistical issues like collider bias (Richardson et. al 2019), meaning even the positive results for the association between intelligence and mortality cannot be taken for granted.

Other Outcomes

The relationship between IQ and risk aversion is entirely spurious (Andersson et. al 2016). The relationship between abortion numbers and cognitive ability also does not replicate (Woodley of Menie, Sänger & Meisenberg 2017), nor does the alleged relationship between intelligence and age at first intercourse (Garrison & Rodgers 2016).

Demonstrating Predictive Validity

If IQists want to convince the skeptics that IQ plays an important role in social outcomes, then they’ll have to do a few things.

  1. Show that their metrics of IQ are associated with the outcomes in the first place, using proper statistical techniques.
  2. Show that the metrics of IQ are substantially associated with the outcomes, without using an endless list of statistical massaging techniques to boost the correlation.
  3. Show that basic confounds do not cause the relationship (fine indices of socioeconomic status, other psychology variables, etc).
  4. Demonstrate causality using robust techniques (Mendelian randomization, instrumental variables, regression discontinuity, differences in differences, etc).
  5. Identify the mechanism that connects IQ scores and the social outcome.
  6. Demonstrate that the ergodicity assumption holds (Fisher et. al 2018).

Thus far, not a single one of the outcomes IQ is allegedly associated with has passed step 2 (very few have passed step 1), let alone all 6.


[1] We should note that this “meta-analysis” is exactly the kind of paper mentioned above that uses statistically inappropriate techniques to boost the correlation to the maximum it can be. For one, corrections for “unreliability” and “measurement error” are unjustified in virtue of the fact that classical test theory is false (Schonemann 1996b), and unreliability does not always reduce correlations (Nimon et. al 2012; Osburn 2000; Seymour 1987; Wigley III 2013; Winne & Belfry 1982). Moreover, they used corrections for range restriction without testing the necessary (and almost always false) assumptions (Schonemann & Heene 2009). Even more egregiously, the paper performs an inappropriate meta-analysis procedure wherein extremely heterogeneous results (grade metrics, intelligence tests, cultural background, age group, etc. all varying) are aggregated into a single meta-analytic effect-size estimate following corrections for alleged artifacts, and then boosted even further using trim-and-fill, which is especially inappropriate in the presence of between-study heterogeneity (Peters et. al 2007).

[2] See also de Boer et. al (2010); Bothner et. al (2011); Fischbach et. al (2013); Kuklinski & Weinstein (2003); Lovaglia et. al (1998); Rubie-Davies et. al (2014); McKown & Weinstein (2008).

[3] See also Wicherts et. al (2010).

[4] The paper did find a longitudinal association between IQ and educational attainment and income, but we should note a few things. First, the betas all decreased following adjustment for confounds, and the welfare-duration outcome became insignificant. Moreover, their longitudinal cohort started at ages 8-9, which is already considerably into the academic pipeline, meaning the educational confound may operate in earlier years, where early performance helps carve out a child’s trajectory. Finally, it is unclear how to convert the observed betas into correlations or coefficients of determination to assess the size of the relationship, given that they do not report the standard deviations for their IQ metric or the socioeconomic outcomes.
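
The last point can be made concrete: converting an unstandardized regression coefficient b to a correlation-scale effect requires both standard deviations, which the paper does not report. A minimal sketch with entirely hypothetical numbers:

```python
def standardize_coefficient(b, sd_x, sd_y):
    """beta = b * SD(x) / SD(y); in a simple one-predictor regression this equals r."""
    return b * sd_x / sd_y

# Hypothetical illustration: b = 120 (income units per IQ point),
# SD(IQ) = 15, SD(income) = 18,000 -> standardized effect of 0.10.
print(standardize_coefficient(120, 15, 18000))  # 0.1
```

Without SD(x) and SD(y), the reported betas cannot be placed on the r or R^2 scale used elsewhere in this post.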

[5] One important unanswered question is what “job performance” actually is (Murphy 1989) and how much of an organizationally and socially relevant “trait” it actually is (Richardson & Norgate 2015). Most metrics used in predictive validity studies decay in reliability relatively rapidly over time (Dahlke et. al 2018; Kolz et. al 1999; Sturman et. al 2005).

[6] See also Markell & Cortina (2017); Samuelson et. al (2017); Tett et. al (2017).

[7] See Ioannidis (2016); Møller et. al (2018); Murphy (2017).

[8] See Adler et. al (2016); Cleveland et. al (2017); Murphy (2019).

[9] See also Sternberg & Wagner (1993).

[10] One might also note that personnel/IO psychology is fraught with publication bias (Morgensen et. al 2007).

[11] See footnote [1].

[12] The commonly cited correction for interrater reliability is based upon dubious assumptions (LeBreton et. al 2014; Murphy & DeShon 2000a; Murphy & DeShon 2000b; Stemler 2004) [despite claims that rater effects are irrelevant (Javidmehr & Ebrahimpour 2015), they are not (Hoffman et. al 2010; see also Gentry et. al 2012; Putka et. al 2014; Viswesvaran et. al 2005)], such as assumptions about the magnitude of the correlation (Dierdorff & Wilson 2003; Putka & Hoffman 2014; though see Salgado et. al 2016), so much so that alternative statistics have been developed (Brown & Hauenstein 2005; Putka et. al 2008).

[13] Range restriction can actually affect reliability coefficients such that validity is overestimated. Although there exist corrections (Sackett et. al 2006), these are rarely employed.

[14] It is also important to note that the predictive validity also tends to decrease over time (Keil & Cortina 2001).

[15] Traditional IO Psychology explanations for this include measurement error, but recent research has demonstrated that this is an unsupported claim (Murphy 2008).

[16] For instance, Barros et. al (2014) reports that GMA only correlates with their metric of job performance at r=0.15, with β=0.13 in the regression model, indicating an incremental R^2=0.02. Moreover, a close look at Table 3 shows a negative correlation between GMA and objectively measured sales volume, which did not reach significance in either the correlation matrix or the regression model. Table 5 also indicates that the predictive validity decreases over time (see [5]).

[17] See other confounds like job attitudes (Riketta 2008), age (Ng & Feldman 2008), coaching (Liu & Batt 2010), education (Ng & Feldman 2009), personality (Rothmann & Coetzer 2003; though see Murphy et. al 2013; Murphy 2019), and others (Shaw & Gupta 2015).
