Introduction
THE RESEARCH ON INTELLIGENCE TESTING has been unprecedented in its magnitude and profoundly negative for minorities, and it has, for the most part, ignored women altogether. Intelligence, whatever it is, has begun to get in the way of thinking about thinking. The problem began when researchers started acting upon what they thought they knew: that IQ tests measured intelligence and that research concerning it could be used to direct social policy. It can be argued that IQ tests are the simple by-product of an exclusionary theoretical perspective of intelligence that is sustained through the use of statistics. The purposes of this paper are to examine the notion of intelligence, to advance ideas about why traditional conceptions of it have such an adverse impact upon minorities, to explore the reasons why these conceptions exclude women, and to suggest that the most common conception, IQ, not only misrepresents but also limits understanding of intelligence.
Originally, IQ tests were atheoretical and designed to identify low ability students in need of special schools. Alfred Binet, who created the first IQ test, must have understood the ethical issues involved in the identification and categorization of people without the subsequent provision of educational services. He prescribed methods to assist the students he was commissioned to identify and taught his university students to do the same (Allen, 1995). Today one is hard-pressed to find research designed to help individuals cope with the problems that intelligence tests identify. This suggests a fundamental question: "If IQ testing and its concomitant research do not contribute to learning or teaching or schooling, what is the point?"
There is one, of course. It is discrimination. Some researchers defend it as discrimination only in the statistical and empirical sense, but it is discrimination nonetheless. This permits researchers both to use IQ to explain why social programs do not work and to defend the notion that some cultures and races are genetically inferior, as evidenced by the fact that they, as a group, score lower on standardized IQ tests. After all, according to the researchers, these conclusions are derived from simple statistical facts. Eysenck did this in 1973, and Herrnstein and Murray (1994) do so now in The Bell Curve. Eysenck (1973) stated:
Psychologists have created a paradigm, or model, which embraces many divergent facts; this paradigm is quantitative in nature, and permits of deduction and testing. The essential features of this paradigm are that intelligence can be conceived as "innate, general, cognitive ability"; these three adjectives have been criticized and subjected to many empirical tests, which on the whole, and with certain essential qualifications, have shown them to give a good account of the facts. (p. 486)
What are those facts? Herrnstein and Murray state, as fact, that intelligence and social problems are related and that IQ is the best explanation of that relationship. Kamin (1995), a critic of the late Richard Herrnstein and Charles Murray, points out that "The Bell Curve's basic thesis is that intelligence and its correlates -- maturity, farsightedness, and personal competence -- are important in keeping a person employed and in the labor force" (p. 94). Differences in employment between groups can be explained by differences in IQ and its correlates. The fact that blacks differ from whites in the degree to which social problems affect them can be explained by the fact that they, as a group, have lower IQ scores. In the "Afterword" to The Bell Curve, Murray writes:
The relationship between IQ and social behaviors that we present in this book are so powerful that they will revolutionize sociology. They are not only "significant" in the standard statistical sense of that phrase but are powerful in a substantive sense -- often much more powerful than the relationships linking social behaviors with the usual suspects (education, social status, affluence, ethnicity). (p. 569)
So powerful are the relationships that the authors feel compelled to make policy recommendations such as eliminating social programs like affirmative action. The reason is that, given the effects of IQ, affirmative action programs do no good. Indeed, they are harmful. Social programs promote a dilution of the intellectual pool available for certain jobs by allowing a disproportionate number of intellectually modest individuals to enter those positions. Murray and Herrnstein point out that more blacks than would be expected, given the IQ range, are working in professional and technical jobs, where employees are assumed to be drawn from people with IQs of 98 or higher, and in clerical jobs, where they are assumed to be drawn from those in the range of 86 to 123 (p. 489).
However, there is another fact that emerges from all of the research on intelligence. There are no statistically significant differences in intelligence between men and women. That is, women, as a group, do not differ significantly from men in IQ scores or in their correlates -- maturity, farsightedness, and personal competence. Given Murray and Herrnstein's confidence in the power of the relationship between IQ and social problems, one expects -- indeed would predict -- that there are no differences between men and women in the social problems that so clearly differentiate blacks from whites. The absence of differences between men and women in poverty, schooling, unemployment, idleness, family matters, welfare dependency, parenting, and crime should bear witness to the strength of their relationship to IQ.
But there are differences, in fact, between men and women in these areas! Why were they ignored by the authors of The Bell Curve? If gender is a variable of no significant import, why is race so important? More importantly, if the effect of IQ is so powerful, and there are no differences between males and females, why do the recommendations disproportionately and negatively affect females? Herrnstein and Murray (1994) say that the only authentic policy implication from their research is to "return as quickly as possible to the cornerstone of the American ideal -- that people are to be treated as individuals not as members of groups" (p. 562). This is an interesting and informative statement. The authors' observations, conclusions, and subsequent recommendations were based on the statistical treatment of people as members of groups, not as individuals. This is clearly antithetical to the American ideal they espouse. An explanation might be that the researchers do not perceive the incongruence between their American ideals and their actions.
The rest of this paper will explore the logic that supports the use of IQ testing to discriminate and to exclude. It will discuss the norm, those who constitute it, and how it is perpetuated by a circular search for sameness. It will be further argued that default assumptions allow researchers to remain unaware of the subtle, often unrecognized, adverse impact of their research. The paper will conclude with recommendations that encourage a broader understanding of intelligence.
Normalcy and a Circular Search for Sameness
In order to understand normalcy, it is helpful to examine society's overall perspective toward individuals or groups who deviate from the norm. Quite simply, the perspective toward those who are different is negative, rooted in misunderstanding and exhibited in fear or prejudice. The "cognitive elite" described by Herrnstein and Murray (1994, p. 25), which includes most members of the research community, is not immune to this negative perspective since it contributes to its development. The cognitive elite, after all, has the privilege of formulating the definitions of normalcy and deviance. It is this overall orientation to the norm, characterized by the definitions of deviation and normalcy, that explains the discrimination against blacks and the exclusion of females in the theoretical development of the concept of intelligence. The simple fact is that they were not, and still are not, part of the research establishment that formulated the original definitions.
Deviation from the norm is not new. Deviation theories simply help provide new perspectives on old conclusions. Take for example one researcher's comment on female intellectual talent:
Although much attention has been given to the problems of gifted girls that arise from sex differences and the limits placed on their career choices, little has been done to trace the source of their problems to the developmental stages that are directly related to the normal growing up process. In the main, these problems are tied directly to underachievement. As discussed, the roots of underachievement are bound to genetics and inherited pre-dispositions, the environment and its cultural correlates, and interactions of these dimensions. Consequently, problems affecting gifted girls relate to the main and interaction effects of these variables. (Khatena, 1992, p. 255)
The perception that half of any population has problems or is abnormal should call the definition of normal into question, but it has not. The deviation theories formulated to explain abnormalcy are still generally unappreciated for their contribution to understanding how normalcy is established. Figueira-McDonough and Sarri (1987) make the point:
Attribution, conflict, and control theories of deviance are particularly useful for an interpretation of women's "inferior" status in society. Collectively these theories construe deviance as a social definition which results in a negative personal and social status. Thus they provide the rationale for an analysis of social control as a strategy to bar powerless groups (in this case women) from access to a variety of resources.... The basic postulate they share is that deviance is not a property inherent in certain forms of behavior but a property conferred upon persons by a social audience (Becker 1963; Erickson, 1962). If definitions of deviance are socially created, the relevant questions become: by whom, for what, and how? (p. 12)
Those who assume that they are normal, or at least not deviant, create the definitions, and they do so by default. In the case of IQ, it is the predominantly white, male researcher perspective, and the concomitant exclusion of females and minorities by default, that needs to be examined for its misrepresentation of intelligence writ large. It is this collective misrepresentation that promotes a retreat to the security of normalcy, rationalizes deviance, and sustains resentment toward those who suggest that the definition of "normal" might be abnormal. Minorities and females are no strangers to the resentment that attaches to their difference from some ill-defined norm. Clearly, society in general and the research community specifically have difficulty dealing with groups who differ from the norm.
The underlying thread that holds our collective fixation on normalcy together can be found in Hofstadter's (1979) book Gödel, Escher, Bach: An Eternal Golden Braid. He states:
It would be nice if we could define intelligence in some other way than "that which gets the same meaning out of a sequence of symbols as we do". For if we can only define it this one way, then our argument that meaning is an intrinsic property is circular, hence content-free. (p. 172)
Theoretical development of and research on intelligence are held together by this circularity. The researchers who defined intelligence as IQ cannot measure what they cannot comprehend. The definition, then, is necessarily limited by their own understanding of it. That is, unless someone gets the same meaning out of a sequence of symbols as they do, he or she is, by the researchers' definition, not intelligent. This negates the notion of innate intelligence as hereditarians describe it precisely because it is contingent upon an external source, namely the researcher, for its definition. The important point here is that if intelligence is not innate, the power of the hereditarians' argument is diminished considerably. The most that they can assert is that they have identified a few mental processes which they convert into an intelligence quotient that varies among groups of people. It then becomes obvious that the variation, and its potential to discriminate between groups, is more important than what varies -- the mental processes they call IQ.
Understanding the perspectives of the researchers is important, then, because it reveals the limitations of their definitions of intelligence and how statistics can mask them. Rather than using the normal distribution to examine real characteristics (like height and weight) that fall along the normal curve, researchers create the conditions so that the characteristics they choose to represent intelligence fall along the normal curve (Mensch & Mensch, 1991). For example, some popular characteristics of intelligence include memory, vocabulary, and appreciation of analogies. These characteristics are grouped, measured, and examined for the degree to which they are distributed along the normal curve among samples of a population. If the measure of one characteristic does not quite fit, it can be eliminated. Parsimony or elegant fit is the criterion for establishing the validity of the construct (in this case IQ), not the multiple factors that actually constitute it.
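A minimal sketch of one way this can happen in practice is the normalization step commonly used to build deviation-IQ scales: raw scores, whatever their actual shape, are converted to percentile ranks and mapped onto a curve with a mean of 100 and a standard deviation of 15. The data below are simulated purely for illustration; the skewed raw scores and all parameter choices are assumptions, not figures from any actual test.

import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)

# Simulated raw scores on a hypothetical test battery: deliberately
# skewed, so nothing about them is bell-shaped to begin with.
raw = rng.gamma(shape=2.0, scale=10.0, size=5000)

# Deviation-IQ style normalization: convert each raw score to a
# percentile rank, then map that rank onto a normal curve with
# mean 100 and standard deviation 15.
percentiles = (rankdata(raw) - 0.5) / raw.size
iq = 100 + 15 * norm.ppf(percentiles)

def skew(x):
    # Simple sample skewness: 0 indicates a symmetric distribution.
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print(f"raw-score skewness: {skew(raw):.2f}")   # clearly asymmetric
print(f"IQ-score skewness:  {skew(iq):.2f}")    # approximately 0
print(f"IQ mean / SD:       {iq.mean():.1f} / {iq.std():.1f}")

Whatever the raw scores look like, the transformed scores come out bell-shaped with the expected mean and spread, because the bell shape is imposed by the scoring procedure rather than discovered in the characteristics being measured.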
That is why attempts to refute statistics, while justified in some cases, miss the point. The point is not that the researchers cannot compute. The point is that what researchers do compute is their picture, not the whole picture. Walter Lippmann (1995) pointed out, and it is applicable here, "What their footrule does not measure soon ceases to exist for them" (p. 566). It might be the case that the statistics are correct as far as they go given their limited definition. More importantly, understanding this circularity might serve as a basis for a theory to explain the racial and cultural differences in IQ scores based on subtle default assumptions (Hofstadter, 1979) that mask discrimination and exclusion.
Default Assumptions
Hofstadter (1985) wrote that "a default assumption is what holds true in what you might say is the 'simplest' or 'most natural' or 'most likely' possible model of whatever situation is under discussion" (p. 137). He adds that what is crucial about default assumptions is that they are made "automatically, not as a result of consideration and elimination" (p. 137). He further explains that this efficient cognitive strategy enables people to cope in a complex world where "probably" functions as well as, if not better than, "definitely." Consider the difficulty of having to test the strength of the sidewalk to hold one's weight every time one decides to walk about, or having to make sure the grainy substance in the shaker is salt and not sugar. Conversely, consider the difficulty posed by never questioning the default assumption that males, specifically white males, are the standard bearers for normalcy.
Research on intelligence has made males the generic student. Society has also accepted males as the standard bearer of normalcy in other areas, such as psychology, education, and medicine (Broverman, Vogel, Broverman, Clarkson, & Rosenkrantz, 1972; Faludi, 1991; Sadker & Sadker, 1994). Much of what is known about intelligence is premised upon the assumption that observations of males and conclusions about them are generalizable to everyone, making gender a variable of no significant import. For example, in 1922 results from research on the IQs of 25 Italian children were confirmed by the "massive scholarly work of a student at Columbia, who examined 500 cases each of Jewish, American, and Italian boys and 225 negro boys" (Kamin, 1995, p. 502). The assumption was that "massive" data from 1,725 boys would confirm a small amount of data on 25 children.
The army "Alpha" and its supplementary test the "Beta" were administered to two million men in 1917. These data and a reanalysis in 1923 by the National Research Council under the direction of Carl Brigham (Kamin, 1995) were used to make policy recommendations to Congress on a number of social issues, including immigration. In other words, important social policy that affected both males and females was premised on the assumption that data from the intelligence tests of two million young army recruits were representative of society in general. All major works on genius or eminence have been concerned primarily with men but applied to both men and women (Cox, 1926; Gardner, 1993; Oden, 1968; Scheuneman, 1986; Shakeshaft & Hanson, 1986; Shakeshaft & Palmieri, 1978; Snyderman & Rothman; Stanley, 1988; Subotnik & Arnold, 1994; Terman, 1925; Terman, 1954; Terman & Oden, 1947; Terman & Oden, 1959; Watley, 1969).
The fact of the matter is that the researchers who conduct these studies do not treat gender as a variable worth considering. When they do, it is usually to support a particular perspective. Cox's (1926) estimation of IQ from encyclopedias, biographies, and letters is an example. She generalized her findings from male samples to all gifted children. Terman's (1925) work also reveals Hofstadter's default assumptions when he explains questionable sampling results: "In the selected groups (nominated by teachers) boys were more numerous than girls. This is not regarded as due to biased selection procedures, but either to variability or the differential death rate of embryos" (p. 76).
Questions about Terman's research samples arose only when females outperformed males, usually in writing samples. This brings out the essence of default assumptions. It is obvious that the questions were framed and conceived from the underlying assumption that females, when they proved to be the most talented, did not fit the norm for the distribution of talent along gender lines. However, the fact that an overwhelming preponderance of male talent seldom generated questions about female representation suggests that it was assumed to be the norm, or that males had inherited the "abstract power of the generic" genius (Hofstadter, 1979). Khatena (1992) writes about Terman's research:
Although Terman recognized the importance of nonintellectual factors, such as will and motivation to life success, in his A and C studies (Terman & Oden, 1947) he paid nearly no attention to socioeconomic variables -- an omission that can be traced back to sampling bias and the selection procedure used to determine the gifted group. In all fairness, the adverse effects of socioeconomic influences were not well documented at the time of his study.... Genetic Study of Genius is a work that is descriptive and factual rather than speculative and theoretical; there was no attempt to generate hypotheses about gifted people for investigation. (p. 36)
Significantly, the A and C studies, which examined the nonintellectual factors that influenced life success, were conducted on males only. In other words, it was assumed that the "most natural" or "most likely" persons to experience life success were males and, by default, not females. The power of this default assumption is illustrated by the fact that the variable of gender went unrecognized not only by Terman and Oden in 1947 but also by Khatena in 1992.
Default assumptions allow researchers, and many others, to ignore female income and employment as well as male poverty, parenting, and illegitimacy. This limited perspective is pervasive. Researchers need to remember that it is inappropriate to conduct studies on predominantly male or female populations and then to treat the results as generalizable to all segments of society. Black males alone do not constitute the Black race, just as White females alone do not constitute the White race. This conceptual leapfrogging is unfortunate and fundamentally exclusionary. This will become more apparent as researchers and theoreticians admit that the notion of race is confounded by socioeconomics, education, cultural norms, prejudice, religion, and skin color, among numerous other things. Researchers must give as much thought to the variables, and consequently the groups of individuals, they exclude, consciously or not, as to those they choose to study.
[....] Part 2 deleted.