Leda Cosmides and the Wason Selection Task

The Wason selection task was devised by Peter Wason in 1966. Karl Popper had postulated that science is based on hypothetico-deductive reasoning, in which the key step is the search for counter-examples, that is, for evidence contradicting a given hypothesis. Wason wanted to explore the possibility that learning in ordinary life is really science in embryo: the formation of hypotheses and the search for evidence to contradict them. The Wason selection test therefore evaluates subjects' ability to find facts that violate a hypothesis, specifically a conditional hypothesis of the form If P then Q.

In Wason's test, four "facts" are presented in the form of cards. Each card has one piece of information on one side, and another piece of information on the other side. The "conditional hypothesis" to be evaluated has to do with the relationship between the information on the two sides of the cards. The subject is shown four cards with one side up and the other side down; the task is to decide which cards should be turned over to evaluate the hypothesis.

For example: the hypothesis might be "Assume cards have a letter on one side and a number on the other. If a card has D on one side, then it must have 3 on the other side."

Two examples are given here and here.

The correct answers, and a sketch of the analysis, are presented here.
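As a minimal sketch of that analysis (a hypothetical illustration in Python; the faces "K" and "7" are just stand-ins for whatever the not-P and not-Q cards happen to show), one can ask of each visible face whether the hidden side could possibly reveal a violation of "if D then 3":

```python
# Which of the four visible cards could reveal a violation of
# "If a card has D on one side, then it has a 3 on the other"?
# A violation is a card with D on one side and something other than 3 on the other.

def could_falsify(visible_face: str) -> bool:
    """Return True if turning this card over could reveal a counter-example."""
    if visible_face == "D":          # P: the hidden number might not be 3
        return True
    if visible_face.isalpha():       # not-P (e.g. "K"): the rule says nothing about it
        return False
    if visible_face == "3":          # Q: a non-D letter on the back breaks no rule
        return False
    return True                      # not-Q (e.g. "7"): the hidden letter might be D

for face in ["D", "K", "3", "7"]:
    print(face, "-> turn over" if could_falsify(face) else "-> leave alone")
# Only "D" (the P card) and "7" (the not-Q card) need to be turned over.
```

The point of the sketch is simply that the Q card is logically irrelevant, while the not-Q card is essential -- exactly the combination most subjects get wrong.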

In class, about a third of those present got the first one right, while almost everyone got the right answer for the second example. The class may have been aided by reading about the phenomenon in the Gazzaniga chapter in the course pack -- without that preparation we would have expected about 25% to get the first one right, and 75% to get the right answer in the second case. In any event, the difference between the two types of examples is typical.

In thousands of replications over 25 years, it's been shown that most people are basically quite bad at this task. For unfamiliar relations -- like the first case -- less than a quarter of the subjects consistently give the correct answer, even if the subjects are Ivy League undergraduates. The commonest responses are just the P card, or the P card combined with the Q card (and these were also the commonest mistakes in class). Few people see the relevance of the not Q card. By the way, this suggests that scientific reasoning is not much like reasoning in everyday life: the basic mode of scientific reasoning is completely alien to most people.

However, for some versions of Wason's task -- like the second example -- people are a lot better: up to 75% correct.

What's the difference between the first case, where people are really bad at the task, and the second case, where people are pretty good at the task?

Several different sorts of answers come to mind, and most of them were explored by psychologists in the couple of decades after Wason first published. The first problem is abstract, while the second one is concrete. The first problem is unfamiliar, while the second is familiar. In her 1985 PhD dissertation, the psychologist Leda Cosmides suggested that neither of these is the crucial difference; rather, she argued, the second case involves the detection of cheating with respect to a social contract.

Cosmides did a clever series of experiments to test these hypotheses (and several other hypotheses that we don't have time to discuss here).

In one set of experiments, for instance, she compared four conditions: unfamiliar social-contract rules, unfamiliar descriptive rules, abstract rules, and familiar descriptive rules.

For instance, a hypothesis about an unfamiliar situation might be "if a man eats cassava root, he has a tattoo on his face."

The social contract narrative about this situation is then something like "Cassava root is a prized aphrodisiac. Having a facial tattoo means one is married. Unmarried men are not permitted to eat cassava root because it might lead to licentious behavior."

An abstract hypothesis is one like the case given earlier, involving letters and numbers.

A familiar hypothesis is one like "If one goes to Boston, one takes the subway."

Orders of presentation and so on were counterbalanced across subjects in the usual way. Cosmides' results for this experiment were:

Hypothesis type                Percent P & not-Q
Unfamiliar social contract     75%
Unfamiliar descriptive         21%
Abstract                       25%
Familiar descriptive           46%

Such results suggest that concreteness in itself is no help; familiarity is somewhat helpful; but social contract narratives, even when their content is unfamiliar and even bizarre, are a big help.

In fact, Cosmides argued, the fact that social-contract reasoning helps people to get the logically correct answer in this case is completely accidental. People are not reasoning logically at all -- rather they are looking for a balance between costs and benefits in social exchange ("you give me X, I give you Y"), or in the calculus of social status ("you're in social category X, so you're entitled to benefit Y"). They're especially sensitive to cheaters and poseurs -- those who take a benefit without paying the appropriate cost, or having the appropriate status. Sometimes this sensitivity to social cheating happens to correspond to logical inference, but often it doesn't.

Several Wason selection experiments support this interpretation. One was done by Gigerenzer and Hug, and depends on a shift in perspective. Subjects were given social-contract rules such as "If an employee gets a pension, then that employee must have worked for the firm for at least 10 years." However, some subjects were told a story in which they are the employer, while others were told a story in which they are the employee.

In this case, what counts as cheating depends on one's perspective. From the point of view of the employer, a pension is a cost, while a decade or more of work is a benefit; from the point of view of the employee, a pension is a benefit, while a decade of work is a cost. Thus the same event "the employee gets a pension" can be viewed as a cost or a benefit, depending on the perspective taken. The definition of cheating is "taking a benefit without paying the cost," from both perspectives, but applying this definition depends on what is a cost and what is a benefit. For an employer, cheating is when an employee gets a pension but has not worked for at least a decade. For an employee, cheating is when an employee has worked for a decade but does not get a pension.

The schema for the experiment was as follows:

Example of a rule: if an employee gets a pension (P), that employee must have worked at least ten years (Q).


Example of a card layout:

pension (P)    no pension (not P)    worked 12 years (Q)    worked 8 years (not Q)
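Before turning to the results, here is a minimal sketch (in Python; the function and card names are illustrative, not part of Gigerenzer and Hug's materials) of which cards a perspective-bound cheater-detector would want to turn over:

```python
# Perspective-dependent cheater detection for the pension rule.
# "Cheating" = taking the benefit without paying the cost; which side of the
# exchange counts as the benefit depends on whose perspective you take.

CARDS = ["pension", "no pension", "worked 12 years", "worked 8 years"]

def looks_suspicious(card: str, perspective: str) -> bool:
    """Would a cheater-detector from this perspective want to turn this card over?"""
    if perspective == "employer":
        # The benefit to guard is the pension; the cost owed is ten years of work.
        # Suspicious: someone with a pension (did they work long enough?) and
        # someone with short service (did they take a pension anyway?).
        return card in ("pension", "worked 8 years")          # P and not-Q
    if perspective == "employee":
        # The benefit to guard is the decade of work given to the firm.
        # Suspicious: someone without a pension (did they work a decade for nothing?)
        # and someone with long service (did they get the pension they are owed?).
        return card in ("no pension", "worked 12 years")      # not-P and Q
    raise ValueError(perspective)

for who in ("employer", "employee"):
    flips = [c for c in CARDS if looks_suspicious(c, who)]
    print(who, "turns over:", flips)
# employer turns over: ['pension', 'worked 8 years']      -> P & not-Q
# employee turns over: ['no pension', 'worked 12 years']  -> not-P & Q
```

Only the employer's pattern happens to coincide with the logically correct answer; the employee's pattern is logically irrelevant to the stated conditional, though it is a perfectly sensible way to look for cheating.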

Results:

Perspective    Percent P & not-Q    Percent not-P & Q
Employer       75%                  0%
Employee       15%                  65%

In other words, most subjects are hypothesizing a sensible social contract that is not at all the same as what is actually stated -- the proposed rule in the cited case does not promise a pension to anyone -- and working to detect "cheating" based on the definition of costs and benefits from the perspective they have been asked to take.

There is some operation of pure logic as well, but its effect is small -- on the order of 10-15%.

Cosmides argues that this kind of cheater-detection is something that people -- like other primates -- are very good at, and that we are good at it because it is important to us, not only individually but also collectively and historically. It's important because the evolution of a stable propensity for altruism requires high-accuracy detection and punishment of cheaters. In a society in which individuals are free to choose different strategies about cooperation based on past experience, individuals who always cooperate will tend to be mercilessly fleeced, while individuals who never cooperate will tend to be shunned. Those who pursue a "tit for tat" strategy will do better than either -- however, this requires telling tits from tats.
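Here is a toy illustration of that argument (a hedged sketch only: the payoff numbers and the replicator-style update are standard textbook choices, not anything from Cosmides' work). Unconditional cooperators are fleeced by defectors, defectors prosper only while there are cooperators left to exploit, and tit for tat -- which must correctly recognize defection in order to retaliate -- ends up as the dominant strategy:

```python
# Toy evolutionary simulation of an iterated prisoner's dilemma population
# containing unconditional cooperators, unconditional defectors, and
# tit-for-tat players.  All numbers are illustrative.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the partner's previous move.
    return opponent_history[-1] if opponent_history else "C"

def game_payoff(strat_a, strat_b, rounds=100):
    """Total payoff to strat_a from an iterated game against strat_b."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        total += PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
    return total

strategies = {"cooperate": always_cooperate,
              "defect": always_defect,
              "tit-for-tat": tit_for_tat}

# Payoff each strategy earns in a full iterated game against each other strategy.
table = {(a, b): game_payoff(fa, fb)
         for a, fa in strategies.items() for b, fb in strategies.items()}

# Replicator dynamics: each strategy's population share grows in proportion
# to the average payoff it earns against the current population mix.
shares = {name: 1 / 3 for name in strategies}
for generation in range(60):
    fitness = {a: sum(table[(a, b)] * shares[b] for b in strategies)
               for a in strategies}
    mean = sum(shares[a] * fitness[a] for a in strategies)
    shares = {a: shares[a] * fitness[a] / mean for a in strategies}

print({name: round(share, 3) for name, share in shares.items()})
```

In a run like this the defectors are driven nearly to extinction and tit for tat ends up as much the most common strategy; the unconditional cooperators that survive do so only because the tit-for-tat majority never exploits them. None of this works, of course, unless the tit-for-tat players can reliably tell cooperation from cheating.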

Cosmides offers some arguments that the "learning" involved here has occurred on an evolutionary time scale, rather than (or at least in addition to) on the scale of each individual's life.

What has evolved here, if she is right, is not a hoof or a horn or an eyeball, but a complex and abstract behavioral propensity. Nevertheless, it has arguably been shaped by selective forces as precisely as physical characteristics of the phenotype have -- it is just harder to characterize, because we can only discover its properties by doing experiments, rather than by simple dissection of physical objects.

Evolutionary Psychology

The term "evolutionary psychology" refers to the study of adaptions like "cheater detection": cognitive or behavioral propensities rather than anatomical or physiological ones. Of course all anatomical adaptations have cognitive and behavioral correlates, and vice versa, as Darwin knew very well. It is an odd and interesting fact, then, that the cognitive and behavioral side of evolution was increasingly neglected after about 1900, especially with respect to humans.

The term "evolutionary psychology" was coined by Cosmides and her collaborators during the late 1980's, and has come into common use in the past five years or so. During this same period, the outlook of some psychologists and neuroscientists has been changed, to take a more evolutionary perspective. The consequence is partly just to ask certain questions: "what species characteristics might lie behind the way humans think, feel and behave?" "what were/are the selective pressures, and what cognitive and behavioral structures did they operate on?"

The result of asking these questions may also be a different set of ideas about the phenomena to be explained -- about human nature itself. Wason's research led some psychologists to the conclusion that "People do not naturally think like scientists -- most people are really bad at simple logic. Perhaps this is because our minds work mainly by simple association of positive instances." The evolutionary-psychology reinterpretation is, "People are naturally good at detecting cheaters -- because this is an essential adaptation for the reciprocal altruism that is at the foundation of hominid social organization. People are not nearly as good at general hypothesis testing, because there has never been any similar selective urgency -- our ancestors did not compete for mates by solving physics problem sets -- but there may well be other adaptations for reasoning about other specific sorts of things."

Here is the abstract for a recent talk by Cosmides and her long-time collaborator, the anthropologist John Tooby, which presents the view of human nature that emerges from this perspective:

The study of the human mind has recently been moved into the natural sciences through biology, computer science, and allied disciplines, and the result has been the revelation of a wholly new and surprising picture of human nature. Instead of the human mind being a blank slate governed by a few general purpose principles of reasoning and learning, it is full of "reasoning instincts" and "innate knowledge" -- that is, it resembles a network of dedicated computers each specialized to solve a different type of problem, each running under its own richly coded, distinctly nonstandard logic. The programs that comprise the human mind (or brain) were selected for not because of their generality, but because of their specialized success in solving the actual array of problems that our ancestors faced during their evolution, such as navigating the social world, reasoning about macroscopic rigid objects as tools, "computing" or perceiving beauty, foraging, understanding the biological world, and so on.

This is a good example of what is sometimes called "megaphone science," that is, scientific popularization by the methods of politics. As usual with political sloganeering, there is some gross exaggeration.

For example, it is misleading to call this perspective "a wholly new and surprising picture of human nature". The computer metaphors are recent, because computers are recent, but such metaphors are not specific to this viewpoint. The idea that the mind has been pre-programmed by past lives goes back at least to Plato. The "blank slate" metaphor for human learning was introduced in the 17th century by John Locke, precisely because the contrary view was prevalent at the time. 19th-century "faculty psychology", including phrenology, proposed an explicit and detailed picture of a set of "reasoning instincts", along with information about the location of each in the brain. The idea of cognitive and behavioral adaptations, including for humans, is explicit in Darwin. More recent related ideas include (human) ethology; sociobiology; Fodor's "modularity of mind;" Dawkins's "extended phenotype." Thus arguments on this issue have gone back and forth in western thought for more than two millennia.

As usual with effective politics, there is also a truth behind the slogans. In this case, the truth is that social science (and to a large extent psychology, philosophy and the humanities) has been dominated during the 20th century by various more or less extreme forms of the "blank slate" view. Even in neuroscience and the more biological end of psychology, there has been relatively little emphasis on an evolutionary perspective. As Gazzaniga et al. write in the Cognitive Neuroscience chapter in the course pack:

There will come a time when the subject matter of this chapter will be presented in the first chapter of a text on cognitive neuroscience [as opposed to the next-to-last -- myl]. The reason for its current position . . . is that most practitioners . . . do not yet fully appreciate the insights offered by an evolutionary perspective. In part, this has to do with the very history of neuroscience and psychology. Both fields have been dominated by . . . a belief that associationism is how we learn and remember . . . and that most brains can learn anything. . . . The past 100 years of research do not support this view. To learn why, we must examine the current cognitive neuroscience enterprise from an evolutionary perspective. What are brains for, why were they built the way they are, and, in a mechanistic sense, how should we view the relation between neuroscientific data and behavior?

We have assigned this chapter because it presents the basic Darwinian, genetic and ethological foundations, and a careful survey of the range of human cognitive and behavioral characteristics for which an evolutionary analysis has been enlightening. As this chapter suggests, both the basic ideas and the specific applications remain controversial among scientists; but there is a growing consensus that the perspective is a valuable one and that its applications will prove to be valid and scientifically fruitful.

The standard social science model

As Cosmides and Tooby have pointed out in more measured and carefully reasoned work, their viewpoint is strongly at variance with the viewpoint of most respectable 20th-century social scientists, and also most contemporary humanists.

They quote Émile Durkheim, writing in 1895:

...one would be strangely mistaken about our thought if ... he drew the conclusion that sociology, according to us, must, or even can, make an abstraction of man and his faculties. It is clear. . . that the general characteristics of human nature participate in the work of elaboration from which social life results. But they are not the cause of it, nor do they give it its special form; they only make it possible. Collective representations, emotions, and tendencies are caused not by certain states of the consciousness of individuals but by the conditions in which the social group, in its totality, is placed. Such actions can, of course, materialize only if the individual natures are not resistant to them; but these individual natures are merely the indeterminate material that the social factor molds and transforms. Their contribution consists exclusively in very general attitudes, in vague and consequently plastic predispositions which, by themselves, if other agents did not intervene, could not take on the definite and complex forms which characterize social phenomena.

In a book chapter published in 1992, Tooby and Cosmides refer to this as the "standard social science model," and sketch its logic as follows (Tooby and Cosmides, "The Psychological Foundations of Culture," in Barkow, Cosmides, and Tooby (eds.), The Adapted Mind (1992)):

  1. Rapid historical change and spontaneous "cross-fostering experiments" dispose of the racist notion that intergroup behavioral differences are genetic. Infants everywhere have the same developmental potential.
  2. Although infants are everywhere the same, adults everywhere differ profoundly in their behavioral and mental organization. Therefore, "human nature" (the evolved structure of the human mind) cannot be the cause of the mental organization of adult humans, their social systems, their culture, etc.
  3. Complexly organized adult behaviors are absent from infants. Whatever "innate" equipment infants are born with must therefore be viewed as highly rudimentary -- an unorganized set of crude urges or drives, along with a general ability to learn. Infants must acquire adult mental organization from some external source in the course of development.
  4. The external source is obvious: this organization is manifestly present in the behavior and the public representations of other members of the local group. "Cultural phenomena are in no respect hereditary but are characteristically and without exception acquired." "Undirected by culture patterns -- organized systems of significant symbols -- man's behavior would be virtually ungovernable, a mere chaos of pointless acts and exploding emotions, his experience virtually shapeless" (Geertz 1973). This establishes that the social world is the cause of the mental organization of adults.
  5. The cultural and social elements that mold the individual precede the individual and are external to the individual. The mind did not create them; they created the mind. They are "given, and the individual finds them already current in the community when he is born." (Geertz 1973). The causal flow is overwhelmingly or entirely in one direction: the individual is the acted upon and the sociocultural world is the actor.
  6. Therefore, what complexly organizes and richly shapes the substance of human life -- what is interesting and distinctive and worthy of study -- is the variable pool of stuff that is referred to as "culture". But what creates culture?
  7. Culture is not created by the biological properties of individual humans -- human nature.
  8. Rather, culture is created by some set of emergent processes whose determinants are realized at the group level. The sociocultural level is a distinct, autonomous and self-caused realm. "Culture is a thing sui generis which can be explained only in terms of itself ... Omnis cultura ex cultura." (Lowie 1917). Alfred Kroeber: "The only antecedents of historical phenomena are historical phenomena." Émile Durkheim: "The determining cause of a social fact should be sought among the social facts preceding it and not among the states of individual consciousness." Geertz: "Our ideas, our values, our acts, even our emotions, are, like our nervous system itself, cultural products -- products manufactured, indeed, out of tendencies, capacities, and dispositions with which we were born, but manufactured nonetheless." (1973).
  9. Therefore, the SSSM denies that "human nature" -- the evolved architecture of the human mind -- can play any notable role as a generator of significant organization in human life... In so doing, it removes from the concept of human nature all substantive content, and relegates the architecture of the human mind to the narrowly delimited role of embodying "the capacity for culture."

As T&C points out, tThe first thing to say about the SSSM is that a lot of it is right, on anybody's account. For instance, its arguments against the racist anthropology of the late 19th century -- or of the Nazis in the 20th -- seem not only ethically but also scientifically correct.

During the 19th century, the nature and origins of the world's cultures were increasingly interesting to Europeans. Most investigations assumed a linear conception of historical progress, with human societies evolving from "savagery" through "barbarism" and finally to "civilization;" many further assumed that different cultural patterns at a given stage reflected racial as well as environmental and historical influences, and there was a great deal of interest in "national character". The Darwinian concept of evolution, as well as Darwin's simultaneously empirical and generalizing perspective, fit well with this endeavor, and became central to it as the century passed. It's easy to caricature its practitioners as imperialist boors, but in fact many of them were intelligent, observant and sensitive as well as adventurous, and they often identified more strongly with the far-away peoples they studied than with their own societies. Wilhelm von Humboldt and Sir Richard Burton are good examples. Nevertheless, some certainly were indeed imperialist boors, and there is a web of connections between this work and Nazi pseudoscientific racism.

Many aspects of the SSSM, especially in anthropology, developed around 1900 in explicit opposition to this tradition. Thus Cosmides & Co., in one sense, are not really suggesting a totally new perspective, but rather are turning the clock back to 1900 and taking a different path. In fact Tooby and Cosmides put it this way themselves:

After a century, it is time to reconsider this model in the light of the new knowledge and new understanding that has been achieved in evolutionary biology, development, and cognitive science since it was first formulated.

T&C identify three major defects of the SSSM:

  1. Naive and erroneous theories of development (for example, teeth and breasts are not present at birth, but they are not purely ex cultura either).
  2. Faulty analysis of nature-nurture issues: the phenotype cannot be partitioned into genetic and environmental traits; the fact of cultural variation is consistent with a genetic substrate.
  3. Wrong (and probably nonsensical, impossible) psychology: "a psychological architecture that consisted of nothing but equipotential, general-purpose, content-independent or content-free mechanisms could not successfully perform the tasks the human mind is known to perform or solve the adaptive problems humans evolved to solve." It cannot account for the behavior observed, and it is not a type of design that could have evolved.

They argue that characteristic practices of 20th-century social scientists are designed to reinforce the standard model by extra-scientific means, and that this warps the empirical work of social scientists and especially the modes of analysis based on it:

Whenever it is suggested that something is "innate" or "biological", the SSSM-oriented anthropologist or sociologist riffles through the ethnographic literature to find a report of a culture where the behavior (or whatever) varies.
. . .
Because of the moral appeal of antinativism, the process of discrediting claims about a universal human nature has been strongly motivated. Anthropologists, by each new claim of discovered variability, felt they were expanding the boundaries of their discipline (and, as they thought, of human possibility itself) and liberating the social sciences from biologically deterministic accounts of how we are inflexibly constrained to live as we do. This has elevated particularism and the celebration of variability to central values inside of anthropology, strongly asserted and fiercely defended.

The most scientifically damaging aspect of this dynamic has not been the consequent rhetorical emphasis most anthropologists have placed on the unusual . . . As Bloch says, "it is the professional malpractice of anthropologists to exaggerate the exotic character of other cultures." Nor is the most damaging aspect of this dynamic the professionally cultivated credulousness about claims of wonders in remote parts of the world, which has led anthropologists routinely to embrace, perpetuate, and defend not only gross errors . . . but also obvious hoaxes . . .

The most scientifically damaging aspect of this value system has been that it leads anthropologists to actively reject conceptual frameworks that identify meaningful dimensions of cross-cultural uniformity in favor of alternative vantage points from which cultures appear maximally differentiated.

In general, there is no question that the intellectual pendulum is swinging towards some version of the "evolutionary psychology" point of view, with large potential effects in the social sciences and the humanities.

One natural question to ask is what the political implications will be. This is not to say that all science is politics, just that broad questions about human nature and its relationship to culture are likely to have a political dimension. For example, the leaders in establishing the "standard social science model" in the early 20th century, such as Franz Boas, had a very clear idea that their scientific conclusions were connected to their (liberal, cultural-relativist, anti-racist) politics. Does a return to a more biological view of culture and cognition presage a return to "racist" science, or to scientific justifications for imperial hegemonies?

Not necessarily.

In his 1971 review of Skinner's Beyond Freedom and Dignity, Noam Chomsky presents an interesting argument that "tabula rasa" views of the human mind might be used as justification for totalitarian mind control, and suggests that scientific ideologies are sometimes a sort of Rorschach blot onto which a wide variety of political viewpoints and interests can be projected.

In any case, the movement in the direction of an evolutionary (and therefore biological) approach to human nature has so far not accumulated any particular political baggage -- unless we are still somehow too close to it to see what is happening.