Mice Managing Mistakes

Last week I attended a conference called “Lying: The Making of the World” at Arizona State University. Speakers were drawn from across both the sciences and humanities, from biology to literature, and included people likely to be familiar to readers of this blog, including Robert Trivers, Martie Haselton, Bill von Hippel, and, cough, me. The subjects of the presentations varied widely, from deceptive coloration in animals to a gripping account of the hoax in which an American man posed as a lesbian Syrian woman in his blog, including posting a report that the fictional woman had been kidnapped.

Leaf insect. Looking like a leaf.

I had a number of exchanges with people at the conference that left me with a feeling I have had before: I seem to disagree about some issues with people with whom I’m usually in reasonably close agreement, and, further, it is difficult to identify exactly where our views diverge. So, although these issues have been addressed in published work (see the references at the end), I thought I’d try again here, because I’m still not sure exactly where the disagreements lie.

The issue at stake is one surrounding decision making, and the problem of what many in the evolutionary community have come to call “error management.” The basic question concerns the nature of systems designed to make good (adaptive) decisions under uncertainty, given diverse cost/benefit profiles. The initial paper by Haselton and Buss stimulated a tremendous amount of work on this question, starting what has become a robust and productive research area.

Here I will argue that there are two distinct and distinguishable ways to solve the basic problem of managing errors. This is my main point: simply distinguishing these two methods. As an ancillary matter, I’ll suggest that one way of doing this has advantages and should, everything else equal (a potentially important caveat), be expected in actual evolved decision-making systems.

To get at these issues, I’ll use a simple example, a mouse seeing a piece of cheese, separated from her by a potentially dangerous trench. Should she risk the jump across the chasm to get the cheese or not? I frame the decision problem like any other: the potential benefit, B, is the value in the cheese, which depends on its size. The potential cost, C, is the damage from the fall if she doesn’t make it all the way across, which depends on, say, the depth of the trench. The probability of getting the cheese, p, depends on the size of the chasm, with the probability getting smaller as the trench gets wider.

The mouse – I’ll call her Minnie – only wants to try the jump when the expected value of jumping is positive. The focus on expected value quickly eliminates from consideration a decision rule in which Minnie jumps whenever the chance of making it across is better than .5. Minnie is emphatically not trying to maximize the difference between the number of times she makes it across (hits, one might say) and the number of times that she fails to do so (misses). I hope it is clear that any reasonable decision rule she uses must take into account the probability of success as well as the relevant costs and benefits.
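To make the contrast concrete, here is a minimal sketch in Python, with numbers invented purely for illustration, of how a rule that consults only p parts company with a rule that consults expected value:

```python
# Invented numbers: a wide trench (low chance of success) but a very
# large piece of cheese on the other side.
p = 0.3    # estimated probability of making it across
B = 100.0  # benefit of the cheese
C = 10.0   # cost of falling into the trench

expected_value = p * B - (1 - p) * C  # 0.3*100 - 0.7*10 = 23.0

# A rule that looks only at the probability of success refuses the jump...
print("'p > .5' rule says jump:", p > 0.5)                   # False
# ...even though jumping has positive expected value.
print("expected-value rule says jump:", expected_value > 0)  # True
```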

Yay, cheese!

I also wish to be clear that I am not claiming that Minnie can estimate these costs, benefits, and probabilities with certainty. Minnie can see the cheese, and estimate its size. She can see the trench, and estimate the width and depth of the pit. From these percepts she can estimate the magnitude of all the relevant parameters, though of course with error. Nonetheless, Minnie’s mind can compute, from the percept, her best estimate of the parameters. (I leave aside for this post whether Minnie can also compute the magnitude of the error in her estimates. I just note in passing that, in the limit, if we suppose that Minnie literally has no idea at all what the probability is that she will make it across the trench, she ought to assume that the probability is .5; if one has no idea how likely two mutually exclusive outcomes are, the assumption should be that they will occur with equal probability. In such a case, she should jump when B > C, since with p = .5 the expected value, .5 * B – .5 * C, is positive exactly when B exceeds C.) In any case, Minnie has some means to estimate, from what she sees or, perhaps, smells, the costs, benefits, and odds of success.

So, how should she make a decision on any given day that she encounters the cheese sitting across the trench? One possibility is as follows. She could first estimate the benefit of the cheese (from its size), the cost of the fall (from the depth), and the probability of success (from the width of the trench), giving her the best possible estimates she can make of B, C, and p. The quantity she wants to know is whether the expected benefit of trying to jump (p * B) is greater than the expected cost of trying to jump ((1-p) * C). Minnie’s mind could be designed to compute (p * B) – ((1-p) * C) and jump when this quantity (the expected value of jumping) is greater than zero. She jumps, then, based on this expected value computation. It should be clear that as B gets large (large cheese), Minnie will correctly choose to jump even when p is small – i.e., wide trenches – provided B is big enough relative to C. To connect to the theory at stake here, Minnie is managing her errors in such a case by avoiding potentially costly misses, choosing to jump when there is a very big piece of cheese across the way, even if the trench is wide. Holding aside any additional considerations, no decision rule can do better than this one, because all it does is estimate the expected value of jumping from the information available.
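Here is a minimal sketch of this first kind of decision rule; the function name and the particular numbers are mine, chosen for illustration, not anything from the published work:

```python
def should_jump(p: float, B: float, C: float) -> bool:
    """Method 1: decide directly on the expected value of jumping.

    p: estimated probability of making it across
    B: estimated benefit of the cheese
    C: estimated cost of falling into the trench
    """
    expected_value = p * B - (1 - p) * C
    return expected_value > 0

# A very big cheese across a wide trench: jump even though p is low.
print(should_jump(p=0.2, B=200.0, C=20.0))  # True: 0.2*200 - 0.8*20 = 24
# A small cheese across the same trench: stay put.
print(should_jump(p=0.2, B=10.0, C=20.0))   # False: 0.2*10 - 0.8*20 = -14
```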

Were Minnie to run through those computations, and then add a little bit to her perception of the chances of success when the cheese is especially big, she would be making a mistake. She has already accounted for the size of the cheese in her decision rule, jumping when the cheese is big even if the odds are low. Increasing her estimate of p – increasing her “confidence” or being “optimistic” – will cause her to make some negative expected value jumps. Over time, such designs will be punished by natural selection, and such Minnie Mice will lose, on average, to appropriately confident Minnies. I invite readers who disagree with any of the material to this point to comment.
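For readers who prefer to see this as a simulation, here is a toy script, with distributions over p, B, and C that I have simply made up, comparing the average payoff of a Minnie who uses her true estimate of p with one who inflates it by a fixed amount of “optimism” before deciding. Because the inflated estimate can only add negative expected value jumps (it never removes positive ones), the overconfident Minnie’s average take comes out lower:

```python
import random

random.seed(0)

def realized_payoff(jump: bool, p: float, B: float, C: float) -> float:
    """One encounter: if she jumps, she gets B with probability p and
    loses C otherwise; staying put earns nothing."""
    if not jump:
        return 0.0
    return B if random.random() < p else -C

def average_payoff(optimism: float, trials: int = 100_000) -> float:
    """Average payoff for a Minnie who adds `optimism` to her estimate
    of p before applying the expected-value rule."""
    total = 0.0
    for _ in range(trials):
        p = random.uniform(0.05, 0.95)   # true chance of success
        B = random.uniform(1.0, 50.0)    # size of the cheese
        C = random.uniform(1.0, 50.0)    # cost of the fall
        p_believed = min(1.0, p + optimism)
        jump = p_believed * B - (1 - p_believed) * C > 0
        total += realized_payoff(jump, p, B, C)
    return total / trials

print("appropriately confident Minnie:", round(average_payoff(0.0), 2))
print("overconfident Minnie:          ", round(average_payoff(0.2), 2))
```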

This relates to a second way she could make her choice. She could compute B, C, and p, as before. However, she then updates her confidence about making it across – her estimate of the chance of success – from p to p’, a number that exaggerates the chance of success when B is larger than C and underestimates it when C is larger than B, and she takes p’ to be the chance of success. She then bases her decision to jump on this updated value, using p’ in her decision rule. That is, she might, for instance, only jump when p’ is greater than .5. Note that she’s choosing to jump using the (updated) probability of success rather than, as in the prior case, the expected value of jumping. (A related method is to update p to p’, and again choose on the basis of the expected value. My analysis of this method would be the same as the one below.)

How does she do this? It should be clear that in this scenario Minnie wants to make exactly the same choices as the Minnie in the prior version. Those choices, as we’ve seen, maximize expected value. So, she has to increase p’ – her belief about the chance of success – in such a way that it is greater than .5 (or whatever threshold is preferred) whenever the expected value of jumping is positive. She needs a transformation rule that modifies her estimate of success upward as the cheese gets larger and downward as the trench gets deeper. There are many ways to do this. One way would be to follow the same procedure as above and, if the expected value is positive, update the estimate of success to .6 (or some other value greater than .5). (There are other ways to compute p’ from p, B, C, and a given threshold value. This exercise is left to the reader.)
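A sketch of this second kind of system, using the simple transformation just described; the values .6 and .4 are arbitrary placeholders on either side of the .5 threshold, not anything proposed in the literature:

```python
def updated_confidence(p: float, B: float, C: float) -> float:
    """Method 2: replace the true estimate p with a 'confidence' p'.

    Following the simple rule in the text: if the expected value of
    jumping is positive, report a confidence above the .5 threshold;
    otherwise report one below it.
    """
    expected_value = p * B - (1 - p) * C
    return 0.6 if expected_value > 0 else 0.4

def should_jump_v2(p: float, B: float, C: float) -> bool:
    """Decide on the (possibly false) confidence, not on expected value."""
    return updated_confidence(p, B, C) > 0.5

# Same choices as Method 1, but the represented probability is now false:
print(should_jump_v2(p=0.2, B=200.0, C=20.0))      # True
print(updated_confidence(p=0.2, B=200.0, C=20.0))  # 0.6, though the truth is 0.2
```

Both rules produce identical jump/stay choices in this setting; the difference lies only in what the mind represents along the way.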

In sum, the two methods of deciding whether or not to jump are, first, to compute the expected value and choose based on the result of that computation or, second, to replace one’s representation of the true probability of success with a false one and choose using a decision rule that consults this false probability estimate but still maximizes expected value.

Importantly, the value p’ is a false belief. It is an inaccurate representation of the probability of success. Minnie is wrong (but, to my way of thinking, in no interesting sense “self-deceived”) about how likely she is to succeed. Indeed, the second method above might strike some readers as perverse. The system has computed the expected value, and then thrown away a true estimate in favor of a false one. Disposing of the true belief in favor of the false one carries certain complications. For example, suppose a Minnie who uses this latter decision process is faced with a trench and sees a small piece of cheese on the other side. The benefit being small, she underestimates her chance of success, and correctly stays on her own side, (falsely) believing that she is unlikely to make it across. Now a cat comes along, and she must choose between crossing the trench and other escape options. She will underestimate how good an option jumping across is because of her false belief. Compare this to a Minnie who uses the first method. She has accurately estimated her chances, and will correctly choose the right escape route based on the correct chances of escape for each option.
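To make the complication concrete, here is a toy version of the cat scenario, with invented numbers; the point is only that a downstream decision that consults the stored probability inherits the error:

```python
# Earlier encounter: a small piece of cheese across a deep trench.
true_p_trench = 0.7   # she would very likely make it across
B_cheese, C_fall = 1.0, 20.0
ev_cheese = true_p_trench * B_cheese - (1 - true_p_trench) * C_fall
# ev_cheese = 0.7*1 - 0.3*20 = -5.3, so both methods correctly stay put.
# Method 2, however, stores a deflated "confidence" (the arbitrary .4
# placeholder from the sketch above) in place of the true 0.7.
stored_p_trench = 0.4

# Now a cat appears, and the options are escaping across the trench or
# down a nearby hole that offers, say, a 0.5 chance of escape.
p_hole = 0.5

# A Method 1 Minnie consults the true probability and crosses the trench.
print("accurate Minnie crosses the trench:", true_p_trench > p_hole)        # True
# A Method 2 Minnie consults her false estimate and picks the worse route.
print("false-belief Minnie crosses the trench:", stored_p_trench > p_hole)  # False
```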

To summarize, one way to manage errors is to compute expected value, maintaining the best possible estimates of what is true. A second way is to introduce false beliefs, and choose on the basis of one of these false computations.

I’m not saying that this second kind of system does not, or cannot, exist. There could be reasons that systems of the second type might have evolved. For example, suppose that Minnie often jumps in view of Mickey, Donald, and Goofy, and they value mice who are brave. If Minnie projects courage by making negative expected value jumps, and this translates into fitness advantages, then this benefit might offset the cost of the false belief. (Again note, however, that Minnie could simply add reputational advantages to the Benefit side of the computation, and use the first method.) I certainly take the point that there can be value in updating others’ beliefs, and that this could influence decision making.

But, everything else equal, the first system seems, to me, to be the one that we should expect to observe in organisms. False representations, such as p’, won’t be as useful in decision making as true representations, such as p. If multiple systems, for instance, consult this probability, then the error will introduce problems for all the systems that consume the false representation. More generally, false representations are less useful than true representations for decision-making purposes. Of course, false representations might be useful for other purposes, such as persuasion. I have tried to make my views on this matter as clear as I could, writing about this issue at some length in recent published work.

And, again, to reduce the chance of being misunderstood, I am not saying that all systems should function optimally or perfectly. I am saying that, ceteris paribus, the first of the two systems I discuss here should be expected.

It is for this reason that I am skeptical of arguments that suggest that people should be expected to be “overconfident” or overly “optimistic” in the service of solving the problem of managing errors. A better way to manage errors is to be correctly confident or appropriately optimistic, and choose in a way that reflects the expected value of the options available.

References

Haselton, M. G., & Buss, D. M. (2000). Error management theory: A new perspective on biases in cross-sex mind reading. Journal of Personality and Social Psychology, 78(1), 81.

Haselton, M. G., & Nettle, D. (2006). The paranoid optimist: An integrative evolutionary model of cognitive biases. Personality and Social Psychology Review, 10(1), 47-66.

Kurzban, R. (2011). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton University Press.

McKay, R. T., & Dennett, D. C. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32(6), 493. (Please see also the comments on the target article.)

McKay, R., & Efferson, C. (2010). The subtleties of error management. Evolution and Human Behavior, 31(5), 309-319.

Pinker, S. (2011). Representations and decision rules in the theory of self-deception. Behavioral and Brain Sciences, 34(1), 35-37.

Von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34(1), 1-56.

