Defining Morality and Altruism

Two noteworthy articles appeared recently in the New York Times, one by Frans de Waal on morality and one by Judith Lichtenberg on altruism, both with an evolutionary slant.

Frans de Waal, as he does elsewhere, essentially equates morality and altruism. He writes that “there has been a resurgence of the Darwinian view that morality grew out of the social instincts,” highlighting how “anthropologists have shown humanity to be far more cooperative, altruistic, and fair than predicted by self-interest models” and that “our close relatives will do each other favors even if there’s nothing in it for themselves.” That is, he’s saying that showing organisms are altruistic is the same as showing that they are moral. Some of us think this is a mistake – donating $1 million to terrorists would seem to count as “moral” under this definition – so this seems to be a problem. No?

But I think there’s an even bigger problem. Both authors worry about things such as “true moral tendencies” (de Waal) and “pure altruism” (Lichtenberg). That is, they are both worried about the complication of “motivation” in understanding morality, with de Waal saying: “Even though altruistic behavior evolved for the advantages it confers, this does not make it selfishly motivated” and Lichtenberg: “When we ask whether human beings are altruistic, we want to know about their motives or intentions.” In other words, when I help you, and feel good about it (which I would, of course), was I “really” altruistic? Hmmm….

Now, biologists (and economists) have largely finessed this issue, and it’s a good thing, since the language of intentions and motivations makes things difficult and messy. Biologists have historically gotten around this problem by defining altruism in terms of the fitness effects of the behavior, and this route is – more or less – the one that persists to the present day. A behavior is altruistic if it has the effect of raising the (lifetime) fitness of another organism (the recipient) and reducing the (lifetime) fitness of the actor.

This raises a problem. As Tooby and Cosmides pointed out, this definition classifies events such as an insect flying into a spider’s web as altruism. This illustrates that something is wrong with the definition but, oddly, the biological community doesn’t seem to care. Altruism is an absolutely central idea, and it seems to some of us that biologists really ought to worry if their definition incorrectly classifies behaviors.

Also frustrating is that the path out of this problem has already been pretty well thought out in another domain, communication. John Maynard Smith famously and usefully distinguished between signals – features selected by virtue of their information-transmission effects – and cues – features used as information but not selected to do so. Alarm calls are signals; they were selected to convey information. Cues – of which there are innumerable examples, such as a mosquito homing in on the carbon dioxide its host exhales – were not so selected.

This distinction finesses the issue of motivation and intention while at the same time avoiding the misclassifications of behavioral definitions. The trick, of course, is defining things in terms of their evolved function. This transforms the issue into a tractable empirical enterprise. Was a given trait or feature selected by virtue of the fact that it delivers benefits to another organism? Answering this question will rule out insects-flying-into-webs cases, but correctly capture kin-selected mechanisms. Unfortunate spider-food insects are not altruistic; momma regurgitating a worm into her chick’s mouth is.
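To make the contrast concrete, here is a toy sketch of the two definitions applied to the post’s examples. Everything here – the data structure, the fitness numbers, the field names – is invented for illustration, not drawn from the biological literature:

```python
from dataclasses import dataclass

@dataclass
class Behavior:
    actor_fitness_change: float      # lifetime fitness effect on the actor
    recipient_fitness_change: float  # lifetime fitness effect on the recipient
    selected_for_benefit: bool       # was the trait selected because it benefits another organism?

def altruistic_by_effect(b: Behavior) -> bool:
    # The historical, effect-based definition: classify by fitness consequences alone.
    return b.recipient_fitness_change > 0 and b.actor_fitness_change < 0

def altruistic_by_function(b: Behavior) -> bool:
    # The function-based definition: same fitness pattern, but only if the
    # trait was selected by virtue of delivering the benefit.
    return altruistic_by_effect(b) and b.selected_for_benefit

# An insect flying into a spider's web: fatal to the insect, a meal for the
# spider, but insect flight was obviously not selected to feed spiders.
insect_in_web = Behavior(-1.0, +1.0, selected_for_benefit=False)

# A mother bird regurgitating a worm for her chick: costly to her, beneficial
# to the chick, and selected for that effect (kin selection).
feeding_chick = Behavior(-0.1, +0.5, selected_for_benefit=True)

print(altruistic_by_effect(insect_in_web))    # True  – the misclassification
print(altruistic_by_function(insect_in_web))  # False – correctly excluded
print(altruistic_by_function(feeding_chick))  # True  – correctly included
```

The only change from one definition to the other is the extra conjunct about evolved function, which is exactly what turns the question into an empirical one about selection history rather than a question about motives.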

Which of course leads to the real question. Why do biologists endorse the utility of definitions in terms of evolved function in the domain of cues and signals, and then abandon it elsewhere, here in the realm of altruism? Definitions in terms of design unify our approaches to traits. Is trait X a case of Y? If X has traits that make it well designed for Ying, then, yes, it’s a Y.

Y is this so hard?

20. October 2010 by kurzbanepblog
Categories: Blog | 11 comments
