A new paper in Personality and Social Psychology Review is out, arguing that grief serves an adaptive function. Thus, the title. Today’s post discusses the argument in the paper.
(First, yes, I’ve been gone for a while. I was trapped near the inner circle of fault. But now I’m back.)
The paper, by Winegard et al., opens with the following vignette:
A bereaved wife every weekend walks one mile to place flowers on her deceased husband’s cemetery stone. Neither rain nor snow prevents her from making this trip, one she has been making for 2 years. However poignant the scene, and however high our temptation to exclude it from the cold logic of scientific scrutiny, it presents researchers with a perplexing puzzle that demands reflection. The deceased husband, despite all of his widow’s solicitude, cannot return to repay his wife’s devotion. Why waste time, energy, effort, resources—why, in other words, grieve for a social bond that can no longer compensate such dedication?
This does seem to be a good question. I mused about this a little in a discussion of love and broken hearts a year ago. The emotional pain, and everything that goes along with it, does seem puzzling. Why cry over spilt milk?
Winegard et al. locate the puzzle in costs. This is a key point given the argument that they want to make. As you can see from the opening passage, their idea is that the “time, energy, effort, resources” are being wasted on the dead, who, they correctly point out, are notorious for failing to reciprocate. (I except here of course the Dead Men of Dunharrow, who really came through in a pinch.) Time and resources are being wasted in the sense that they could be used more productively. Walking to the grave in this example, then, carries an opportunity cost, which is the next best thing that one could do with a given resource. (I’m gratified that these authors lean so heavily on the notion of opportunity costs, which I have recently written about in a quite different context.) It’s clear that the puzzle that Winegard et al. have in mind has to do with the very large opportunity costs being paid by those who grieve.
Their explanation is that bearing these costs acts as a signal. Drawing on Costly Signaling Theory (CST), they argue that paying these costs sends signals to other people regarding one’s value as a social partner. Recall from CST that for a signal to convey information honestly, the size of the cost must depend on the underlying quality being signaled, a property of the organism doing the signaling; in the usual example of a peacock’s tail, the marginal cost of a big tail is lower for higher-quality organisms. The authors write:
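To make the honesty condition concrete, here is a toy numeric sketch (my own illustration, not from the paper, and the numbers are arbitrary): the signal yields the same benefit from receivers regardless of who sends it, but producing it costs the low-quality type more, so only the high-quality type comes out ahead by signaling.

```python
# Toy illustration of the CST honesty condition (hypothetical numbers,
# not from Winegard et al.): a signal stays honest when faking it
# costs low-quality senders more than the benefit it brings.

def net_payoff(benefit, cost):
    """Payoff from sending the signal: benefit from receivers minus production cost."""
    return benefit - cost

# Receivers grant the same benefit to any signaler...
benefit = 10

# ...but the signal is cheaper for the high-quality type to produce
# (e.g., a big tail is marginally cheaper for a fit peacock).
cost_high_quality = 4
cost_low_quality = 14

print(net_payoff(benefit, cost_high_quality))  # 6: signaling pays for the high type
print(net_payoff(benefit, cost_low_quality))   # -4: faking does not pay for the low type
```

Because only one type profits from sending the signal, observing the signal is informative about the sender's type; that is the logic the grief argument needs.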
We suggest that grief functions like these (and other) hard-to-fake signals because it is costly and conveys information about the underlying traits of the griever. Humans’ prolonged grief response may act as an honest signal of prosocial proclivities, most importantly, of the proclivity to form strong, non-calculated bonds.
Their claim, then, turns on the idea that grief will be less costly for people with greater “prosocial proclivities.” (As a complete aside, this is more or less what the anonymous yet obviously insightful commenter Aliera suggested in my “broken heart” post, writing: “…perhaps extreme reactions to unrequited love or rejection (in the form of creative endeavors, passionate manifestos, devotion-displays) might serve as signals of one’s ability and willingness to commit to a romantic partner in general. These signals, then, are actually – and unknowingly – directed toward new potential mates who might now consider the individual attractive as a long-term mate based on the quality, costliness, and honesty of the display.”) In any case, returning to the paper, the argument rests on the idea that less prosocial people have better things to do with their time than more prosocial people (my underlining, their italics):
A relatively low commitment social strategy, one that consists of cheating and manipulating others, may constitute a viable social strategy… If so, intense grief would cost those who pursue such a strategy more relative to those who are inclined to form strong bonds because their time, energy, and resources would be better spent searching for and exploiting less costly opportunities.
I find myself puzzled when I take this claim in juxtaposition with the opening vignette. The story about the woman turns on the idea that she could be doing something else with her time and energy. That, indeed, is supposed to be the root of the puzzle: that she is paying substantial opportunity costs by visiting the grave. These costs are supposed to animate the issue in the first place: Why are people paying such huge costs, in the form of all the things that they’re not doing because they’re grieving? It seems clear that the woman in question has more than just the two options of grieving on the one hand or exploiting others on the other. People have many things they might be doing at any given moment besides those two activities. In short, it seems from the opening vignette that the authors not only concede but require that grieving carries very big opportunity costs, even if one is a prosocial sort of person. Yet their argument also requires that the opportunity costs of grieving people be small, at least relative to those of non-grieving people.
So, while the logic of the opportunity cost argument turns on the idea that non-grieving people face bigger opportunity costs than grieving people, I see no particular reason to believe that searching for and exploiting others – the thing that non-grievers are supposed to be up to – carries especially greater benefits than the other activities that either they or grievers might pursue.
Further, suppose that it were true that, generally, how much one grieves depends on what else one might be doing with one’s time, such that people who grieve have less they might be doing, and so are bearing lower opportunity costs by grieving. In that case, unless one grants that “searching for and exploiting others” is an especially valuable way to spend one’s time, then grieving will wind up, just like other costly signals, signaling the underlying quality that keeps the signal honest: that one doesn’t have big opportunity costs. This property – not having much else valuable that one might be doing – seems like a puzzling quality to signal, but I suppose it’s possible.
In short, if the asymmetry in opportunity costs for grievers versus non-grievers doesn’t hold, then the rest of the argument in the paper doesn’t hold. I hope I am not misunderstood. I think grieving is indeed mysterious, as my Love post implied. And of course I think the evolutionary point of view will help to clarify matters.
For my part, I’m skeptical in general of explanations that turn on the notion of Types, to use the language of game theory. It seems perfectly plausible to me that many people might form very deep attachments to particular friends, kin, and lovers, yet be very exploitative in other relationships. I have little doubt that people who viciously exploit strangers nonetheless grieve when their parents die, limiting the information that is conveyed by observations of grieving. The fact that people can vary their degree of exploitation versus prosociality over time makes me very skeptical that grief and related emotions have to do with signaling one’s Type. Nothing, as a logical matter, prevents someone who grieves at time one from being exploitative at time two.
Not that I have a much better idea. Tooby and Cosmides propose a simulation view and a recalibrational view, both of which seem plausible:
Paradoxically, grief provoked by death may be a byproduct of mechanisms designed to take imagined situations as input: it may be intense so that, if triggered by imagination in advance, it is properly deterrent. Alternatively-or additionally-grief may be intense in order to recalibrate weightings in the decision rules that governed choices prior to the death. If your child died because you made an incorrect choice (and given the absence of a controlled study with alternative realities, a bad outcome always raises the probability that you made an incorrect choice), then experiencing grief will recalibrate you for subsequent choices. Death may involve guilt, grief, and depression because of the problem of recalibration of weights on courses of action.
Winegard, B. M., Reynolds, T., Baumeister, R. F., Winegard, B., & Maner, J. K. (in press). Grief functions as an honest indicator of commitment. Personality and Social Psychology Review.
So, I got a unique invitation from Edge.org creator John Brockman back in September of last year. He was organizing an event which he dubbed HeadCon. He invited a group of scholars to give brief talks on the topic of our choice, to be recorded close up. The camera coming in tight on people’s heads was the origin of the “Head” part of HeadCon.
Our marching orders were to talk about what was new in our respective areas of social science and why whatever was new mattered, eschewing our “canned” presentation, keeping our remarks “conversational.” The talks, along with a pretty creepy video of a face, can be found on the HeadCon web page.
I used the opportunity to talk about a topic that I’ve been oddly obsessed with over the last few years, the idea that “willpower” is fueled by a mysterious resource, and the subsidiary idea that glucose is the mysterious resource in question. (I’ve discussed these ideas in prior blog posts. I and some of my colleagues at Penn have also recently provided an alternative account, for those interested. Another account just came out in Trends in Cognitive Sciences.)
To get at this (narrow) topic I framed my remarks in the (broad) context of the current discussion in psychology about replication. (Indeed, the title on the web site of my talk is P-hacking and the Replication Crisis.) More or less, the theme I tried to hit – and now you don’t have to sit through the video of my presentation – is that there’s just a ton of reasons to think that the “resource model” of self-control is wrong and yet the model… Just. Won’t. Die.
At HeadCon, I told the story of my experiences with the willpower-as-resource model, which I quote here a bit, despite how unseemly it is to quote oneself.
… I started just talking informally with colleagues about this. I would go to give talks in places and, lo and behold, it turns out there’s this kind of background radiation—there’s the dark matter of psychology – which is a few people who fail to replicate and don’t publish their work and also don’t talk about it … It’s sort of like sex, it’s the thing that we’re all doing, we’re all replicating, we just don’t want to talk about it too much, right?…Once I did that I started getting the sense that I was fishing into literature where there’s no there there. … more and more work is coming out that’s very difficult to interpret under the willpower model.
I’m pleased to report that after discussing my frustration with the fact that the resource model refused to die its appropriate death, the good people at Edge chose to run with my suggestion that this could be a good Edge Question. To wit, the 2014 question was: “What scientific idea is ready for retirement?” The Times just ran a little piece on it. (And, I admit, I basically just cribbed from my talk for my own answer. Sue me.)
In any case, it’s in this context that I thought I would discuss, briefly, a new paper out in Appetite. The (provocative) title is, Sweet Delusion: Glucose Drinks Fail to Counteract Ego Depletion, and the authors – Florian Lange and Frank Eggert – report two experiments (N = 70, N = 115) in which they tried to replicate the results of two studies (Gailliot et al., 2007; Hagger et al., 2010) purporting to show that drinking a sugary beverage improves subsequent performance on a “self-control” task. The idea is that glucose is the fuel for self-control, so drinking a sugary drink will improve performance on a task that requires self-control.
Lange and Eggert begin their paper by pointing out a number of reasons to doubt the glucose-as-resource model, including the recent paper by Schimmack, which discussed some statistical worries about one of the key papers in this literature, the 2007 paper by Gailliot et al. in the Journal of Personality and Social Psychology. Lange and Eggert add a number of other worries regarding these findings, including pointing to some errors in a recent meta-analysis by Hagger et al. (2010), writing:
In sum, the original (Gailliot et al., 2007) and meta-analytic (Hagger et al., 2010) evidence is less compelling than suggested, illustrating that the effect of sugar supplementation on ego depletion is far from being an established research phenomenon.
In the first study, subjects drank either a regular sugary beverage or a sugar-free version, and were subsequently given a task that measured discounting, willingness to forgo a smaller, sooner payoff in favor of a larger, later one, with choosing the latter sorts of payoffs considered to indicate a greater ability to exert “self-control.” They found no effect of condition, despite their calculation that “an effect of glucose consumption on ego depletion as large as reported by Hagger et al. (2010) or Wang and Dvorak (2010) could have been detected with a probability close to 1 (1-β > .99).” A second experiment, in which subjects merely rinsed with the glucose solution, instead of drinking it, yielded a similar null result. The authors conclude:
In line with an increasing body of evidence demonstrating that (a) exerting self-control is unlikely to reduce blood glucose levels (Kurzban, 2010; Molden et al., 2012), and (b) performance on a second self-control task does not vary as a function of participants’ blood glucose levels (Dvorak & Simons, 2009; Schimmack, 2012), the present results question the validity of the glucose model of self-control. Especially in view of the above-mentioned shortcomings of studies supporting the role of blood glucose in ego depletion, the model’s major strength appears to lie in its ostensible support for “the folk notion of willpower” (Gailliot & Baumeister, 2007, p. 304) and not in its empirical corroboration… These findings require models positing a major role for glucose in self-control to be fundamentally revised if not completely abandoned.
I was recently surprised to get a phone call from someone involved with organizing the 2014 American Psychological Society (APS) convention in San Francisco in May. The idea was to have a public debate between me and Roy Baumeister about the depletion model of self-control. I agreed to participate in the debate, but was recently informed that the other party declined the opportunity to participate. So, I’ll just present my own thoughts during a session at APS this year. Should be fun.
Dvorak, R. D., & Simons, J. S. (2009). Moderation of resource depletion in the self control strength model: Differing effects of two modes of self-control. Personality and Social Psychology Bulletin, 35(5), 572-583.
Gailliot, M. T., Baumeister, R. F., DeWall, C. N., Maner, J. K., Plant, E. A., Tice, D. M., Brewer, L. E., & Schmeichel, B. J. (2007). Self-control relies on glucose as a limited energy source: Willpower is more than a metaphor. Journal of Personality and Social Psychology, 92(2), 325.
Hagger, M. S., Wood, C., Stiff, C., & Chatzisarantis, N. L. D. (2010). Ego depletion and the strength model of self-control: A meta-analysis. Psychological Bulletin, 136(4), 495-525.
Kurzban, R., Duckworth, A., Kable, J. W., & Myers, J. (2013). An opportunity cost model of subjective effort and task performance. Behavioral and Brain Sciences, 36(06), 661-679.
Lange, F. & Eggert, F. (2014). Sweet delusion: Glucose drinks fail to counteract ego depletion. Appetite.
Molden, D. C., Hui, C. M., Noreen, E. E., Meier, B. P., Scholer, A. A., D’Agostino, P. R., & Martin, V. (2012). The motivational versus metabolic effects of carbohydrates on self-control. Psychological Science, 23, 1130–1137.
Wang, X. T., & Dvorak, R. D. (2010). Sweet future: Fluctuating blood glucose levels affect future discounting. Psychological Science, 21(2), 183-188.
Happy new year, all, and for those of you who have something to say about the Eagles’ loss to the Saints, feel free to put remarks in the comments section, and I (plus some guys I know from South Philly) will get to each of you. Please be patient.
Today’s post has nothing to do with football, but rather with something that deeply puzzles me about a new paper in press at Evolution and Human Behavior entitled, The Effect of Ecological Harshness on Perceptions of the Ideal Female Body Size: An Experimental Life History Approach by Sarah Hill and colleagues.
For those of you too busy at the gym keeping your new year’s resolution, what puzzles me is why priming works on preferences that are calibrated over the course of a lifetime. You may now get back to your CrossFit WOD or whatever.
Ok, for those of you too lazy to get into CrossFit and so still reading, here’s what Hill and colleagues did. In a series of three experiments, they primed subjects with “ecological harshness” using fake news items about the present economic downturn or increasing levels of violent crime. The central dependent measure was subjects’ reported preferences regarding the ideal body size – people see an array of bodies that differ in size, and pick the one they think is the ideal.
The idea is, roughly, this. According to life history theory (LHT), optimal body size depends on the ecological conditions. Putting it, again, very roughly, if you grow up in a world in which resources are scarce, then it might not be a bad idea to store energy in your body against the risk that you’re likely to need it. In contrast, if you grow up in a world of rich resources, then storing calories in your body might not be as high a priority. In this way, the features of the environment one grows up in calibrate preferences about how one would like one’s body to be.
The authors were interested to see if priming subjects with respect to the present ecological conditions – are things going well or poorly? – would move around some subjects’ beliefs about ideal body size. As the authors put it:
We predicted that women sensitized to a faster life history strategy would respond to ecological harshness cues by idealizing a heavier body size relative to controls. Because the relationship between energy status and fertility is specific to women, we predicted that these cues would not influence men‘s own body ideals.
Holding aside the details, in three studies, results conformed to this prediction.
Here is why I find this puzzling. One way to think about life history theory is that people are building up representations of what the world is like from the data they gather as they are developing. How much food am I getting? How much social support do I have? How long are the people around me living? Every day, people get a new observation, building up, presumably, an increasingly accurate estimate of whatever parameter is relevant for the life history variable in question.
Updating one’s model of the world should take place in something like a Bayesian fashion. If you’ve gone through ten years of near starvation and are then suddenly treated to a banquet, you wouldn’t want to update your beliefs about the world you live in based on that one big meal. In terms of the data, that’s a drop in the bucket, one observation set against thousands. After a long period of accumulating observations, updating should take place very gradually, and only, it seems to me, in the face of a substantial amount of new data.
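The banquet point can be made with a minimal Beta-Binomial sketch (my own numbers, purely illustrative): after thousands of lean-day observations, a single banquet barely budges the posterior estimate that tomorrow will be lean.

```python
# Minimal Beta-Binomial illustration (hypothetical numbers): one new
# observation against thousands barely moves the posterior estimate.

def posterior_mean(lean_days, feast_days, prior_a=1, prior_b=1):
    """Mean of a Beta(prior_a + lean_days, prior_b + feast_days) posterior
    on the probability that any given day is a lean day."""
    return (prior_a + lean_days) / (prior_a + lean_days + prior_b + feast_days)

# Ten years of near starvation: roughly 3,650 lean days, a handful of feasts.
lean, feast = 3650, 10
before = posterior_mean(lean, feast)

# One banquet: a single new observation set against thousands.
after = posterior_mean(lean, feast + 1)

print(round(before, 4))  # 0.997
print(round(after, 4))   # 0.9967: the update is a drop in the bucket
```

On this picture, a lone newspaper article should be like that one banquet, which is precisely why the priming results are puzzling.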
Consider, for instance, Deb Lieberman’s nice work on incest. She finds that aversion to incest depends on the amount of time that one has lived with an opposite sex sibling. It would be surprising if the system designed to avoid incest changed significantly the day after – or even the hour after – an opposite sex sibling happened not to be around. Wouldn’t it?
In these priming studies, the subject is getting a tiny amount of information about the state of the world, in this case in the form of a newspaper article. Presumably this information is set against all the prior information that the subject had that influenced their view of ideal body size. In this context, the new information seems tiny. Why should such manipulations be expected to work? Is the life history system designed for a world in which abrupt changes occur frequently? Is it a side-effect, having to do with the way that the information is delivered?
I want to be clear that my worry isn’t a more general worry about priming in psychological experiments, which, over the last year, has generated a certain amount of debate. I’m puzzled because my prediction would have been that to move around these sorts of beliefs, whether they are implicit or explicit, would have been pretty difficult, given that the beliefs are like priors with a ton of data behind them.
There’s probably a good theory paper on this to be written. Challenge offered…
The motto at my home institution, the University of Pennsylvania, is Leges Sine Moribus Vanae, which translates to “Legolas the elf does not have a Möbius strip weather vane.” No, wait, actually it means, “laws without morals are useless,” and in this post I’m going to offer some thoughts on why Penn should have gone with something like lux et veritas and also some more lux, one-upping you-know-who and you-know-who-else.
Recently, my collaborator and I presented some data that continue a line of argument linking religious moral practices to self-interested, life-history-strategic moves. And by that I just mean that if you’re a powerful monogamist in a religious organization, you can imagine that you might think a really Good Idea is to tell your flock that sex outside of monogamous marriage is so wrong that the penalty will be everlasting damnation in the pits of hell, thus disincentivizing your neighbors from acting on any coveting they might be doing of your wife.
If you think that minting moral rules about who can’t do what to whom is all about how to make one’s group all cooperative and stuff – and I’m not saying anyone really thinks that – but if you did think that people choose moral rules in order to make their group more harmonious and, overall, better off, then you might find it surprising that people try to change rules in such a way that it helps them but makes their fellow group members worse off.
I think this happens quite a lot. Somewhat extreme examples are cult leaders such as Jim Jones and David Koresh, the latter of whom seems to have propagated the idea among his followers that the only person who should be allowed to have sex with women was David Koresh.
All of which brings me to – what else? – fourteenth century Venice. In Why Nations Fail, Acemoglu and Robinson answer the question that lurks in their title with, more or less, the notion of institutions. “Institutions,” to a certain breed of economist, means, roughly, the rules of the game being played. What sorts of things can you do? Or, to put it differently, what are the things that, if you do them, the people with coercive power – often but not always a centralized government – will punish you for?
One such rule of the game around this time in Venice was the commenda, which was, roughly, a contract. Suppose you were a merchant of Venice with a bunch of money but a dearth of wanderlust. How could you parlay your material riches into more material riches, as people are wont to want to do? You could sail a ship across the wine dark sea, trade Italian goods for the stuff in, say, North Africa, come on back and sell the North African stuff, and, thusly, increase your wealth. The issue is that stubborn lack of wanderlust. If you’d rather sit at home and enjoy the fruits of the city – at that time, as metropolitan as London or Paris – then you don’t really want to be headed out to sea with all the risk of mishaps and general nuisances that make one late to dinner and all that. The commenda allowed such a person to stay at home but still put up the capital for a trading voyage. Then some strapping young (but capital-poor) nautical adventurer could partner with the merchant, using the money to buy trade goods and giving up a share of the profits from the voyage when it was completed.
In such ways new Venetian fortunes were made, elevating the cash-poor into the rich and, as it turns out, powerful. Here is how Acemoglu and Robinson put it:
Each new wave of enterprising young men who became rich via the commenda or other similar economic institutions tended to reduce the profits and economic success of established elites. And they did not just reduce their profits; they also challenged their political power. Thus there was always a temptation, if they could get away with it, for the existing elites sitting in the Great Council to close down the system to these new people. The Great Council then moved to adopt an economic Serrata. The switch toward extractive political institutions was now being followed by a move toward extractive economic institutions. Most important, they banned the use of commenda contracts, one of the great institutional innovations that had made Venice rich. This shouldn’t be a surprise: the commenda benefited new merchants, and now the established elite was trying to exclude them.
If ever there were a case of pulling the ladder up behind one, this is it. The effects, according to Acemoglu and Robinson, were as devastating as they were predictable. With this form of contract banned, along with other restrictions placed on trade, Venice deviated from its meteoric economic trajectory, shifting gears from full throttle into reverse. Acemoglu and Robinson are a little hard on the Venetians, writing of the modern city:
Instead of pioneering trade routes and economic institutions, Venetians make pizza and ice cream and blow colored glass for hordes of foreigners. The tourists come to see the pre-Serrata wonders of Venice, such as the Doge’s Palace and the lions of St. Mark’s Cathedral, which were looted from Byzantium when Venice ruled the Mediterranean. Venice went from economic powerhouse to museum.
The rules banning the commenda were, in their essence, born of selfishness and they went beyond useless to positively destructive. To be sure, Why Nations Fail provides any number of examples of laws that facilitated rather than undermined economic growth and prosperity. However, even in such cases, it’s not at all clear that “morality” was at the heart of them. Much of the groundwork for the success of the industrial revolution in England was set by rules that served the interests of some – namely the landed classes fighting against royal domination – and these rules also happened to have wealth-creating effects because they produced favorable economic incentives.
One way to read the history of institutions is that the people who have the power to build (and destroy) them have nearly always done so in a way that reflects their interests and those of their friends and allies. People shape the rules of the game to fix the game, or at least tilt the game, in their favor. The downstream effects of these institutions depend on any number of contextual factors and can be very bad, as in the case of Venice, or very good, as in the case of England.
The Venetian case is interesting for any number of reasons. I myself found it interesting because the banning of mutually beneficial contracts is a transparent case in which people are using rules that prevent others from being better off, to their own selfish ends. Modern readers likely scoff at the short-sightedness of those who would use the coercive power of a centralized government to ban contracts that make both parties better off.
Thankfully, we live in a more enlightened age. Right?
Acemoglu, D., & Robinson, J. (2012). Why nations fail: the origins of power, prosperity, and poverty. Random House Digital, Inc.
It is always sad, during this, the holiday season, which fills our lives with joy and love, to see people get as angry as some did because of one particular article in the October issue of Archives of Sexual Behavior.
Andrew Galperin and colleagues’ paper entitled, “Sexual Regret: Evidence for Evolved Sex Differences” drew on three samples, and investigated what people regret when it comes to prior sexual behavior. Putting it very roughly, more men regret that they didn’t and more women regret that they did. In the words of the authors:
… women reported more numerous and more intensely felt sexual action regrets than men did, particularly regrets involving ‘‘casual’’ sex … men reported more numerous and stronger sexual inaction regrets than women did, particularly regrets involving failure to engage in casual sex or the pursuit of a relationship that delayed sexual activity or precluded better sexual opportunities
Erin Gloria Ryan was, it seems, not amused. She wrote about the work in a piece entitled: “Women Are Hard Wired To Feel Bad About Being Sluts, Says Suspect Study.” In typical fashion from my experiences reading Jezebel, the piece opens with some false, hysterical claims, including that substitute for good writing, ALL CAPS to make her point emphatic. She writes: “A new study claims that women are HARD WIRED (sic) regret casual sex whereas men are HARD WIRED to think random sex is great.”
While Galperin et al. do motivate their work with an evolutionary approach, neither the word “hard” nor the word “wired” appears anywhere in the piece. Further, the authors explicitly acknowledge that there are “social factors that might moderate or exacerbate evolved dispositions in each sex to regret certain sexual experiences.” My sense is that this idea is the sort of thing that the author of the piece favors, given what I take to be her favored explanation, which is that “… civilizations place high value on controlling female sexuality and humans are social creatures with an aversion to ostracization.” I’m not quite sure how feeling regret saves someone from ostracism – or ostracization, as Ryan would have it – but in any case, the venom in Ryan’s piece seems to have invited similar tones from the people who commented on her brief remarks, which comments included the usual name-calling, epithets, and use of ALL CAPS for emphasis. One writer seems to have taken Ryan at her word that the authors of the study used the term “hard wired,” writing:
Besides the fact that this “study” is a bunch of misogynist evolutionary psychology bullshit, I also really hate the phrase “hard wired.”
Other comments strike similar tones, with some inexplicable animated gifs thrown in for good measure, including Belle from Beauty and the Beast, and I think Rita Hayworth.
What is clear from the Ryan piece is that she’s very upset about the work. She’s not the only one. Amanda Hess at The XX Factor at Slate also wrote a piece about the paper, saying that “the reasoning employed here is primitive, at best” and ended her article with these provocative remarks:
A study of the sex lives of 200 college students can’t actually tell us anything about how our early ancestors shacked up, and vice versa. It could, however, speak to the masturbatory tendencies of some scientists.
From the last sentence, alluding to the behavior of the scholars as opposed to the ideas, it seems that Hess is sufficiently angry not to worry about getting personal, never mind worrying about understating the full sample size by 24,625.
A third person irritated by the work is Jon Marks, who posted the following remark on a Facebook page called BioAnthropology News: “Another argument for barring psychologists from talking about human evolution.” Marks is so miffed he wants to gag the members of a whole field. When asked to explain this rather strong position for silencing his fellow members of the academy, he explained it this way:
Humans are the products of their evolutionary and cultural history. Taking a psychological snapshot of this population in the here and now affords no valid inferences about the origin of whatever results you find. Further, given the troubled history of the Universal Generalization in human evolutionary studies, serious students of the subject tend to be more circumspect. Hope that helps
I’m afraid that this explanation doesn’t help me much, but, passing on, Eric Michael Johnson commented on the post, remarking that it “sounds WEIRD,” and linked to his piece in Scientific American. In that piece, Johnson wrote:
The fact that empirical differences exist on identical psychological studies when replicated cross-culturally should make evolutionary researchers take caution (especially Evolutionary Psychologists who are most guilty of essentializing these studies)
To put this in context, I might note that there is a second article in the very same issue of Archives, by Rammsayer and Troche. This article analyzed the data from “156 male and 136 female undergraduate psychology students ranging in age from 19 to 30 years.” The dependent measures were a series of self-report scales, which asked subjects about both their attitudes and their own behaviors.
So, as you can tell, not only did the Rammsayer and Troche research similarly use self-report data – something that Ryan fumed about – but its sample was far narrower and far smaller than the Galperin et al. article’s sample.
The methodological criticisms that are invoked are really just smoke screens for the real reason that critics don’t like the papers. If their concerns were with the samples, then they would not be fretting so heavily over evolutionary psychology, which actually does better in drawing broader samples than the relevant comparison discipline.
Now, it’s true that UCLA issued a press release for the Galperin et al. study, while as far as I know there was no release for the Rammsayer and Troche piece. Perhaps Ryan, Hess, and Marks would froth as much over the latter as they did over the former.
For some reason, I doubt it. But I hope they don’t read it, lest another article reporting data about human sexual behavior make them even angrier. After all, ‘tis the season of peace and love.
Galperin, A., Haselton, M. G., Frederick, D. A., Poore, J., von Hippel, W., Buss, D. M., & Gonzaga, G. C. (2013). Sexual regret: Evidence for evolved sex differences. Archives of Sexual Behavior, 42(7), 1145–1161.
Rammsayer, T. H., & Troche, S. J. (2013). The relationship between sociosexuality and aspects of body image in men and women: A structural equation modeling approach. Archives of Sexual Behavior, 42(7), 1173–1179.
This announcement comes via Peter DeScioli:
The Department of Political Science at Stony Brook University seeks applicants for the PhD program with research interests at the intersection of evolutionary psychology and political science. The faculty include evolutionary researchers Andy Delton (andrewdelton.com), Peter DeScioli (pdescioli.com), and Oleg Smirnov (sites.google.com/site/olegsmirnov17/). Active research areas include cooperation, generosity, morality, alliances, property, risk preferences, agent-based modeling, and evolutionary game theory. The department is also strong in experimental economics and has recently initiated an interdisciplinary Center for Behavioral Political Economy. We seek to recruit exceptional students who will pioneer novel evolutionary approaches to understanding political behavior and institutions.
Please forward this notice to all interested students. The application deadline is January 15. For general questions about the PhD program, contact Jason Barabas, PhD Recruitment Director. For more information and application materials, visit: http://www.stonybrook.edu/commcms/polisci/phd_program.html
Continuing the discussion of citations, I thought I would report the results for the first paper checked for the accuracy of its citations. (I won’t, of course, identify the paper or its authors.) If you want to save your time for stuffing your turkey or whatever, here is the take-home message: the first paper checked was pretty darn accurate, with just a few questionable citations, but nothing really facepalmy or anything.
So. I assigned the work to a trusted research assistant (RA), whom I asked to ensure that each citation in the text properly cited the source. That is, I asked her to check whether the source cited actually said what the authors of the paper said that it said. She was able to check nearly all of them, with some exceptions. In some cases, papers cited in the text didn’t appear in the reference section, which in a few cases made it impossible to determine what paper was being cited. In other cases, papers cited were unavailable because they were unpublished or in journals to which the University of Pennsylvania does not subscribe.
She reported that it took her roughly 11 hours to complete the task.
I should note that Elsevier, as part of its copy-editing process, already checks to be sure every work in the references is cited in the text and every parenthetical cite has a corresponding entry in the reference section. So, in the future, I think I’ll instruct the person doing the work that she need not do this. (Having said that, as an author, I think that I nearly always have a research assistant do this check before submission. It’s possible that some citations are missed, but generally, it seems to me that authors really ought to be taking care of this. There are always last-minute modifications that lead to errors, but I think it’s more or less reasonable to ask authors to be careful about making sure that the parentheticals and references match. It’s a task that can be assigned even to undergraduates, a little dig I’m putting in here just to check to see if my undergraduate RA Molly is reading all my blog entries.)
The result of her check was that nearly every citation was, in fact, accurate. There were really only two cases in which my RA identified potential problems. One was a case in which she found the wording of a sentence a little misleading in terms of the words the authors chose to relay the idea in the source material. The second was a case in which a finding was reported without a caveat about the scope of the finding. There were a couple of other cases in which the distance between the citation and the source seemed to me debatable and probably defensible. I passed the results of the check on to the authors, whom I have asked to modify the document appropriately. I also asked them to send me a cover letter indicating the changes.
Overall, I think I would say that, based on N = 1, the results were better than I would have predicted. (This has nothing at all to do with the specific authors of the paper in question. My expectations are based on my overall sense of citation accuracy in journals.) That is, I would have guessed that there would have been a larger number of discrepancies, and that the citations would have been further off than it seems like they were. Still, I think it’s worth continuing the pilot program to see if this is a typical case. If it is, then I think that my view will be that it might not be worth continuing, or worth continuing only on something of a spot-check or random basis. If there are so few errors, working hard to find them might not be worth it. (Again, I’m open to discussion. Feel free to comment or send me a note directly.)
The other issue, which this pilot didn’t address, is page citations. On this issue there are some remarks in the comments section of the prior post, and I have also received some email on the topic. My sense is that people are in favor of adding pincites or, at minimum, of asking authors to provide page citations when they refer to entire books or other very lengthy works. I think maybe I’ll defer this issue until the editorial staff meets in Brazil at the Human Behavior and Evolution Society conference.
To my American readers, happy Thanksgiving. To my non-American readers, happy Thursday. See you all on the other side of the holiday.
I received a number of offline comments about my last post, in addition to the comments on the post itself. By and large, it seems that there is an interest in some sort of mechanism to check citations, so I have put a little pilot program into effect at Evolution and Human Behavior.
Here’s how I’m currently structuring it. I received a paper that is in its third iteration. After studying the new manuscript, I decided to accept it. I have delayed officially accepting it, however, and sent the manuscript to someone who has agreed to check the citations in exchange for an hourly wage. My employee is not a graduate student – so I am not taking her away from scholarly pursuits – but does have an undergraduate degree and has already shown competence, so I trust her to do a good job.
The economics are somewhat alarming. Suppose that I’m paying $12 per hour. She estimates it takes about twenty minutes to track down and verify a citation. So each citation costs $4 to check. If a paper has, say, 50 citations – which is often on the low side – then this process, conservatively, costs $200 per paper. If the journal publishes 50 papers per year, the annual budget would be $10,000, a tidy sum. Add in review papers, which might have three times as many cites, and the figure potentially increases substantially.
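For the record, the back-of-the-envelope arithmetic can be sketched as follows. (A toy calculation only; the function name and default figures are just the illustrative assumptions above, not real budget numbers.)

```python
def cite_check_cost(wage_per_hour=12.0, minutes_per_cite=20,
                    cites_per_paper=50, papers_per_year=50):
    """Return (cost per citation, cost per paper, annual cost) in dollars.

    Defaults are the post's illustrative assumptions: $12/hour,
    20 minutes per citation, 50 citations per paper, 50 papers per year.
    """
    per_cite = wage_per_hour * minutes_per_cite / 60.0   # $12/hr * 1/3 hr = $4
    per_paper = per_cite * cites_per_paper               # $4 * 50 = $200
    per_year = per_paper * papers_per_year               # $200 * 50 = $10,000
    return per_cite, per_paper, per_year

print(cite_check_cost())  # (4.0, 200.0, 10000.0)
```

Note how sensitive the annual figure is to the per-citation time: halving the twenty minutes halves the whole $10,000 budget, which is one argument for spot-checking rather than exhaustive verification.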
Despite this expense, I decided it would be worthwhile to conduct a pilot program along these lines. My expectation is to have a few papers checked in this way every month, as opposed to having all of them checked. There is some money in one of my budgets for this, so I can at least fund the pilot program. One question is how many cites are questionable. Maybe it’ll turn out that the answer is basically zero, and so the whole thing was more or less a tempest in a teapot. (This would still leave open the issue of whether the field should introduce pincites. I’m genuinely curious about this. Drop me a line to tell me if you’d prefer the present world, or the world in which you have to supply page numbers in your citations. A pain for authors, but a boon for readers.)
There is another cost beyond the labor. Instead of the paper in question going right to production, there will now be a lag between when the paper is accepted and when it goes to press. If my cite-checker finds a number of potentially incorrect cites, then there is another iteration of the manuscript introduced into the process, which might well be vexing for the author or authors.
Throwing money at the problem is not unreasonable, I think, for the moment, but I’m not sure that it’s sustainable over the long term or, even if it were, if the money spent this way would not be better spent on other things.
Having said that, my sense is that the law school solution is not really an option. I don’t think that it will, any time soon, improve a student’s reputation or job prospects to perform such tasks. The law school regime – work on the law review to get a clerkship – has no obvious equivalent in psychology as currently constituted, as far as I can tell.
Other ideas? A bond system? Should authors put up a $300 bond when they submit, which they get back if all the citations are correct? Would authors be willing to do this, or simply migrate to another outlet? A simple fee, as in the PLoS family of journals, to defray the cost? This seems to add insult to injury, given the existing resentment about Elsevier and its unseemly profits. A tax on members of the Human Behavior and Evolution Society? E&HB is the journal of the Society, and the Society benefits to the extent the journal is better. Consider it a public good produced by the membership? What about coercing advanced undergrads? We already more or less force them to participate in experiments in the name of pedagogy. Checking citations is educational, right? When people come to me to ask to volunteer in my lab, which is not too infrequently, do I stop turning them away, and say, yes, as a matter of fact there is something you can do to help…? It might take them longer, and they might not be as adept as a grad student would be, but the price sounds great…
Still open to ideas and discussion. The pilot program is under way…
The judicial branch is my favorite arm of the government of the United States. Ok, the judiciary isn’t as sexy as the lawgivers and warmongers, but at least it doesn’t hang up a “closed for business” sign when the justices are having a spat, and even the choleric Justice Thomas has the decency not to bomb anyone in his foul moods. Further, in contrast to some of the people in the other branches one might mention #cough# Joe Wilson #cough#, the members of the Supreme Court of the United States (SCOTUS) wear their office with quiet dignity. And if they stage a constitutional coup from time to time, well, at least their web site works.
Where does SCOTUS intersect with Evolution and Human Behavior? Recently (hat tip: Ray Hames who, I should add, I did not mean to imply was dead; he’s doing just fine), an article in Evolution and Human Behavior was cited in the context of a Supreme Court case. Kyle Gibson, the author of the piece in question, has a brief discussion of this on his blog. The case, Hollingsworth, et al. v. Perry, was about same-sex marriage, and the paper was cited in an amicus brief in support of the point that “adoption is good for children in need of permanent families.” So good for Kyle, homosexual couples, and children in need of permanent families. (Still, he added wistfully, nothing is truly forever…)
Reading the amicus brief reminded me of the various peculiarities of writing for the law. I have been fortunate to collaborate with legal scholars a couple of times, and got a front row seat to view the publication process for a paper in a law review.
I thought I would take a few moments to explain how the publication process in legal scholarship ought to be a point of deepest shame for those of us publishing in the (social) scientific literature.
I’m not talking about the fact that their papers are typeset so that only 100 words or so appear on each page. I’m sure there’s some reason or tradition that explains why they have eight inch margins on either side and on the bottom of their pages, which I’m guessing probably has something to do with lawyers having equity positions in paper firms.
Instead, I’m talking about a practice in publishing for law reviews that we in the social sciences don’t have that, to me, makes a certain amount of sense.
They make sure citations are right.
Many readers might already be aware of this, but after a paper has been accepted to a law review, an excruciating process is set in motion that makes the excruciating process of dealing with the copy editors at our journals feel as effortless as signing up for federal health insurance. (Ok, bad example…) Intrepid law students, who work for the respective law reviews, meticulously – and I mean meticulously – pore over each sentence, word, and syllable of the submitted paper and ensure that each claim, no matter how small, is cited and, moreover, is properly cited. To give you a sense of the no-claim-is-too-pedestrian-to-cite mentality, in explaining the data we presented in our paper, I reminded the reader that correlation coefficients range between -1.0 and +1.0. The student editor called out this sentence with a request that I “supply a source for this claim.” (Note, speaking of accurate citations. I actually don’t recall if that’s an exact quote. Pretty close though.) Such diligence leads, it is true, to a tremendous amount of material “below the line,” occasionally to the point where the main text is dwarfed by the supporting documentation below it.
Further, authors are asked to provide quoted material from the cited source that supports the claim made in the text. Often but not always these quotations are left in the footnotes for the reader’s reference.
Impressive, right? Sounds right scholarly, doesn’t it? But wait, there’s more.
When I was working on the papers for the law reviews, I learned the term “pin cite,” sometimes written as one word, “pincite.” This term refers to the practice of telling the reader on what page of the cited source the supporting ideas can be found. This would seem to be a fairly good idea; as Wikipedia puts it, a pincite “gives helpful information about the cited authority to the reader.” It does indeed. How often do you see a citation in a journal to a book or other lengthy publication that you’re sure would take you hours to track down if you actually bothered to try to look for it? Pincites add a burden on authors to find the precise place where a source supports the point they’re making, but ease the burden on the scholar consuming the work who wants to backtrack through the literature. We provide page numbers for quotations, yes, but rarely for anything else.
It makes a certain amount of sense that the legal community has these exacting standards. After all, legal decisions are often based on prior legal decisions, and of course there is the powerful principle of precedent that permeates SCOTUS and other corners of the bench. Law, in some important sense, is supposed to be accretive, building on prior decisions, and present decisions should be able to be traced back to prior decisions, legislation and, in some cases, relevant data, such as findings regarding whether adoption is good for children in need of permanent families. (It is. See Kyle Gibson, Differential Parental Investment in Families with Both Adopted and Genetic Children, 30 EVOL. & HUM. BEHAV. 184, 187 (2009). That “187” is the pincite, by the way.)
All of which would seem to be true of science as well. It’s not clear that the sciences ought to be any less accretive than the law: surely we’re supposed to be building upon prior knowledge, and a reader’s ability to interpret the present findings depends at least sometimes on the prior findings on which the present data and arguments rest. Not only do we not provide page numbers in our citations, but many of us have had the experience of tracking back through a citation and finding that a source doesn’t really support the claim for which the author cites it. (Raise your hand if you have… all of you…? Oh…)
A process such as that used at law reviews could go some way toward curing this respect in which social scientific scholarship looks to be deficient. Why aren’t authors required to indicate the page or pages on which supporting information is to be found?
More importantly, why don’t armies of graduate students – or postdocs or whoever – pore over each and every citation in papers before publication to ensure that citations are accurate?
This is not, in fact, a rhetorical question. And, sure, there are barriers. Will the students be paid? If so, where will the money come from? If they won’t be paid, how will they be recognized or compensated? What tasks will not be done because graduate students are devoting time to checking the accuracy of citations?
These are important questions. Still… Are we willing to say that the law requires greater care than science in documenting the connections to prior scholarship? Is it merely momentum (on our part) and tradition (on theirs) that explains the vast gulf in practices between the two fields?
Psychology is undergoing a number of transitions in the way that we do business, from statistics to replications to data archiving and more.
This is a good time to introduce innovations to try to make each paper more useful by ensuring that citations are both precise and accurate.
Now accepting proposals regarding how to proceed.
Speaking of moral dumbfounding, earlier this week my local paper, The Philadelphia Inquirer, ran a piece about a paper by some even-more-local-to-me fellow members of the University of Pennsylvania community. The article, by Matthew Allen and Peter Reese, is entitled “Financial Incentives for Living Kidney Donation: Ethics and Evidence,” and was published in the well-known outlet, the Clinical Journal of the American Society of Nephrology.
The term “living kidney donation” does not refer to the kidney equivalent of the Monty Python live liver bit in Meaning of Life, in which the organ is removed involuntarily and without anesthetic. Instead the distinction is between harvesting kidneys from live people as opposed to recently or about-to-be deceased people. At issue is the question of financial incentives. At present, it is illegal to sell your organs for transplant. As many have noted, organs, like sex, are fine to give away but not to sell. (Which does raise a question. It’s legal to get paid to have sex with someone if the sex is being recorded, and thus we have the porn industry. Suppose I filmed you selling your kidney for money… that would be ok, right? And thus is born a new industry… “kidney porn”…)
An interesting feature of the Allen and Reese paper is that it reviews a number of objections to selling kidneys, and refutes all but one of them.
For example, one worry is that the possibility of getting money for kidneys will “crowd out” donations made out of purely selfless, altruistic motives. That is, it could be that if some people were getting paid to donate kidneys, the people who would otherwise give them away would choose not to. This is a perfectly reasonable concern: if the whole point of allowing payment for organs is to increase the supply, then crowding out would defeat the purpose. Allen and Reese report, however, that the limited empirical data that get at this issue suggest that crowding out doesn’t occur.
Allen and Reese discuss other objections to paying for kidneys, including the “unjust inducement” critique – that if kidneys can be sold poor people will “unjustly” differentially choose to sell their kidneys, in the same way that allowing people to sell their labor “unjustly induces” poorer people to do things like work for a living and allowing people to sell knick-knacks “unjustly induces” others to finally clean out the goddamn garage. And there’s the “undue inducement” critique – that if kidneys can be sold people won’t be able to consent properly, being overpowered by the lure of the cash for their vital organ, in the same way that the fact that Skyrim expansion packs are allowed to be sold unduly induces some of us to engage in a transaction that will simultaneously make us worse off financially and waste a shit-ton of our time. The authors (handily) address these two as well, leaving one, the commodification critique:
The commodification critique holds that the human body has inestimable intrinsic value and allowing someone to sell the body, or part of it, degrades that person’s dignity. In short, the body is sacred and money is dirty.
Let’s hold aside the fact that human life can’t be inestimable insofar as apparently the EPA has helpfully estimated it. Anyway, my favorite line in the paper is that “the first three critiques could be verified through a trial of financial incentives for living kidney donation, but the commodification critique is not empirically testable.”
In some sense, they’ve run afoul of the issue I recently addressed, moral dumbfounding, though the issue also comes under other names, such as “protected” or “sacred” values. The idea is that there are some things that people are unwilling to allow, more or less independent of how much (other) people have to suffer to maintain the moral rule in question.
The psychological issue – or, really, issues – revolve around why there are such things as protected values at all – what a curious way to do evolutionary business. Really, I don’t care how much welfare is lost, but I won’t, in fine Meat Loaf fashion, under any circumstances, do that…? And of course there’s the more specific question of the mixing of organs and money. The idea that the body is sacred and money is dirty doesn’t seem to quite do it. After all, the body, being sacred, can still be cut into, pillaged for organs, and sewn back up, if it’s all done for love. (Never mind the other things that are perfectly fine to do for, on, and to bodies if it’s for love.) And money doesn’t make just any old transaction treif. I mean, when you sell me your old stroller on Craigslist, people find it positively charming. (Editor’s note: I do not want your old stroller.)
There’s something special about adding otherwise-not-so-dirty money to the otherwise perfectly-defilable-body-in-the-privacy-of-your-own-home that is specifically and narrowly morally horrible. And before you say that it’s just something WEIRD about decadent Americans, in almost no country in the world is it legally permissible to sell your organs.
So what the heck? What sort of value is being protected by keeping kidney-needing people on a lengthy waiting list, and money-needing people in want of money while in possession of twice as many kidneys as anyone really needs?
Proposals now being accepted. The only requirement is that the proposal has to potentially explain the phenomenon. “Bodies are sacred” is a super motto for a health spa (and “bodies are scared” would be a good motto for a horror film maybe), but it’s not an explanation for why combining money and surgery equals morally horrible. Placebo explanations need not apply.
Postscript: I know posts have been coming infrequently recently. I have some pretty good ~~excuses~~ reasons, but I can’t promise I’ll be able to speed up in the very near future. Still, thank you for visiting, and I hope you’ll check back from time to time.