Commentary/Baumard et al.: A mutualistic approach to morality

discounting; see McClure et al. 2007), but in either case I will often have the impulse to cheat when it is against my long-term interest. Since faking my motives is an entirely intrapsychic process, the only way I can commit myself not to do it is to interpret my current choice as a test case for how I am apt to choose in the future: "If I am hypocritical [or biased, or selfish ...] this time, why wouldn't I expect to be next time?" Thus bundled together, a series of impulses loses leverage against a series of better, later alternatives - greatly if the discounting is hyperbolic, less so but still possibly if the discounting is hyperboloid (Ainslie 2012). Then, to the extent that I am aware of my temptation problem, I will have an incentive to make personal rules against deciding unfairly - that is, to interpret each choice where I might be unfair as a test case of whether I can expect to resist this kind of temptation in the future. I draw the line between fair and unfair by the kind of reasoning that Baumard et al. describe, and then face reward contingencies that will be similar to those of a repeated prisoner's dilemma. Whatever my reputation is with other people, I will have a reputation with myself that is at stake in each choice, and which, like my social reputation, is disproportionately vulnerable to lapses (Monterosso et al. 2002). This dynamic can account for two of the three phenomena that the authors highlight as seeming anomalies for mutualism:

1. Although helping strangers without expectation of return can be rewarding in its own right, I may also help them because of a personal rule for fairness at times when I would rather cheat and could do so without social consequences. Then I do behave as if I had made a social contract. The contract is real, but exists between my present self and my expected future selves. Like the oral contracts among traders that Baumard et al. list (sect.
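The bundling claim above is quantitative, and a small numerical sketch can make it concrete. The sketch below assumes the Mazur/Ainslie hyperbolic form V = A/(1 + kD); the payoff amounts, delays, and discount rate are illustrative choices for this example, not figures from the commentary.

```python
def hyperbolic(amount, delay, k=1.0):
    """Mazur/Ainslie hyperbolic discounting: value = amount / (1 + k * delay)."""
    return amount / (1.0 + k * delay)

# Hypothetical payoffs (illustrative numbers only):
SS_AMOUNT, SS_DELAY = 5.5, 1   # smaller-sooner reward (the "impulse")
LL_AMOUNT, LL_DELAY = 10.0, 3  # larger-later alternative

# Faced as a single choice, the impulse wins:
single_ss = hyperbolic(SS_AMOUNT, SS_DELAY)   # 5.5 / 2  = 2.75
single_ll = hyperbolic(LL_AMOUNT, LL_DELAY)   # 10  / 4  = 2.50

# Bundle three repetitions of the same choice, 10 time units apart,
# as a personal rule does ("how I am apt to choose in the future"):
INTERVAL, N = 10, 3
bundled_ss = sum(hyperbolic(SS_AMOUNT, SS_DELAY + i * INTERVAL) for i in range(N))
bundled_ll = sum(hyperbolic(LL_AMOUNT, LL_DELAY + i * INTERVAL) for i in range(N))
# Now bundled_ll > bundled_ss: the series of later, larger rewards dominates.
```

Under exponential discounting the ratio of the two values is the same at every delay, so bundling can never reverse a preference; the reversal above depends on the hyperbolic curve, which is the point of the bundling argument.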
2.1.3, para. 1), my contract is self-enforcing. I may still get away with cheating, by means of the casuistry with personal rules called rationalization; or I may instead become hyper-moral, if I am especially fearful of giving myself an unfavorable self-signal (Bodner & Prelec 2001). Either deviation moves me away from optimal social desirability, but my central anchor is just where Baumard et al. say it should be.

2. To the extent that my reputation with myself feels vulnerable, I may reject an experimenter's instruction to maximize my personal payoff in a one-shot Prisoner's Dilemma or Dictator game, and instead regard the game as another test case of my character (Ainslie 2005). Such an interpretation makes it "not that easy . . . to shed one's intuitive social and moral dispositions when participating in such a game" (sect. 3.3.2, para. 1).

3. No further explanation seems necessary for the punishment phenomenon. It is not remarkable that subjects become angry at either cheating or moralizing stances by other subjects, and pay to indulge this anger. As with problem (2), the seeming anomaly arises from experimenters' assumptions that the reward contingencies they set up for a game are the only ones in subjects' minds.

As for the cognitive criteria for partners' value, talent and effort probably do not exhaust the qualities that are rationally weighed in social choice. Wealth or status conveyed by inheritance or the happenstance of history have always been factors, and transparency itself - how easy it is to be evaluated - must be one. But the authors' proposal of social selection will work perfectly well with other criteria for estimation. The hard part of their goal ("to contribute . . . proximate and ultimate explanations of human morality"; target article, Abstract) has been to explain the semblance of bargaining when counterparties are apparently absent.
This can be accomplished by the logic of internal intertemporal bargaining, without positing a specially evolved motive.

ACKNOWLEDGMENT
This material is the result of work supported with resources and the use of facilities at the Department of Veterans Affairs Medical Center, Coatesville, PA. The opinions expressed are not those of the Department of Veterans Affairs or of the US Government.

NOTE
1. This commentary is considered a work of the US government and as such is not subject to copyright within the United States.

Cooperation and fairness depend on self-regulation

doi: 10.1017/S0140525X12000696

Sarah E. Ainsworth and Roy F. Baumeister
Department of Psychology, Florida State University, Tallahassee, FL 32306-4301.
ainsworth@psy.fsu.edu baumeister@psy.fsu.edu
http://www.psy.fsu.edu/~baumeistertice/ainsworth.html
http://www.psy.fsu.edu/~baumeistertice/index.html

Abstract: Any evolved disposition for fairness and cooperation would not replace but merely compete with selfish and other antisocial impulses. Therefore, we propose that human cooperation and fairness depend on self-regulation. Evidence shows reductions in fairness and other prosocial tendencies when self-regulation fails.

The message of this commentary is that self-regulation plays a decisive role in social cooperation. Baumard et al. have proposed that cooperation and other moral behavior reflect an evolved disposition toward fairness. They elaborate that humans cooperate when the benefits of doing so outweigh the costs - as they often do, because the benefits include social acceptance. Humans depend on belonging to social groups in order to survive and reproduce, so natural selection favored traits, such as a disposition toward fairness, that facilitate groups. We agree, but with some reservations. Selfishness is natural in the animal kingdom, and humans have presumably not shed these selfish impulses. Therefore, fairness impulses must compete in the psyche against selfish impulses.
Self-regulation is the executive capacity to adjudicate among competing motivations, especially in favor of socially and culturally valued ones (e.g., Baumeister & Vohs 2007). Self-regulation may often be needed in order that the relatively new and fragile impulse toward fairness can prevail over hunger, greed, lust, anger, and other uncooperative impulses. The cost-benefit calculation described by Baumard et al. is further complicated by the fact that the costs of cooperation are often immediate, whereas the benefits are anticipated in the future. Most animals live in the present (Roberts 2002), and so the capacity to forego immediate gains for the sake of possible future benefits probably depends on the evolutionarily recent expansion of self-regulatory powers. Indeed, much of today's work on self-regulation is descended from Mischel's (e.g., 1974) studies on the capacity to delay gratification.

Empirical findings confirm the role of self-regulation in ensuring fairness and cooperation. This work has proceeded by exploiting the finding that the capacity for self-regulation functions like a limited energy resource akin to the folk notion of willpower: After self-regulating, performance suffers on other, seemingly unrelated self-regulation tasks, suggesting that some energy has been depleted (e.g., Baumeister & Tierney 2011). The state of diminished self-regulatory capacity is called ego depletion. Recent work has shown that fairness and helpfulness diminish when people have depleted their willpower. Banker et al. (in preparation) show that ego depletion causes people to become less fair in allocating rewards between self and others. Specifically, after exerting self-control in one context and then going to a different situation, people selfishly keep a larger portion of the cash stake for themselves instead of sharing it fairly. Outright dishonest behavior has also been shown to occur among ego-depleted participants. Mead et al.
(2009) let participants grade their own tests and claim cash

BEHAVIORAL AND BRAIN SCIENCES (2013) 36:1 79