Having established what general mechanisms underlie decisions about cheating, we can move on to another relevant question: what is the default option when people face an opportunity to cheat? People are governed by two contradictory mechanisms: one inclines them to exploit opportunities for gain, while the other helps them maintain a sense of their own honesty. Because of these counteracting impulses, an irrational pattern of cheating emerges. But which side has the greater pull?
Greene and Paxton (2009) attempted to answer this question by examining neural activity during honest and dishonest decision-making. They put forward two hypotheses that could explain honest decisions in the face of temptation: (1) the will hypothesis, which holds that honesty results from actively resisting temptation, so brain regions responsible for cognitive control should be engaged when people behave honestly; and (2) the grace hypothesis, which holds that being honest requires no additional cognitive control. The first is more favorable to the corrupt view of human nature, whereas the second is more compatible with the good-natured view.
Participants in their experiment had to predict the outcomes of fair coin tosses and were rewarded for doing so correctly. The cover story was that the experiment examined paranormal abilities to foresee the future when one makes predictions privately and is financially motivated to be accurate. The real purpose was to measure dishonesty. In the no-opportunity (control) condition, participants reported their predictions before each toss, so cheating was impossible; in the opportunity condition, they reported after the toss whether their prediction had been correct, so they could cheat with no chance of being caught.
Figure 2. Distribution of self-reported percent Wins in the Opportunity condition (Greene & Paxton, 2009, p. 12507).
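Greene and Paxton identified likely cheaters statistically, from reported win rates too high to be credible under honest guessing. The snippet below is a minimal sketch of that binomial logic in Python; the function name and the trial numbers are illustrative, not taken from the paper.

```python
from math import comb

def p_at_least(wins: int, trials: int, p: float = 0.5) -> float:
    """Probability of reporting at least `wins` correct predictions
    out of `trials` fair coin tosses if no cheating occurs."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(wins, trials + 1))

# Hypothetical participant: 54 reported wins in 70 opportunity trials.
# Under honest guessing this is roughly 5e-06, i.e., essentially
# impossible, so such a participant would land in the dishonest group.
print(p_at_least(54, 70))
```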
Based on the distribution of reported outcomes, participants were divided into an honest and a dishonest group, which were analyzed separately. For the honest group, the crucial contrast was between loss trials in the opportunity condition and loss trials in the no-opportunity condition. The two hypotheses make different predictions here: the will hypothesis predicts that opportunity-loss trials should show increased activity in areas involved in cognitive control (the anterior cingulate cortex, dorsolateral prefrontal cortex, and ventrolateral prefrontal cortex) and longer reaction times than no-opportunity loss trials; the grace hypothesis predicts no differences.
It turned out that the grace hypothesis held for honest participants: there were no differences in reaction time or neural activity. These results suggest that some people manage to be honest without even registering their opportunity for gain; these participants did not take the dishonest option into consideration at all. This result is surprising because it contradicts both the intuitive understanding of human moral decision-making and the standard economic model.
However, the grace hypothesis cannot be generalized. Analysis of responses in the dishonest group led to the opposite conclusion: reaction times were longer in opportunity-loss trials than in opportunity-win trials, and greater activity in control-related regions was observed in the opportunity condition than in the control condition (in both win and loss trials). The dishonest group therefore seems to support the will hypothesis: their default response is the desire to cheat, and they must engage considerable cognitive control in these decisions.
A question arises: which explanation applies to the majority of people? As Greene and Paxton (2009) point out, the "[g]race hypothesis applies only to honest decisions in individuals who consistently behaved honestly and not to decisions reflecting limited honesty" (p. 12509). In their experiment, some participants did not even consider the possibility of cheating, but it is not implausible that under different circumstances they too would follow the dishonest pattern. Honesty thus seems to result either from failing to register an opportunity for gain or from actively rejecting temptation once such an opportunity is considered.
Commenting on these results, Abe (2011) notes that the exact role of the prefrontal cortex in decisions about deception is not clear. It may be engaged in active deliberation about an action (cost-benefit analysis) or in attempts to resist temptation. It is also uncertain which regions predispose people to different response patterns. As he argues, such decisions may result from an interaction between the prefrontal cortex and subcortical areas that provide motivation (reward seeking).
Further research provides some evidence for the will hypothesis. Shalvi, Eldar, and Bereby-Meyer (2012) conducted a die-under-cup study examining which response is the automatic one: honesty, which can turn into dishonesty when time and justifications are available, or dishonesty, which can be resisted when people have enough time and lack justifications. The authors hypothesized that cheating is a natural tendency that must be overcome by cognitive control.
In the first experiment, they instructed participants to roll a die three times and to report the first outcome, manipulating the time available across conditions: one group had to report under time pressure, the other did not. Those who had more time to deliberate turned out to be more honest than subjects under time pressure. In the second experiment, the availability of justifications was reduced by instructing participants to roll only once. The results confirmed the hypothesis: under low time pressure, cheating was less pervasive than in the first experiment, but no difference emerged under high time pressure. As the authors argue, people can behave ethically when they have time and lack justifications.
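The role of justifications in the first experiment has a simple arithmetic core: a participant who reports the highest of the three observed rolls, rather than the first as instructed, raises the expected report without ever naming an outcome that did not occur. The enumeration below is my own sketch of this gap, assuming payment increases with the reported number.

```python
from itertools import product
from statistics import mean

# All 216 equally likely outcomes of three private die rolls.
triples = list(product(range(1, 7), repeat=3))

honest = mean(t[0] for t in triples)       # report the first roll, as instructed
justified = mean(max(t) for t in triples)  # report the highest of the three

print(honest)     # 3.5
print(justified)  # ~4.96
```

Rolling only once, as in the second experiment, removes exactly this justification.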
Similar results were obtained by Gunia and collaborators (Gunia, Wang, Huang, Wang, & Murnighan, 2012), who had participants play a modified version of Gneezy's (2005) deception game, in which $15 is divided between two players. Player 1 sees two options: one that earns him or her $10 and leaves $5 to the other player, and one that earns him or her $5 and leaves $10 to the other player. Player 1 must send Player 2 a message indicating which option is more profitable for Player 2, and Player 2 then decides which option to choose. Again, the time available for Player 1's decision was manipulated, and more immediate choices proved less ethical: only about 56% of participants sent a truthful message in the immediate condition, whereas about 87% did so in the contemplation condition (more time for deliberation).
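To make the incentive structure explicit, here is a hedged encoding of the game in Python; the option labels "A" and "B" and the helper name truthful_message are mine, not part of Gneezy's design.

```python
# Payoffs are (Player 1's payoff, Player 2's payoff) in dollars.
PAYOFFS = {"A": (10, 5), "B": (5, 10)}

def truthful_message(payoffs: dict[str, tuple[int, int]]) -> str:
    """A message is truthful when it names the option that actually
    maximizes Player 2's payoff."""
    best_for_p2 = max(payoffs, key=lambda opt: payoffs[opt][1])
    return f"Option {best_for_p2} earns you more money than the other."

print(truthful_message(PAYOFFS))  # names option B
# A deceptive Player 1 names option A instead, hoping Player 2 follows
# the message and leaves Player 1 the $10 payoff.
```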
Further exploration of the interplay between will and grace in the moral domain was undertaken by Fosgaard and collaborators (Fosgaard, Hansen, & Piovesan, 2013), who sought to distinguish two effects in cheating: (1) a grace effect, in which one becomes aware that cheating is an option and thereby loses grace; and (2) a will effect, in which one infers a norm that cheating is acceptable and actively chooses to cheat. In their experiment, participants tossed a coin privately and reported the result on a sheet of paper, receiving 10 DKK if the toss resulted in white and nothing if it resulted in black. Importantly, the subjects were students from the same class. Two manipulations were introduced (a 2 x 2 design): a suggestion of whether cheating is an option (earlier reports on the sheet were either all white or fairly distributed), and a suggestion of whether those reports had been made by classmates (handwritten) or by someone else (preprinted).
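A short sketch can summarize how the design separates the two effects; the condition labels and the interpretive comments are my own reading of the manipulations described above.

```python
from itertools import product

# Hypothetical labels for the 2 x 2 manipulations.
prior_reports = ("all_white", "fairly_distributed")            # is cheating made salient?
report_source = ("handwritten_classmates", "preprinted_other")  # whose reports are they?

for reports, source in product(prior_reports, report_source):
    print(reports, source)

# Interpretation under the two effects:
# - grace effect: merely seeing all-white reports (from either source)
#   makes cheating salient, so awareness alone should matter.
# - will effect: all-white reports from classmates additionally suggest
#   that cheating is an accepted norm, so the source should matter too.
```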
Abe, N. (2011). How the brain shapes deception: An integrated review of the literature. The Neuroscientist, 17(5), 560-574.
Akerlof, G. A. (1983). Loyalty filters. American Economic Review, 73(1), 54-63.
Alexander, R. D. (1987). The biology of moral systems. Hawthorne, NY: Aldine de Gruyter.
Ariely, D., Bracha, A., & Meier, S. (2009). Doing good or doing well? Image motivation and monetary incentives in behaving prosocially. American Economic Review, 99(1), 544-555.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.
Bandura, A. (1990). Selective activation and disengagement of moral control. Journal of Social Issues, 46(1), 27-46.
Bandura, A. (1991). Social cognitive theory of moral thought and action. In W. M. Kurtines & J. L. Gewirtz (Eds.), Handbook of moral behavior and development: Theory, research and applications (Vol. 1, pp. 71-129). Hillsdale, NJ: Erlbaum.
Batson, C. D., Kobrynowicz, D., Dinnerstein, J. L., Kampf, H. C., & Wilson, A. D. (1997). In a very different voice: Unmasking moral hypocrisy. Journal of Personality and Social Psychology, 72(6), 1335-1348.
Batson, C. D., Thompson, E. R., Seuferling, G., Whitney, H., & Strongman, J. A. (1999). Moral hypocrisy: Appearing moral to oneself without being so. Journal of Personality and Social Psychology, 77(3), 525-537.
Becker, G. S. (1968). Crime and punishment: An economic approach. The Journal of Political Economy, 76(2), 169-217.
Cushman, F., Gray, K., Gaffey, A., & Mendes, W. B. (2012). Simulating murder: The aversion to harmful action. Emotion, 12(1), 2-7.
Dana, J., Cain, D. M., & Dawes, R. M. (2006). What you don’t know won’t hurt me: Costly (but quiet) exit in dictator games. Organizational Behavior and Human Decision Processes, 100(2), 193-201.
Dana, J., Weber, R. A., & Kuang, J. X. (2007). Exploiting moral wiggle room: experiments demonstrating an illusory preference for fairness. Economic Theory, 33(1), 67-80.
Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise—an experimental study on cheating. Journal of the European Economic Association, 11(3), 525-547.
Fischbacher, U., & Utikal, V. (2011). Disadvantageous lies (Working paper No. 71). Thurgau Institute of Economics and Department of Economics, University of Konstanz.
Fosgaard, T. R., Hansen, L. G., & Piovesan, M. (2013). Separating will from grace: An experiment on conformity and awareness in cheating. Journal of Economic Behavior & Organization, 93, 279-284.
Gino, F., Ayal, S., & Ariely, D. (2009). Contagion and differentiation in unethical behavior: The effect of one bad apple on the barrel. Psychological Science, 20(3), 393-398.
Gintis, H. (2000). A great book with an outdated model of human behavior [Review of the book The biology of moral systems, by R. D. Alexander]. Retrieved from http://www.amazon.com/review/R1YJET21KXATC/
Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (Eds.). (2006). Moral sentiments and material interests: The foundations of cooperation in economic life. Cambridge, MA: MIT Press.
Gneezy, U. (2005). Deception: The role of consequences. American Economic Review, 95(1), 384-394.
Greene, J. D., & Paxton, J. M. (2009). Patterns of neural activity associated with honest and dishonest moral decisions. Proceedings of the National Academy of Sciences, 106(30), 12506-12511.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.
Gunia, B. C., Wang, L., Huang, L., Wang, J., & Murnighan, J. K. (2012). Contemplation and conversation: Subtle influences on moral decision making. Academy of Management Journal, 55(1), 13-33.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814-834.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998-1002.
Haidt, J., & Bjorklund, F. (2008). Social intuitionists answer six questions about moral psychology. In W. Sinnott-Armstrong (Ed.), Moral Psychology, Volume 2: The Cognitive Science of Morality: Intuition and Diversity (pp. 181-217). Cambridge, MA: MIT Press.
Hao, L., & Houser, D. (2011). Honest lies (ICES 2011-03). Fairfax, VA: Interdisciplinary Center for Economic Science, George Mason University.
Hao, L., & Houser, D. (2013). Perceptions, intentions, and cheating (ICES 2013-02). Fairfax, VA: Interdisciplinary Center for Economic Science, George Mason University.
Jiang, T. (2013). Cheating in mind games: The subtlety of rules matters. Journal of Economic Behavior & Organization, 93, 328-336.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.
Kurzban, R. (2010). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton, NJ: Princeton University Press.
Lewis, A., Bardis, A., Flint, C., Mason, C., Smith, N., Tickle, C., & Zinser, J. (2012). Drawing the line somewhere: An experimental study of moral compromise. Journal of Economic Psychology, 33(4), 718-725.
List, J. A. (2007). On the interpretation of giving in dictator games. Journal of Political Economy, 115(3), 482-493.
Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45(6), 633-644.
Moore, C., & Tenbrunsel, A. E. (2014). “Just think about it”? Cognitive complexity and moral choice. Organizational Behavior and Human Decision Processes, 123(2), 138-149.
Moore, D. A., & Loewenstein, G. (2004). Self-interest, automaticity, and the psychology of conflict of interest. Social Justice Research, 17(2), 189-202.
Paharia, N., Vohs, K. D., & Deshpandé, R. (2013). Sweatshop labor is wrong unless the shoes are cute: Cognition can both help and hurt moral motivated reasoning. Organizational Behavior and Human Decision Processes, 121(1), 81-88.
Pinker, S. (2003). The blank slate: The modern denial of human nature. New York, NY: Penguin.
Ruedy, N. E., Moore, C., Gino, F., & Schweitzer, M. E. (2013). The cheater’s high: The unexpected affective benefits of unethical behavior. Journal of Personality and Social Psychology, 105(4), 531-548.
Schweitzer, M. E., & Hsee, C. K. (2002). Stretching the truth: Elastic justification and motivated communication of uncertain information. Journal of Risk and Uncertainty, 25(2), 185-201.
Shalvi, S., Dana, J., Handgraaf, M. J., & De Dreu, C. K. (2011). Justified ethicality: Observing desired counterfactuals modifies ethical perceptions and behavior. Organizational Behavior and Human Decision Processes, 115(2), 181-190.
Shalvi, S., Eldar, O., & Bereby-Meyer, Y. (2012). Honesty requires time (and lack of justifications). Psychological Science, 23(10), 1264-1270.
Shalvi, S., Handgraaf, M. J., & De Dreu, C. K. (2011). Ethical manoeuvring: why people avoid both major and minor lies. British Journal of Management, 22, S16-S27.
Shalvi, S., & Leiser, D. (2013). Moral firmness. Journal of Economic Behavior & Organization, 93, 400-407.
Snyder, M. L., Kleck, R. E., Strenta, A., & Mentzer, S. J. (1979). Avoidance of the handicapped: An attributional ambiguity analysis. Journal of Personality and Social Psychology, 37(12), 2297-2306.
Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22(6), 664-670.
Tenbrunsel, A. E., Diekmann, K. A., Wade-Benzoni, K. A., & Bazerman, M. H. (2010). The ethical mirage: A temporal explanation as to why we are not as ethical as we think we are. Research in Organizational Behavior, 30, 153-173.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.
Trivers, R. L. (1971). The evolution of reciprocal altruism. The Quarterly Review of Biology, 46(1), 35-57.
von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34(1), 1-16.
Wright, R. (1994). The moral animal: Why we are the way we are: The new science of evolutionary psychology. New York, NY: Vintage Books.
Xu, Z. X., & Ma, H. K. (2014). Does honesty result from moral will or moral grace? Why moral identity matters. Journal of Business Ethics, 1-14.
1.) In order to avoid misunderstanding, I think it is important to note that self-interested does not mean bad. On this interpretation, good behaviors such as charity donations or other forms of helping are possible, but the ultimate reason for engaging in them is self-interest (in the case of altruistic behaviors it may be, for example, the reputation gain from appearing moral).
2.) Originally developed by Fischbacher and Föllmi-Heusi (2013). To assure participants that no one secretly observes the roll outcomes, the die is placed under a cup with a small hole through which the roller checks the result.
3.) It is important to note that, as Kurzban (2010) points out, notions of self-concept, self-protection, etc. are all problematic because, given our knowledge of the architecture of the human brain, it is not clear what exactly the self is. I use these terms because they are convenient, but on closer examination they make little sense.