Would You Cheat? Cheating Behavior, Human Nature, and Decision-Making
Indeed, they conducted an experiment that confirmed this prediction. The task was a modified version of the die-under-cup paradigm: participants could either roll a die (a 1€ to 5€ payoff for outcomes 1 to 5 respectively, a 0€ payoff for a 6, giving the gamble an expected value of 2.5€) or choose a fixed amount of money. This amount was manipulated across conditions: in one condition people were offered 3.5€, creating a situation where those who decided to roll could avoid being worse off only by falsely reporting a 4 or 5; in the other condition people were offered 2.5€, so they could lie by reporting a 3, 4 or 5 and still be better off. This seemingly irrelevant detail made a huge difference in the pattern of lying. The existence of intermediate options in the 2.5€ condition created a ‘golden solution’ – people were able to take a small increase in payoff (cheat) while avoiding the need to tell a ‘major’ lie, which would damage their self-concept.
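For readers who want the arithmetic spelled out, here is a minimal sketch (in Python, using the payoff rule as described above) that reproduces the expected value of the gamble and shows which reported outcomes beat each fixed offer:

```python
# Payoff rule from the experiment: reporting 1-5 pays that many euros, reporting 6 pays 0.
payoff = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 0}

# Expected value of honestly rolling once: (1 + 2 + 3 + 4 + 5 + 0) / 6 = 2.5
expected_value = sum(payoff.values()) / 6
print(f"Expected value of rolling: {expected_value}€")  # 2.5€

# Which (possibly false) reports leave a roller better off than each fixed offer?
for fixed_offer in (3.5, 2.5):
    better = [report for report, euros in payoff.items() if euros > fixed_offer]
    print(f"Fixed offer {fixed_offer}€: reports that beat it -> {better}")
# 3.5€ condition: only 4 or 5 beat the offer, so cheating requires a 'major' lie
# 2.5€ condition: 3, 4 or 5 beat it, so a small 'minor' lie already suffices
```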
This suggests that there must be some threshold of temptation that triggers cheating. People are less likely to cheat when it would only marginally increase their payoff, because the reward is insufficient compared to the psychological cost of lying. This raises a very interesting and important question – how do people determine when cheating is appropriate in a given circumstance? This is where the role of human morality becomes essential.
Because people are concerned with their self-concept rather than with following rules per se, they refrain from dishonest behaviors that would be perceived badly, either by others or by themselves. But when they are able to cognitively reconstruct the situation in a manner that makes their actions look moral, the problem disappears. In other words, people are motivated to seek justifications for their behavior. Prior studies have documented how people exploit ambiguities in a situation in order to engage in self-serving behavior (Snyder, Kleck, Strenta, & Mentzer, 1979) and even expend effort merely to convince others and themselves of their own morality (Batson, Kobrynowicz, Dinnerstein, Kampf, & Wilson, 1997; Batson, Thompson, Seuferling, Whitney, & Strongman, 1999). Kurzban (2010) investigates numerous instances of informational biases produced by the human brain and introduces the analogy of an internal ‘press secretary’ – a module in the mind that creates representations most advantageous to self-image but not necessarily accurate in their details. The role of the ‘press secretary’ is to create justifications for engaging in immoral action without harming one’s self-concept (see also von Hippel & Trivers, 2011). People can use the power of moral reasoning to classify certain actions as perfectly moral and profit from them. Various experiments have demonstrated this behavior.
Shalvi and his collaborators (Shalvi, Dana, Handgraaf, & De Dreu, 2011) conducted a study in the die-under-cup paradigm in which they manipulated the availability of self-justifications. Specifically, they hypothesized that when people observe desired counterfactuals, they are more likely to construct self-justifications and therefore engage in immoral behavior. In their experiment they manipulated how many times participants were able to roll a die – in one scenario they were instructed to roll only once and report the outcome; in the other they were instructed to roll the die multiple times in order to check whether it was fair, but to report only the first roll. It turned out that people cheated more in the multiple-roll scenario. Even though the (nonzero) immorality of cheating and the (zero) probability of being caught were held constant across treatments, people behaved differently because of the ease with which they could justify their cheating to themselves.
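To get a feel for how tempting the multiple-roll scenario is, the following simulation (an illustration built on the payoff scheme above, not part of the original study) compares honestly reporting the first roll with the self-serving strategy of reporting the best of three rolls:

```python
import random

def payoff(outcome: int) -> int:
    """Outcomes 1-5 pay their face value in euros; a 6 pays nothing."""
    return 0 if outcome == 6 else outcome

random.seed(0)
trials = 100_000
honest_total = 0
tempted_total = 0

for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(3)]  # roll three times 'to check the die'
    honest_total += payoff(rolls[0])                  # honest rule: report the first roll
    tempted_total += max(payoff(r) for r in rolls)    # self-serving rule: report the best roll

print(f"Report first roll: {honest_total / trials:.2f}€ on average")   # ~2.50€
print(f"Report best roll:  {tempted_total / trials:.2f}€ on average")  # ~3.96€
```

The desired counterfactual is rarely far away: with three rolls, the best outcome is worth almost 4€ on average, and a participant can reach it while telling themselves they merely reported "a number they actually rolled".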
Similar results were obtained by Lewis et al. (2012), who presented participants with hypothetical multiple-roll scenarios and asked what they would report had they rolled particular sequences. It turned out that people approve of lying more when there are desired counterfactuals that might justify the lie. Specifically, participants reported higher average outcomes for sequences such as 1-5-6 than for sequences such as 1-1-1.
Another study that documents people’s desire to appear honest without actually being so was conducted by Hao and Houser (2011). They designed a study that distinguished between efforts to appear honest and efforts to be honest. Specifically, they had participants play a two-stage game – in the first stage participants made predictions about the outcome of a roll of a four-sided die and then, in the second stage, rolled it and recorded the result. People were paid according to the accuracy of their predictions – the highest possible payoff ($25) was earned by betting 100% on a number that was then rolled, while rolling a number to which the participant had assigned zero probability paid nothing. The experimenters manipulated the opportunity to cheat – in one condition the roll recorded in the second stage was verified, in the other it was self-reported. Participants were informed whether their reports would be supervised, so they were aware of any opportunity to cheat.
The payoff-maximizing strategy was to assign 100% probability to a single number and report that number in the second stage, regardless of the actual outcome of the die roll. Interestingly, participants in both conditions made similar predictions, which suggests that those in the opportunity condition gave up financial gains in order to appear honest by making plausible predictions (assigning 100% would be suspicious); 95% of subjects in the opportunity condition made predictions with no more than 50% probability assigned to any single outcome. After establishing an honest appearance, people in the second stage either reported truthfully or cheated to the maximum extent. What seems to matter is an honest appearance, and people are willing to pay a price for it.
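The price of an honest appearance can be made concrete. Assuming, purely for illustration, that the payoff scales linearly with the probability placed on the reported number (the study’s exact payoff schedule may differ), a short sketch:

```python
# Hedged illustration of the prediction game; the linear rule below
# (payoff = $25 x probability assigned to the reported number) is an
# assumption made for this sketch, not necessarily the paper's schedule.
MAX_PAYOFF = 25.0

def payoff(prediction: dict[int, float], reported_roll: int) -> float:
    """Pay in proportion to the probability the subject placed on the reported side."""
    return MAX_PAYOFF * prediction.get(reported_roll, 0.0)

all_in = {1: 1.0}                          # 'suspicious' 100% bet on one side
honest_looking = {1: 0.5, 2: 0.3, 3: 0.2}  # <= 50% on any side, as 95% of subjects chose

# A cheater always reports whichever side carries the most probability:
print(payoff(all_in, 1))          # $25.00 - maximal gain, but looks like cheating
print(payoff(honest_looking, 1))  # $12.50 - the cap a cheater accepts to look honest
```

Under this assumed rule, keeping every bet at or below 50% means a determined cheater forgoes up to half of the maximum payoff purely to keep up appearances.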
This effect was further elucidated in a subsequent study by Hao and Houser (2013). Following the same design, they introduced a third, impulsive treatment, in which subjects submitted their predictions after rolling the die, as opposed to the planned treatment, in which subjects submitted predictions before rolling. This manipulation created a situation in which, in the planned treatment, dishonest-looking predictions would be read by others as deliberate cheating, whereas in the impulsive treatment misreported predictions provided no clear evidence of cheating (which was confirmed by asking independent evaluators to interpret various scenarios). The authors hypothesized that, for reputational reasons, honest-looking predictions (no more than 50% on the highest bet) would be more common in the planned treatment, and this was indeed the case (95% versus only 66% in the impulsive treatment).
These results provide insight into the reasons underlying moral compromise. Because humans are social, all actions are socially mediated: their value depends greatly on the environment. While the temptation to cheat appears to be a dilemma at the individual level, the ultimate reason for the dilemma in the first place is the desire to make one’s behavior socially acceptable, both outwardly and in conformity with one’s own self-image. Thus, the overreporting of die rolls equaling four in Fischbacher and Föllmi-Heusi’s (2013) experiment can be interpreted as a form of social signaling: the participant gives up 1 CHF in order to assure others that he or she does not cheat and is therefore a good person. Most people do not perform such cost-benefit analyses consciously, but at the unconscious level such calculations are at the heart of moral decision-making (Alexander, 1987).
Such patterns of self-concept maintenance and striving for a desirable appearance are not universal. So far participants have been assumed to be self-interested – the possibility that someone might cheat by reporting lower numbers than they actually rolled (thus decreasing their payoff) was not considered. But in fact, as Fischbacher and Utikal (2011) showed, it happens. They conducted a die-under-cup experiment in which they compared the reports of students and nuns. It turned out that both groups lied, but in different directions. Because the self-image of a nun is centered on not appearing greedy, with values such as modesty and humility being important to them, high numbers were systematically underreported and low numbers overreported. These results provide further support for the claim that when making decisions concerning fairness, the main factor determining the outcome is one’s own reputation, or at least one’s impression of it.
Another study documenting the dependence of ethical decision-making on contextual factors was conducted by Gino, Ayal and Ariely (2009). They had participants solve mathematical problems in a limited amount of time and then self-report their results. They introduced a manipulation in which a confederate posing as a participant made it clear that he had cheated but was not caught. It turned out that observing this person increased cheating, but only when participants identified with him (for example, when he wore a t-shirt of the university where the study was conducted). When participants did not identify with the cheater (for example, when he wore a t-shirt of a university the participants did not like), the cheating rate did not increase. Thus, people are primarily susceptible to information about how their actions might be perceived in a social setting they relate to; they react to perceptions of behavior in particular contexts, not to abstract concepts of morality or immorality.