Commentary on Krueger

Abstract: 54 words
Main Text: 974 words
References: 465 words
Total Text: 1493 words

Proper experimental design and implementation are necessary conditions for the move towards a balanced social psychology

Andreas Ortmann
Center for Economic Research and Graduate Education
Politickych veznu 7
111 21 Praha 1
Czech Republic
+420 224 005 117
Andreas.Ortmann@cerge-ei.cz
http://home.cerge-ei.cz/ortmann

Michal Ostatnicky
Center for Economic Research and Graduate Education
Politickych veznu 7
111 21 Praha 1
Czech Republic
+420 224 005 162
Michal.Ostatnicky@cerge-ei.cz

Abstract

We applaud the authors' basic message. We note that the negative research emphasis is not unique to social psychology and judgement and decision making. We argue that the proposed integration of NHST and Bayesian analysis is promising but will ultimately succeed only if more attention is paid to proper experimental design and implementation.



We do subscribe to the basic message of Krueger and Funder: there is a negative research emphasis in social psychology and judgement and decision making, and this emphasis hinders theory developments such as programs that try to understand to what extent heuristics that seem maladapted in laboratory settings may be quite reasonable in real-life settings (e.g., Gigerenzer, Todd, & the ABC Research Group 2000).

Krueger and Funder persuasively lay out the allure of such a negative research emphasis. Indeed, it is much more interesting (and, we submit, on average easier, faster, and less expensive) to generate violations of norms or conventions than to explain why those norms have arisen in the first place. While we are as surprised as the authors that decades of emphasis on norm violations have not yet dulled their allure, we do see evidence that, at least in psychology, the tide is turning (e.g., Gigerenzer 1991, 1996; Koehler 1996; Juslin, Winman, & Olsson 2000; Gigerenzer, Hertwig, Hoffrage, & Sedlmeier forthcoming). The target article strikes us as yet another good example of that encouraging trend.

Curiously, but perhaps not surprisingly, while the unbalanced view of humans as cognitive misers seems slowly but surely on its way out in social psychology and judgement and decision making, the heuristics-and-biases program, which bears most of the responsibility for that unbalanced view, has over the past decade invaded economics with little resistance (e.g., Rabin 1998; see Friedman 1998 for an early and lone attempt to stem the tide), amidst outrageous claims. To wit: "Mental illusions should be considered the rule rather than the exception" (Thaler 1991, p. 4). Sound familiar?

It is easy to see why the widespread practice of taking the predictions of canonical decision and game theory as explicit or implicit null hypotheses (e.g., the predictions of no giving in standard one-shot dictator, ultimatum, or various social dilemma games) has facilitated this development. While the simplistic rational actor paradigm surely deserves to be questioned, and while experimental evidence questioning it has recently generated some intriguing theory developments (e.g., Goeree & Holt 2001), the paradigm is often questioned by perfunctory reference to the various "anomalies" that psychologists in the heuristics-and-biases tradition claim to have discovered. This negative research strategy nowadays often goes under the name of behavioral economics and finance.

Alleged errors of judgement and decision making such as the overconfidence bias or the false consensus effect (or any other choice anomaly on the list provided in Table 1 of the target article) are taken to be stable and systematically replicable phenomena.1 Rabin (1998), whose article has become the symbolic reference for most self-anointed experts in behavioral economics and finance, is particularly explicit about this when he says, "I emphasize what psychologists and experimental economists have learned about people, rather than how they have learned about it" (Rabin 1998, p. 12).

Of course, there is no such thing as an empirical insight per se; each and every empirical result is a joint test of some (null) hypothesis about the behavior of people and of the way the test was designed and implemented. Think of giving behavior in dictator, ultimatum, or various other social dilemma games and how it can be systematically affected by social distance (e.g., Hoffman, McCabe, & Smith 1996), or think of the dramatic effects that real vs. hypothetical payoffs (e.g., Holt & Laury 2002) can have on choice behavior. Or take the false consensus effect (FCE), which figures prominently in the Krueger and Funder narrative. Mullen et al. (1985) argued that there was overwhelming evidence in the psychology literature that such an effect existed and that it was rather robust. Dawes (1989, 1990) questioned the meaning of the FCE as then defined. Interestingly, he found that a more appropriate definition (one that calls a consensus effect false only if one's own decision is weighed more heavily than that of a randomly selected person from the same population) often, but not always, shows just the opposite of what the old definition suggested.
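
To make the revised definition concrete, consider the following minimal sketch; the uniform prior and the use of Laplace's rule of succession are our illustrative assumptions, not necessarily Dawes' exact formulation. A Bayesian who treats her own response as a single random draw from the population, starting from a uniform (Beta(1,1)) prior over the population endorsement rate p and observing one response x in {0,1}, arrives at the posterior mean

    E[p | x] = (x + 1) / 3,

that is, an estimate of 2/3 after endorsing the item and 1/3 after rejecting it. Some projection of one's own choice onto others is therefore perfectly rational; a consensus effect is "truly false" in Dawes' sense only to the extent that the gap between endorsers' and non-endorsers' estimates exceeds what such single-observation updating licenses, i.e., only if one's own response is weighted more heavily than that of a randomly selected peer.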

Most recently, Engelmann & Strobel (2000) tested the false consensus effect the way it arguably should be done - with representative information and monetary incentives - and found that it disappears. Similar issues of representativeness of information and selective sampling of problems (as in the context of overconfidence), as well as more fundamental issues of the benefits and costs of certain experimental practices, are at the heart of the controversy surrounding the reality of cognitive illusions (e.g., Kahneman & Tversky 1996; Gigerenzer 1996; Gigerenzer, Hertwig, Hoffrage, & Sedlmeier forthcoming; Hertwig & Ortmann 2001) and, more generally, the negative research emphasis that Krueger and Funder persuasively attack.

An acknowledgement of the central role of experimental practices is curiously absent from Krueger and Funder's list of suggestions for moving towards a balanced social psychology. We therefore propose that thinking about methodological issues would be an appropriate addition, for both economists and psychologists, to their two empirical suggestions: to de-emphasize negative studies and to study the full range of behavior and cognitive performance.

We fully agree with the authors' critique of NHST (see also Gigerenzer, Krauss, & Vitouch forthcoming) and find promising their suggestion of integrating NHST with Bayesian concepts of hypothesis evaluation. We caution, however, that the success of such a strategy depends crucially on proper experimental design and implementation: the proper construction of the experimental (learning) environment (e.g., appropriate control of the social distance between experimenter and subjects, representativeness of information, and learning opportunities), proper financial incentives, and unambiguous and comprehensive instructions that facilitate systematic replication, among others (Hertwig & Ortmann 2001, 2003; Ortmann & Hertwig 2002).
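
To indicate schematically why experimental practice matters even under the proposed integration, recall the relation between posterior odds, the Bayes factor, and prior odds; this display is our illustrative rendering, not the authors' specific proposal:

    P(H0 | D) / P(H1 | D) = [ P(D | H0) / P(D | H1) ] x [ P(H0) / P(H1) ].

A significant p-value constrains only P(D | H0); it licenses no statement about P(H0 | D) until the likelihood of the data under the alternative is also specified. Both likelihoods, however, are likelihoods of the data as actually generated: if instructions are ambiguous, incentives absent, or sampling unrepresentative, then D reflects the implementation as much as the hypothesis, and the Bayes factor inherits the distortion. Hence our insistence that the integration of NHST and Bayesian evaluation stands or falls with experimental design and implementation.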


Footnotes:

1. The fact that pretty much every bias enumerated in Table 1 has a contradictory sibling has escaped the attention of almost all economists.

References

Dawes, R.M. (1989). Statistical criteria for establishing a truly false consensus effect. Journal of Experimental Social Psychology, 25, 1-17.

Dawes, R.M. (1990). The potential nonfalsity of the false consensus effect. In R.M. Hogarth (Ed.), Insights in Decision Making: A Tribute to Hillel J. Einhorn. Chicago: University of Chicago Press.

Engelmann, D. & Strobel, M. (2000). The false consensus effect disappears if representative information and monetary incentives are given. Experimental Economics, 3(3), 241-260.

Friedman, D. (1998). Monty Hall's three doors: Construction and deconstruction of a choice anomaly. The American Economic Review, 88(4), 933-46.

Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond heuristics and biases. In W. Stroebe & M. Hewstone (Eds.), European Review of Social Psychology, vol. 2. Wiley.

Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky (1996). Psychological Review, 103, 592-96.

Gigerenzer, G., Hertwig, R., Hoffrage, U. & Sedlmeier, P. (Forthcoming). Cognitive illusions reconsidered. In C.R. Plott & V.L. Smith (Eds.), Handbook of experimental economics results. Amsterdam: Elsevier-North-Holland.

Gigerenzer, G., Krauss, S. & Vitouch, O. (Forthcoming). The null ritual: What you always wanted to know about significance testing but were afraid to ask. In D. Kaplan (Ed.), Handbook on Quantitative Methods in the Social Sciences. New York: Sage.

Gigerenzer, G., Todd, P.M. & the ABC Research Group (2000). Simple heuristics that make us smart. Oxford University Press.

Goeree, J.K. & Holt, C.A. (2001). Ten little treasures of game theory and ten intuitive contradictions. The American Economic Review, 91(5), 1403-1422.

Hertwig, R. & Ortmann, A. (2001). Experimental practices in economics: A methodological challenge for psychologists? Behavioral and Brain Sciences, 24(3), 383-451.

Hertwig, R. & Ortmann, A. (2003). Economists' and psychologists' experimental practices: How they differ, why they differ, and how they could converge. In I. Brocas & J.D. Carrillo (Eds.), The Psychology of Economic Decisions. Oxford University Press.

Hoffman, E., McCabe, K.A. & Smith, V.L. (1996). Social distance and other-regarding behavior in dictator games. American Economic Review, 86, 653-60.

Holt, C.A. & Laury, S.K. (2002). Risk aversion and incentive effects. The American Economic Review, 92(5), 1644-1655.

Juslin, P., Winman, A. & Olsson, H. (2000). Naive empiricism and dogmatism in confidence research: A critical examination of the hard-easy effect. Psychological Review, 107, 384-96.

Kahneman, D. & Tversky, A. (1996). On the reality of cognitive illusions: A reply to Gigerenzer’s critique. Psychological Review, 103, 582-91.

Koehler, J.J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral and Brain Sciences, 19, 1-53.

Mullen, B., Atkins, J.L., Champion, D.S., Edwards, C., Hardy, D., Story, J.E. & Vanderklok, M. (1985). The false consensus effect: A meta-analysis of 115 hypothesis tests. Journal of Experimental Social Psychology, 21, 263-283.

Ortmann, A. & Hertwig, R. (2002). The cost of deception: Evidence from psychology. Experimental Economics, 5(2), 111-131.

Rabin, M. (1998). Psychology and economics. Journal of Economic Literature, 36, 11-46.

Thaler, R.H. (1991). Quasi rational economics. New York: Russell Sage Foundation.