file: c:\...\papers\...\JEL prospectus.wpd revised: 4/17/2000 printed:May 25, 2000
Biases and Heuristics in Psychology and Economics
Andreas Ortmann (a,b) and Ralph Hertwig (a)

(a) Center for Adaptive Behavior and Cognition
Max Planck Institute for Human Development
14195 Berlin, GERMANY

(b) Charles University/Academy of Sciences of the Czech Republic
Politickych veznu 7
111 21 Prague 1, CZECH REPUBLIC
Article proposal for
The Journal of Economic Literature
Research on human reasoning, judgment, and decision making has been shaped by two major programs. The first assumed that the laws of logic and probability theory represent the laws of rational reasoning, and that humans actually follow these laws. Variants of this view, which traces its origin to the Enlightenment, range from Jean Piaget’s formal operations to the Bayesian models of reasoning so familiar to economists. While the first program thus emphasizes human rationality, the second emphasizes human irrationality. Like the Enlightenment program, it assumes that rational judgment can be reduced to the laws of logic and probability, but its proponents claim that human cognition systematically deviates from these norms. Such systematic departures from norms have become known as "biases," "cognitive illusions," or "anomalies." Well-known examples include the overconfidence bias, the illusion of control, the base rate fallacy, the conjunction fallacy, the belief in the law of small numbers, the false consensus effect, the hindsight bias, the endowment and certainty effects, preference reversals, and the confirmation bias. These alleged systematic departures from norms have led to the belief that ordinary people, and even experts, are cognitive misers whose reasoning, judgment, and decision making abilities are "an embarrassment to the picture of human beings as rational beings" (Oberauer, Wilhelm, & Diaz, 1999). Proponents of what has become known as the heuristics-and-biases program (e.g., Tversky & Kahneman 1974; Kahneman & Tversky 1996) have suggested that mental short-cuts, or "heuristics," such as availability and representativeness (used to explain, among other things, the base rate fallacy, the conjunction fallacy, and the belief in the law of small numbers) or anchoring and adjustment (used to explain the hindsight bias), are responsible for these alleged systematic departures from norms.
The heuristics-and-biases program has been the dominant paradigm in research on human reasoning, judgment, and decision making over the past three decades (Lopes 1991). More recently, it has caught the attention of numerous social scientists, including noted economists (e.g., Hanson & Kysar 1999; Camerer 1995; Rabin 1998; Barber & Odean 1999, 1999a; Odean 1999). In fact, much of today’s behavioral economics and finance draws its inspiration and concepts from the heuristics-and-biases paradigm (e.g., Thaler 1993; Shiller 2000; Goldberg & von Nitzsch 1999). There are good reasons for this attention, as systematic biases may have important economic implications. Camerer (1995, p. 594), for example, has conjectured that the well-documented high failure rate of small businesses may be due to overconfidence, while Odean and his collaborators have argued that overconfidence based on misinterpretation of random sequences of successes leads some, typically male, investors to trade too much. Shiller draws explicitly on the experimental findings of Kahneman & Tversky to explain "irrational exuberance" in the stock market. Hanson & Kysar argue that the reality of cognitive illusions has opened the door to systematic manipulation of consumer product markets.
In light of its rapidly growing acceptance among economists and other scholars, it is interesting to note that the heuristics-and-biases program has been under attack for some time among psychologists (e.g., Christensen-Szalanski & Beach 1984; Lopes 1991; Gigerenzer 1991a, 1996). These days, important parts of the heuristics-and-biases program, such as the base rate fallacy, are considered a myth at least in some quarters (e.g., Koehler 1996). This critique has challenged the notion of human decision makers as systematically flawed bumblers. It draws on notions of bounded rationality (e.g., Simon 1956, 1990) to argue that humans have evolved surprisingly effective simple decision rules that serve them well in many contexts (e.g., Gigerenzer, Todd, & the ABC Research Group, 1999), and it redefines what constitutes rationality by taking into account constraints on resources such as time, knowledge, and cognitive processing capacity.
It is the purpose of this article to familiarize economists with the (sometimes very public) debate between advocates of the heuristics-and-biases program and those in the ecological rationality program (e.g., Kahneman & Tversky 1996; Gigerenzer 1996).
The article we envision will have an introduction extending the preceding four paragraphs, three sections that are motivated and sketched below, and a concluding discussion.
1. Biases in psychology.
Biases are defined as departures from classic norms of reasoning, judgment, and decision making as codified in the laws of logic and probability theory. The heuristics-and-biases program suggested that biases are systematic and widespread. To investigate this possibility, psychologists invented numerous tests of deductive and inductive reasoning. For instance, the Wason selection task, "the most intensely researched single problem in the history of the psychology of reasoning" (Evans, Newstead, & Byrne, 1993, p. 99), is often taken to demonstrate that people cannot properly evaluate conditional statements and do not engage in falsificationist strategies of testing hypotheses. Likewise, and perhaps more importantly for modern economics, the "Linda problem" was designed to test people’s understanding of "the simplest and most fundamental qualitative law of probability" (Tversky & Kahneman, 1983, p. 294), the conjunction rule, which holds that the mathematical probability of a conjoint event cannot exceed that of either of its components. As Tversky & Kahneman (1983, p. 313) put it, "A system of judgments that does not obey the conjunction rule cannot be expected to obey more complicated principles that presuppose this rule, such as Bayesian updating, external calibration, and the maximization of expected utility." Other experimental demonstrations suggested that humans are prone to the overconfidence bias; the base rate fallacy (taken to demonstrate that people overweight individuating information about an event or member of a population and underweight the relevant base rates, thus calling into question whether people could be Bayesians); the belief in the law of small numbers (people overestimate the information contained in small samples); and false consensus effects (people overestimate the degree to which others share their views).
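The conjunction rule at issue in the Linda problem can be stated compactly; A and B below are arbitrary events:

```latex
P(A \wedge B) \;\le\; \min\{P(A),\, P(B)\}
```

In the Linda problem, the probability that Linda is a bank teller and active in the feminist movement therefore cannot exceed the probability that she is a bank teller.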
All of these demonstrations of biases have cast doubt on people’s ability to engage in Bayesian reasoning, an important assumption underlying much of modern economic theory.
Documented biases have typically been the results of experimental implementations of word problems meant to capture the essence of a logical or probabilistic proposition. In typical experiments, subjects were given word problems designed in such a manner that reasoning according to a normative principle of logic or probability theory would lead to a "correct" response; reasoning according to other principles would lead to a qualitatively different and -- as seen through the lens of the normative principle -- "incorrect" response. The results -- for instance, in a first battery of experimental tests, typically only about 10% of subjects made the logically correct choice in the Wason selection task, and 80-90% of subjects violated the conjunction rule in the Linda problem -- seemed to demonstrate that subjects’ reasoning did not match the model of human reasoning proposed by propositional logic and probability theory, and that ordinary people, and even experts, are cognitively challenged.
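The falsificationist benchmark in the Wason selection task can be computed mechanically. The sketch below assumes the classic abstract version of the task (rule: "if a card has a vowel on one side, it has an even number on the other"; visible faces E, K, 4, 7); it is an illustration of the normative answer, not a model of how subjects actually reason:

```python
def can_falsify(visible):
    """A card can falsify 'if vowel then even' only if its hidden side
    could reveal a vowel paired with an odd number."""
    if visible.isalpha():
        return visible in "AEIOU"   # a vowel: hidden number might be odd
    return int(visible) % 2 == 1    # an odd number: hidden letter might be a vowel

cards = ["E", "K", "4", "7"]
print([c for c in cards if can_falsify(c)])  # → ['E', '7']
```

Only about 10% of subjects choose exactly E and 7; the modal responses (E alone, or E and 4) amount to seeking confirming rather than disconfirming evidence.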
Section One of the proposed article will present three major biases -- the conjunction fallacy, the overconfidence bias, and the base rate fallacy, all of which will be used as running examples in the remainder of the article -- plus brief discussions of about half a dozen other cognitive illusions. While we will focus on biases in inductive reasoning, more specifically on biases in subjective probability (confidence) and frequency judgments, we will also discuss biases in deductive reasoning (e.g., the confirmation bias).
2. Deconstructing biases in psychology.
The impressive collection of instances of cognitive illusions documented in the heuristics-and-biases program has challenged the validity of economic models. Specifically, the documented cognitive illusions have challenged the assumption that probabilistic judgments are consistent and unbiased (Thaler, 1991, p. 115). Unfortunately, the pertinent findings of the heuristics-and-biases program – namely the claim that "mental illusions should be considered the rule rather than the exception" (Thaler, 1991, p. 4) -- have themselves come under heavy fire. This critique has typically not been acknowledged by those economists who adopted the heuristics-and-biases program as an alternative paradigm to the standard rational-actor paradigm. In our view, the economics profession thus has a rather incomplete picture of research on uncertainty in psychology.
The proponents of the heuristics-and-biases program and its critics do battle on four grounds. The first challenge concerns the alleged stability of cognitive illusions. Most so-called cognitive illusions in probabilistic reasoning have been demonstrated using problems represented in terms of probabilities. A growing set of studies, however, has demonstrated that the allegedly stable cognitive illusions can be made to disappear by one of two manipulations: (1) presenting information in natural frequencies or asking questions about frequencies rather than probabilities, and (2) using items that are randomly sampled rather than selected. Hertwig & Gigerenzer (1999) and Fiedler (1988), for example, demonstrated that the conjunction fallacy can be drastically reduced and even made to disappear when the probability format is replaced by a frequency format.
Gigerenzer, Hoffrage, & Kleinboelting (1991) showed that the "overconfidence bias" disappears when participants estimate the number of correct answers instead of the probability that a particular answer is correct (see also May, 1987; Sniezek & Buckley, 1993), and when participants are asked to judge a representative set of items (see Juslin 1994; Sedlmeier, Hertwig, & Gigerenzer, 1998). Likewise, Koehler, Gibbs, & Hogarth (1994) reported that the "illusion of control" (Langer, 1975) is reduced when the single-event format is replaced by a frequency format, that is, when participants judge a series of events rather than a single event. It has also been demonstrated that Bayesian reasoning improves in lay people (Cosmides & Tooby, 1996; Gigerenzer & Hoffrage, 1995) and experts (Hoffrage & Gigerenzer, 1998; Lindsey, Hertwig, & Gigerenzer 1999) when Bayesian problems are presented in natural frequencies (i.e., absolute frequencies obtained by natural sampling) rather than in a single-event probability format. Koehler (1996) has amassed evidence that in some studies judgments were found to be appropriately sensitive to base rates, in others too little, and in yet others too much. His review suggests that in this context, too, the choice of information format and representative sampling play a crucial role in predicting an outcome.
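The effect of natural frequencies on Bayesian inference is easy to see in a worked example. The sketch below uses the mammography problem common in this literature, with the standard illustrative numbers (1% base rate, 80% hit rate, 9.6% false-positive rate); the numbers are assumptions for illustration, not taken from this prospectus:

```python
# Probability format: Bayes's rule applied mechanically.
base_rate = 0.01       # P(disease)
sensitivity = 0.8      # P(positive | disease)
false_alarm = 0.096    # P(positive | no disease)
posterior = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_alarm * (1 - base_rate))

# Natural-frequency format: the same inference as simple counts
# obtained by "natural sampling" of 1,000 cases.
n = 1000
sick = round(n * base_rate)                    # 10 women have the disease
sick_pos = round(sick * sensitivity)           # 8 of them test positive
healthy_pos = round((n - sick) * false_alarm)  # 95 healthy women test positive
posterior_freq = sick_pos / (sick_pos + healthy_pos)

print(round(posterior, 3), round(posterior_freq, 3))  # → 0.078 0.078
```

Both routes give the same answer (about 8%), but the frequency version reduces the computation to one count divided by another, which is the intuition behind the facilitation effects reported above.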
These findings suggest that people’s judgments and decisions are sensitive to the distinction between various representations of information, such as probabilities and frequencies, and to the nature of the sampling process. The fact that many of the alleged cognitive biases can be made to disappear, or at least be reduced, and in some cases inverted, has led to important questions about experimental design and implementation (e.g., Hertwig & Ortmann 2000; Hilton 1995).
The second challenge is Krueger’s (2000) argument that the widespread asymmetric null-hypothesis testing of normative behavior has a built-in bias to identify biases, because the null hypothesis in behavioral decision making is typically a point-specific prediction of some normative principle based on logic, probability theory, or one of the various forms of expected utility theory. The predicament is that any difference between such a theoretically predicted single value and the empirical value observed in an experiment can be made significant if the sample size n is made large enough. Thus, psychology’s way of testing hypotheses stacks the deck in favor of rejecting predictions derived from normative principles (i.e., the null hypothesis). [We note en passant that a similar problem has plagued experimental economics. In addition, many of the "anomalies" documented in economics have been based on corner-point equilibria, e.g., most public good experiments, ultimatum and dictator games, trust games, etc.]
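Krueger’s point can be made concrete with a back-of-the-envelope calculation. The normative point prediction of 0.5 and the tiny 0.01 deviation below are illustrative assumptions, not values from the text:

```python
import math

# Illustrative assumptions: the norm predicts a proportion p0 = 0.5,
# and actual behavior deviates by a trivially small amount.
p0, p_true = 0.5, 0.51
z_crit = 1.96                      # two-sided 5% significance level
sd = math.sqrt(p0 * (1 - p0))      # binomial standard deviation at p0

# A z-test rejects the null once |p_hat - p0| * sqrt(n) / sd exceeds
# z_crit, so ANY fixed deviation becomes "significant" for large enough n:
n_needed = (z_crit * sd / abs(p_true - p0)) ** 2
print(round(n_needed))  # → 9604
```

A deviation of one percentage point -- behaviorally negligible -- is reliably declared "significant" with roughly ten thousand observations, which is why point-specific nulls stack the deck against normative predictions.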
The third challenge concerns the appropriateness of the laws of logic or probability and the various expected utility theories as the relevant norms against which to measure human competence and performance (e.g., Gigerenzer & Todd 1999; Oaksford & Chater 1994, 1996; Oaksford, Chater, & Grainger 1999; Oberauer, Wilhelm & Diaz 1999; Kleiter, Krebs, Doherty, Garavan, Chadwick, & Brake 1997). More specifically, Gigerenzer (1996) has argued that Kahneman and Tversky impose unnecessarily narrow norms of sound reasoning. He argues that most practicing statisticians start by investigating the content of a problem, work out a set of assumptions, and, finally, build a statistical model based on these assumptions. The heuristics-and-biases program starts at the opposite end. A formal principle, such as modus ponens and modus tollens, the conjunction rule, or Bayes’s rule, is chosen as normative, and some real-world content is filled in afterward, on the assumption that only structure matters. The content of the problem is not analyzed in building a normative model, nor are the specific assumptions people make about the situation. Take Birnbaum’s (1983) thoughtful explication of a rational response to the cab problem as an example. This problem involves a cab that is involved in a hit-and-run accident at night. The text provides the information that a witness identified the cab as being blue, along with information about the eyewitness’s ability to discriminate blue and green cabs and the base rates of blue and green cabs in the city. Rather than mechanically plugging these values into Bayes’s formula, as is typically done in the heuristics-and-biases program, Birnbaum started with the content of the problem and made assumptions about various psychological processes a witness may use.
In terms of a signal detection model, for instance, a witness may try to minimize some error function: if the witness is concerned about being accused of incorrect testimony, then she may adjust her criterion so as to maximize the probability of a correct identification. If instead the witness is concerned about being accused of other types of errors, then she may adjust her criterion so as to minimize those specific errors. Obviously, different goals will lead to different posterior probabilities (see Gigerenzer, 1998, and Mueser, Cowan & Mueser, 1999, for the detailed arguments). In a related development, Oaksford and Chater (1994, 1996) have demonstrated that the selection task ought to be understood as a decision-making task under uncertainty rather than a deductive reasoning task. This approach has allowed them to rationalize subjects’ behavior in various selection tasks as optimal data selection.
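The mechanical Bayesian benchmark from which Birnbaum’s analysis departs can be computed in a few lines. The sketch below assumes the standard textbook values for the cab problem (85% of cabs green, 15% blue, witness correct 80% of the time); these conventional numbers are used here only for illustration:

```python
# Mechanical application of Bayes's rule to the cab problem,
# ignoring the content-specific considerations Birnbaum raises.
p_blue = 0.15                   # base rate of blue cabs
p_say_blue_given_blue = 0.80    # witness accuracy
p_say_blue_given_green = 0.20   # witness error rate

posterior = (p_say_blue_given_blue * p_blue) / (
    p_say_blue_given_blue * p_blue
    + p_say_blue_given_green * (1 - p_blue))
print(round(posterior, 2))  # → 0.41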
The fourth challenge to the heuristics-and-biases program is its loose definition of the heuristics allegedly generating the cognitive illusions. More than 25 years after heuristics such as availability and representativeness have been proposed as driving probability and frequency judgments, they remain vague, undefined, and unspecified with respect to both the antecedent conditions that elicit (or suppress) them and also to the cognitive processes that underlie them. This is a curious state of affair given the long-standing concerns even of those close to the heuristics-and-biases approach, to wit, "Heuristics may be faulted as a general theory of judgment because of the difficulty of knowing which will be applied in any particular instance." (Slovic, Fischhoff, & Lichtenstein, 1977, p. 6)
The problem with these heuristics is that they at once explain too little and too much. Too little, because we do not know when these heuristics work and how; too much, because, post hoc, one of them can be fitted to almost any experimental result. For example, base-rate neglect is commonly attributed to representativeness. However, the opposite result, overweighting of base rates (conservatism), is as easily "explained" by saying the process is anchoring (on the base rate) and adjustment. The ability to explain "every" phenomenon post hoc, has serious implications for using these types of heuristics to explain economic biases. The lack of falsifiable process models of heuristics immunizes the heuristics-and-biases program against critique. It also has the unfortunate consequence of slowing down the formalization of models that incorporate psychologically plausible assumptions that acknowledge constraints of time, knowledge, and computational capacity. It is another purpose of this article to free the term "heuristic" from the connotation of "flawed rule of thumb," to sketch progress that has been made toward precise and falsifiable process models of simple rules that make us smart, and to reposition heuristics as the centerpiece of new models of human reasoning, judgement, and decision making. This is, roughly, the ecological rationality program.
Section Two will illustrate these four challenges with examples from the relevant literature.
The key issues will be how these demonstrations have been experimentally implemented, and what kind of theoretical progress has been made to model the conditions under which judgements are correct or incorrect, compared to specific norms.
3. Implications for economic theory and practice.
"We are well-known for setting traps and taking delight at human failure. Haven’t we reached the point of diminishing returns? Demonstrations of one more error for the sake of an error, or one more violation for the sake of a violation, are nothing new. Not only are they not new, they add to an already lopsided view of human competence." (Then-president Barbara Mellers (1996), in a letter to the members of the Judgment and Decision Making Society, the home base of researchers in the heuristics-and-biases tradition)
Matthew Rabin (1998), in this journal, proposed to enrich economic models through empirical assumption making: We should attempt to replace some of the current assumptions in economics with assumptions built from systematic patterns of behavior identified by psychological research. His review emphasized "what psychologists and experimental economists have learned about people, rather than how they have learned it" (1998, p. 12).
We find ourselves in complete agreement with the exhortation that assumption-making ought to be grounded in the reality of human reasoning, judgment, and decision making. However, as one cannot separate theories from the tools that are used to produce them (Gigerenzer 1995), one cannot separate the experimental findings from the manner in which they were generated. The proposed article will present evidence that what have been construed by the heuristics-and-biases program as systematic patterns of biases are, in many cases, rather spurious effects that can be made to disappear easily through relatively simple and straightforward experimental manipulations. The evidence motivates important questions about the appropriate norms, the external validity of the experimental demonstrations on which the heuristics-and-biases program draws, and the dearth of precise and falsifiable process models that specify the antecedent conditions that elicit (or suppress) biases.
Empirical assumption making based on an incomplete knowledge base and without concern for the experimental conditions which produced the results and their external validity is likely to set economics on a detour very similar to the one that judgment and decision making got set on twenty years ago -- a detour that in psychology is increasingly being recognized as such (e.g., Mellers 1996). As much of today’s behavioral economics and finance draws its inspiration and concepts from the heuristics-and-biases paradigm and its experimental demonstrations (e.g., Thaler 1993; Shiller 2000; Goldberg & von Nitzsch 2000), it is important to understand how the evidence for those concepts was generated, and to what extent laboratory results are likely to transfer to real-world phenomena. For example, while indeed the high-failure rate of small businesses or day traders may be due to overconfidence, one can surely think of more precise, and empirically better testable, explanations.
Section Three of the proposed article will address these issues. We note that several prominent experimental economists have brought up related issues, namely Friedman (1998) and Binmore (1999), and that questions of the external validity of experiments has also resurfaced in the context of numerous "anomalies" identified by experimental economists.
BARBER, B.M. AND T. ODEAN. "Trading Is Hazardous to Your Wealth: The Common Stock Investment Performance of Individual Investors." Journal of Finance, 1999 (forthcoming).
------- . "Boys Will Be Boys: Gender, Overconfidence, and Common Stock Investment." Working paper, University of California, Davis, 1999a.
BINMORE, K. "Why experiment in economics?" The Economic Journal, 1999, pp.16 - 24.
BIRNBAUM, M.H. "Base rates in Bayesian inference: Signal detection analysis of the cab problem." American Journal of Psychology, 1983, 85 - 94.
CAMERER, C. "Individual Decision Making." In KAGEL AND ROTH (Eds), Handbook of experimental economics. Princeton: Princeton University Press, 1995, 587 - 703.
COSMIDES, L. AND J. TOOBY. "Are Humans Good Intuitive Statisticians After All? Rethinking Some Conclusions From the Literature on Judgment and Uncertainty." Cognition, 1996, 187 - 276.
CHRISTENSEN-SZALANSKI, J.J. AND L.R. BEACH. "The citation bias: Fad and fashion in the judgment and decision literature." American Psychologist, 1984, 75 - 78.
EVANS, J.St.B.T., NEWSTEAD, S.E., AND R.M.J. BYRNE. Human Reasoning: The Psychology of Deduction. Hillsdale, NJ: Erlbaum.
FIEDLER, K. (1988) "The dependence of the conjunction fallacy on subtle linguistic factors." Psychological Research, 50, 123 - 29.
FRIEDMAN, D. "Monty Hall’s Three Doors: Construction and Deconstruction of a Choice Anomaly." American Economic Review, 1998, 933 - 46.
GIGERENZER, G. "From Tools To Theories: A Heuristic of Discovery in Cognitive Psychology." Psychological Review, 1991, 254 - 67.
------- . "How to make cognitive illusions disappear. Beyond heuristics and biases." In STROEBE AND HEWSTONE (Eds), European Review of social psychology (vol.2). Chichester, UK: Wiley, 1991a, 83 - 115.
------- . "On Narrow Norms and Vague Heuristics: A Reply to Kahneman and Tversky (1996)." Psychological Review, 1996, 592 - 96.
------- . "Psychological challenges for normative models." In GABBAY AND SMETS (Eds), Handbook of defeasible reasoning and uncertainty management systems (vol.1). Dordrecht: Kluwer, 441 - 67.
GIGERENZER, G. AND U. HOFFRAGE. "How to improve Bayesian reasoning without instruction: frequency formats." Psychological Review, 1995, 684 - 704.
GIGERENZER, G., HOFFRAGE, U. AND H. KLEINBOELTING. "Probabilistic mental models: A Brunswickian theory of confidence." Psychological Review, 98, 506 - 28.
GIGERENZER, G. and P. TODD. "Fast and frugal heuristics: The adaptive toolbox. In GIGERENZER, TODD, AND THE ABC RESEARCH GROUP, 1999, 3 - 34.
GIGERENZER, G., TODD, P., AND THE ABC RESEARCH GROUP. Simple Heuristics That Make Us Smart. Oxford: Oxford University Press, 1999.
GOLDBERG, J. AND R.VON NITZSCH. Behavioral Finance. Gewinnen mit Kompetenz. Muenchen: FinanzBuch Verlag, 1999.
HANSON, J.D. AND D.A. KYSAR. "Taking Behavioralism Seriously: The Problem of Market Manipulation," New York University Law Review, 1999, 630 - 749.
HERTWIG, R. AND G. GIGERENZER. "The ‘conjunction fallacy’ revisited: How intelligent inferences look like reasoning errors," Journal of Behavioral Decision Making, 1999, 275 - 305.
HERTWIG, R. AND A. ORTMANN. "Experimental Practices in Economics: A Challenge for Psychologists?" [target article], Behavioral and Brain Sciences, 2000 (forthcoming).
HILTON, D. "The social content of reasoning: Conversational inference and rational judgment." Psychological Bulletin, 1995, 248 - 71.
HOFFRAGE, U. AND G. GIGERENZER. "Using natural frequencies to improve diagnostic inferences." Academic Medicine, 1998, 538 - 40.
JUSLIN, P. "The Overconfidence phenomenon as a consequence of informal experimenter-guided selection of almanac items. Organizational Behavior and Human Decision Processes, 1994, 226 - 46.
KAHNEMAN, D. AND A. TVERSKY. "On the reality of cognitive illusions: A reply to Gigerenzer’s critique." Psychological Review, 1996, 582 - 91.
KLEITER, G.D., KREBS, M., DOHERTY, M.E., GARAVAN, H., CHADWICK, R., AND G. BRAKE. "Do Subjects Understand Base Rates?" Organizational Behavior and Human Decision Processes, 1997, 25 - 61.
KOEHLER, J.J. "The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges." Behavioral and Brain Sciences, 1996, 1 - 53.
KOEHLER, J.J., GIBBS, B.J. AND R.M. HOGARTH. "Shattering the illusion of control: Multi-shot versus single-shot gambles." Journal of Behavioral Decision Making, 1994, 183 - 92.
KRUEGER, J. "The bet on bias: A foregone conclusion? Psycholoquy, 2000, http://www.cogsci.soton.ac.uk/cgi/psyc/newspsy?9.46
LANGER, E.J. "The illusion of control." Journal of Personaliy and Social Psychology, 1975, 311 - 28.
LINDSEY, S., HERTWIG, R., AND G. GIGERENZER. "Communicating statistical evidence." Manuscript submitted for publication, 1999.
LOPES, L.L. "Three misleading assumptions in the customary rhetoric of the bias literature." Theory & Psychology, 1991, 231 - 36.
MAY, R.S. Realismus von subjektiven Wahrscheinlichkeiten: Eine kognitionspsychologische Analyse inferentieller Prozesse beim Over-confidence Phaenomen [Calibration of subjective probabilities: A cognitive analysis of inference processes in overconfidence.] Frankfurt: Lang, 1987.
MELLERS, B. "From the President." J/DM Newsletter, 1996, 3.
MUESER, P.R., COWAN, N., AND K.T. MUESER. "A generalized signal detection model to predict rational variation in base rate use. Cognition, 1999, 267 - 312.
OAKSFORD, M. AND N. CHATER. "A Rational Analysis of the Selection Task as Optimal Data Selection." Psychological Review, 1994, 608 - 31.
------- . "Rational Explanation of the Selection Task." Psychological Review, 1996, 381 - 91.
OAKSFORD, M., CHATER, N. AND B. GRAINGER. "Probabilistic Effects in Data Selection." Thinking and Reasoning, 1999, 193 - 253.
OBERAUER, K., WIHELM, O., AND R.R. DIAZ. "Bayesian Rationality for the Wason Selection Task? A Test of Optimal Data Selection Theory." Thinking and Reasoning, 1999, 115 - 44.
ODEAN, T. "Do Investors Trade Too Much?" American Economic Review, 1999, 1279 - 98.
RABIN, M. "Psychology and Economics." Journal of Economic Literature, 1998, 11 - 46.
SEDLMAIER, P., HERTWIG, R., AND G. GIGERENZER. "Are judgements of the positional frequencies of letters sytematically biased due to availability?" Journal of Experimental Psychology: Learning, Memory, and Cognition, 1998, 754 - 70.
SHILLER, ROBERT J. Irrational Exuberance. Princeton: Princeton University Press, 2000.
SIMON, H.A. "Rational choice and the structure of environments." Psychological Review, 1956, 129 - 38.
SIMON, H.A. "Invariants of human behavior." Annual Review of Psychology, 1990, 1 - 19.
SLOVIK, P., FISCHHOFF, B. AND S. LICHTENSTEIN. "Behavioral Decision Theory." Annual Review of Psychology, 1977, 1 - 39.
SNIEZEK, J.A. AND T. BUCKLEY. "Decision errors made by individuals and groups. In N.J. CASTELLAN (Ed) Individual and group decision making. Hillsdale, NJ: Erlbaum, 1993, 120 - 150.
THALER, R.H. Quasi Rational Economics. New York: Russell Sage Foundation, 1991.
THALER, R.H. (Ed) Advances in Behavioral Finance. New York: Russell Sage Foundation, 1993.
TVERSKY, A. AND D. KAHNEMAN. "Judgement under uncertainty: Heuristics and Biases." Science, 1974, 1124 - 31.
TVERSKY, A. AND D. KAHNEMAN. "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment." Psychological Review, 1983, 293 - 315.