Last semester I taught Judgement and Decision Making using Scott Plous's book The Psychology of Judgment and Decision Making. I found the book quite frustrating in that it followed the "rhetoric of irrationality" currently favoured within JDM right up to the last chapter, where it finally took a more critical view of that stance. While discussing a number of the various 'biases' proposed by JDM researchers with my students, it struck me that it should be possible to model many of these situations to find the conditions under which the 'biases' are actually functional (I write 'biases' in scare quotes because in most cases JDM researchers fail to think of biases as coming along with heuristics and task environments).
One example of just such a potentially modelable phenomenon is that of postdecisional dissonance. In the Plous book it is explained in terms of cognitive dissonance, which is a mechanism for maintaining a relatively coherent set of beliefs. Yet, it seems that there is an alternative explanation. The first example of postdecisional dissonance Plous talks about is from Knox and Inkster 1968 who showed that people tend to be much surer that ‘their’ horse will win after they have bet upon it as opposed to just before betting. The same effect was found in the case of voting by Frenkel and Doob in 1976.
The obvious thought is that in some cases such an increase in belief that the chosen alternative is the best may help to improve the choice. For example, it is quite likely that people just after getting married are much surer that their new spouse is the right one than just before marrying. Being much more certain of one’s choice in that situation is likely to lead to the investment of additional effort into making the marriage a success. But, even in cases where this effect is not likely to take place, it is possible to think of a functional explanation.
Consider a possible scenario in which it is possible to change one's preference after the initial choice, but only at some cost. Such situations are very common. For example, having bought a particular brand of television one might go back to the store and return the recently purchased set in order to buy a different one. Consider further the possibility that one is in possession of incomplete information and that there are at least two alternatives that are almost equally attractive. This is a condition that, given bounded rationality, is going to be the case very often. Indeed, the cases where one of the alternatives is much better than the others are not going to be very interesting in terms of decision making. In this kind of situation it is possible to end up vacillating between the available alternatives as new information comes in affecting their perceived utility, or as we focus upon particular aspects of the alternatives making them alternately appear superior. While this situation is bad enough prior to an initial choice being made, it is potentially downright disastrous afterwards. One could, potentially, end up repeatedly changing one's preference and paying the cost of switching.
Postdecisional dissonance could be a means to stop that from occurring. The mechanism is that, once a choice is made, the chosen alternative’s subjective expected utility is increased in order to make it unlikely that random fluctuations in the SEU will lead to the unchosen alternative coming to be perceived as superior. At the same time, should any significant change in the SEU of either alternative occur, the mechanism allows for a change in the choice. In effect, once a choice is made, that choice will only be second guessed if there is a significant change in the SEUs, hopefully sufficient to outweigh the cost of changing one’s mind. Each of the various variables I have mentioned ought to be capable of being modelled using a fairly simple set-up making it possible to see just what sorts of conditions are necessary for postdecisional dissonance to be functional in this way. My suspicion is that these conditions are going to be fairly common.
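The set-up really is simple enough to sketch directly. The following toy simulation is my own illustrative construction (all parameter names and values are assumptions for the sake of the sketch, not anything from the literature): two almost equally attractive alternatives, noisy perceived SEUs, a cost of switching, and a post-decisional "boost" added to the chosen option.

```python
import random

def simulate(boost, switch_cost=1.0, noise=0.5, steps=1000, seed=0):
    """Count costly preference switches between two near-equal options.

    boost: post-decisional increment added to the chosen option's
    subjective expected utility -- the 'dissonance' mechanism.
    """
    rng = random.Random(seed)
    true_seu = [10.0, 9.9]            # almost equally attractive alternatives
    chosen, switches = 0, 0
    for _ in range(steps):
        # Perceived SEUs fluctuate as new information comes in.
        perceived = [u + rng.gauss(0, noise) for u in true_seu]
        perceived[chosen] += boost    # dissonance inflates the chosen option
        other = 1 - chosen
        # Switch only if the other option now looks better by more
        # than the cost of changing one's mind.
        if perceived[other] - perceived[chosen] > switch_cost:
            chosen = other
            switches += 1
    return switches

print("switches without dissonance:", simulate(boost=0.0))
print("switches with a dissonance boost:", simulate(boost=2.0))
```

With the boost at zero the agent switches dozens of times over the run, as noise keeps pushing one option above the other; a modest boost holds the choice stable while still permitting a switch should the SEUs genuinely diverge by more than the boost plus the switching cost.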
While this is a model of just one of the very many mechanisms that Plous mentions, it seems to me that it ought to be possible to pursue this line of attack for many of the others. I wonder if this kind of thing is actually being done by anyone. I should ask my KLI colleague, Joanna Bryson, as she is into modelling and, therefore, may be able to give me an answer.
Joanna Bryson
July 25, 2010
Hey Konrad —
So certainly you are right about the problem of persistence vs. dithering in action selection. I’ve just had another article come out on exactly that problem http://is.gd/dGsUz will take you to the journal page.
But I’ve had a theory about the persistence of individually-defining traits since hearing Andreas Wilke talk about the number of false positives people have with recognising patterns. It’s one of those inclusive fitness arguments that the social sciences still get nervous about. The idea is that for humanity it makes sense for individuals to individuate — to stake out well-dispersed regions of the space their culture has identified as worth exploring. Some of the individuals will win, others will lose. The winners will get prestige and be imitated more than the losers, so the society as a whole benefits (even the “losers”) and you have a better mechanism of concurrent search.
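A crude way to see why dispersal pays off as concurrent search (my own toy construction, not one of the actual models mentioned below): agents hill-climb on a multimodal "cultural" landscape, and the best result found by a well-dispersed group, which imitation then spreads, beats that of a group whose members cluster in one region.

```python
import math

def quality(x):
    # A multimodal landscape: several local peaks of differing height.
    return math.sin(x) + 0.3 * math.sin(3 * x) + 0.1 * x

def hill_climb(x, step=0.05, iters=200):
    # Simple local search: each agent climbs to its nearest peak.
    for _ in range(iters):
        best = max((x, x + step, x - step), key=quality)
        if best == x:
            break
        x = best
    return quality(x)

# Well-dispersed society vs. one clustered in a mediocre region.
dispersed = [hill_climb(x) for x in [0.5, 2.5, 4.5, 6.5, 8.5]]
clustered = [hill_climb(x) for x in [4.3, 4.4, 4.5, 4.6, 4.7]]

print("best dispersed:", max(dispersed))
print("best clustered:", max(clustered))
```

The "winners" here are just the agents that happened to start near good peaks; in a dispersed society someone usually does, even though many individual dispersed agents end up on worse peaks than they would have in the crowd. That is the sense in which even the "losers" benefit from the society's staking out well-separated regions.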
I have a PhD student (Marios Richards) and an MSc student (Daniel Taylor) working on modelling various aspects of this right now — but with quite abstract agents.
I’m sure whatever branch of EU funding supports philosophers would love us to death if you & I write a grant about this. Both Daniel & Marios need funding!
Konrad Talmont-Kaminski
July 25, 2010
I think the theory you propose and the one that I put forward need not be in conflict. Both mechanisms could well be functioning. Put in bounded rationality terms, one is a matter of the distribution of satisfaction levels within a population capable of social learning. The other concerns the mechanism needed to avoid dithering once the satisfaction level has been obtained by any particular individual.
Randy Mayes
July 25, 2010
Nice post Konrad. I taught roughly the same course using Jonathan Baron’s Thinking and Deciding, most recent edition. Really enjoyed using it, and Baron is an exceptionally responsive guy when it comes to answering questions. It doesn’t come through so much in the text, but he is very skeptical, e.g., of the widely accepted notion of loss aversion, believing that the phenomena are adequately explained by the status quo bias combined with DMU (diminishing marginal utility). Also have had some interesting conversations regarding the notion of confirmation bias, which he thinks is a terrible term, as it suggests the existence of an actual bias to confirm, for which there is no evidence. BTW, Johnson-Laird and Clare Walsh have a short fascinating paper out called “Changing your Mind” which challenges the minimalist conception of belief change in favor of an explanatory conception. Very smart stuff.
Konrad Talmont-Kaminski
July 25, 2010
I’m having a look at the Baron book after buying it as one of the options for next year. I don’t mind there being a discourse between the text and the lecturer, but it was just a bit too much discourse for me with Plous. I think the basic problem with JDM is the degree to which it is atomised with regard to the mechanisms it proposes. Sometimes it seems like they propose a new mental mechanism for every new empirical methodology they think up. Ridiculous. And it just seems obvious to me that starting from a bounded rationality perspective would order a lot of things for them in terms of them being able to know that where there is a REAL bias, there must be a heuristic and a task environment, etc. (and vice versa!) It just makes you think in a much more systematic way about what the data is revealing. I suspect the Baron book is going to be way too big for us to cover in what is only a half-semester course and my current favourite for next year is Hardman’s JDM: A Psychological Perspective. By the end of page 5 he’s mentioned both Simon and Gigerenzer. Good man. I’ll have to read it all the way through before I decide, of course.
I don’t know the paper you mention but I find Johnson-Laird to be one of the most interesting and reasonable of the people approaching reasoning from a traditional perspective. I read his How we Reason a while back and I think that what he says could be actually made to fit nicely with Simon. What I find interesting is that (I believe) both Johnson-Laird and Jonathan Evans are Wason-trained and they are both at the reasonable end of that whole tradition. I mean, Evans is one of the dual process gurus (and I am very sceptical of that approach) but his approach is really quite open-minded and thoughtful. He got chewed out by Gigerenzer after he’d commented on one of G’s articles and I could not help but feel that while some of G’s responses were correct, G was also being a bit unfair to Evans.
One final thing, while reading Plous I could not help but keep on thinking of Paul Rozin’s work and just how thorough and careful he is compared to much of the JDM work.
Randy Mayes
July 25, 2010
Have you considered Robyn Dawes’ Rational Choice in an Uncertain World? There is a new version with Reid Hastie having apparently done a thorough update. I just ordered an exam copy, for possible use in our new class on induction. I think I mentioned I read How We Reason on your recommendation and thought it was outstanding. I think Gigerenzer brought a much needed perspective to the biases and heuristics tradition, but I’ve found some of his work to be screechy and not very careful. I recently had my students read an (admittedly popular) article he wrote called “I think, therefore I err” and my own view is that it gets most of its work done by simply equivocating on the term ‘error’, where sometimes it means the wrong answer and sometimes it means a reasoning or calculation mistake. Anyway, I see Johnson-Laird as staking out a very sensible, but still exciting middle ground there. Just very non-tendentious and clear about the problems of testability, etc.
Konrad Talmont-Kaminski
July 25, 2010
I’ve had a copy of the Dawes book for ages and I’m not impressed (not that I’ve seen the latest version, mind you). I think part of my problem is that the psych students I’m teaching JDM to have it as one of their first classes, so they’re pretty green. Both Dawes and, even more, Baron are books for later years, I think. I’ve been reading the Hardman book and I can heartily recommend it. He does the right thing and starts off discussing what rationality is and how difficult it is to put your finger on this basic issue! Seriously, I think you need to look at it.
I’ve never been big on getting exam copies of books as I know that almost none of my students are going to buy the book since the cost is prohibitive for them. I would feel like I was conning the publishers if I tried to get an exam copy. Mind you, in effect, I end up spending a pretty packet on books.
Gigerenzer can be ‘screechy’ as you put it. I think the reason is that he is basically right and has been largely ignored at least till recently. As for his article, I rather liked it when I read it and did not spot the problems. If he is making the process/product equivocation, that is quite serious as he chews out the maximising crew on just this issue. I should add that I think that Gigerenzer is not being radical enough. :-) My motto, after all, is “Heuristics all the way up!” – not something that Gigerenzer is in a habit of putting front and centre.
It is a pity that you’re not coming this year to Kazimierz. In line with my motto, I’ll have a fun little presentation trying to show that the way people actually use SEU is a heuristic (and I suspect the same can be said for a large class of seemingly non-heuristic solutions). Actually, there should be quite a bit of Simon-talk at the meeting quite apart from my own fetish. OK, so that may not be so apart from my fetish at all, actually.
Randy Mayes
July 26, 2010
I’ll be sorry to miss that talk, but I hope it will be online afterwards or that you’ll send it my way. Thanks for the Hardman recommendation. I like your motto. It reminds me of “Turtles all the way down!” (which is probably your intention). I’m not sure exactly what you mean by it, though. Do you completely reject logic and the axioms of probability as setting a normative (though not necessarily a prescriptive) standard? I don’t think of their normativity in a priori terms, but I do think they provide a basis for judging the effectiveness of our heuristics.
Konrad Talmont-Kaminski
July 26, 2010
Good old Bertie, of course. Or was it William James who said that? The historical record is somewhat hazy on it. There are two issues: the normative status of logic and whether people make deductive inferences. The answer to the first is provided by the argumentative theory of reasoning put forward by Hugo Mercier (he and Dan Sperber will have a BBS article out on this within 6 months). The basic idea is that these kinds of abilities arose in order to detect cheating in communication. The further step is to say that the intersubjectively shared explicit norms linked to that kind of reasoning are also a result of this need. This does not undermine the objectivity of logic but it explains why we care about it. Also, it shows that logical norms are not opposed to pragmatic norms but are, actually, an outgrowth (although there may be disagreement between what particular norms suggest at times, but that is quite normal when it comes to products of evolution – same thing happens with different cognitive mechanisms, after all). Ultimately, I think that evolution is the one and only source of normativity. So, logic may be necessary to judge the normativity of heuristics but only because we need to use logic to understand this normativity. Not because logic lies behind the normativity.
The next thing is the question of deductive inferences. Here I take two lines. Firstly, I think that deductive arguments are basically facts (about the connections between certain facts/propositions) that, once we are aware of them, we use to infer inductively. If you’re wondering what I mean by this, it is basically the line that Gilbert Harman takes in Change in View. The idea is that, for example, when you detect an inconsistent set of propositions in your beliefs, you have a range of different responses available to you and you choose between them inductively (yes, I define induction very broadly and, yes, I see it as coextensive with heuristics – it all ties back to the problem of induction). Secondly, I think that our ability to appreciate deductive arguments must be due to the recruitment of mental mechanisms that were originally developed for other purposes. And I’m quite interested to see what research reveals about these mechanisms.
Maria Frapolli takes a different line from Harman: she does think that we make deductive inferences, but her line ends up allowing my heuristics-all-the-way-up line anyway. She has a really cool paper on this in the collection of articles that Marcin and I just edited.
Another line of attack for the whole question of heuristics is empirical. You define heuristics, identify a set of their characteristics and start looking. This is kind of the tack that Bill Wimsatt’s work invites. In his Re-Engineering Philosophy for Limited Beings he identifies six different traits that all heuristics share, making this approach possible. And, of course, if you go back to Simon, he talked about how scientific discovery is driven by heuristic reasoning. He even modelled it on computers, showing how a particular high level heuristic could lead to the discovery, in one example, of Boyle’s law (IIRC).
And it all goes back to Hume’s problem, ultimately.
Randy Mayes
July 26, 2010
Funny, this is where I was expecting you to say something I would disagree with. Obviously, there is some speculative stuff here, but I don’t find it outlandish, especially when one compares it to the fairy tales about human cognition that must be told to support a non-consequentialist account of the normativity of logic. Harman’s book was formative for me when I was writing my dissertation on explanation, though I thought at the time that he, like Quine and Sellars, put too much faith in an unanalyzed notion of an ‘explanatory relation’ to describe just how we decide which way to go with our logical inferences. (Of course, the positive spin on this is that they framed the question for psychologists to answer. The Johnson-Laird article, “Changing your mind”, provides direct support for Harman’s view.)
I actually already have the BBS article by Sperber and Mercier, sans peer commentary, and look forward to reading it soon. I’m working on a little talk right now provisionally entitled “Not by argument alone” which tries to put into historical context my view that we do not actually rationally revise our beliefs on the basis of argument alone, but rather require that any argument be supported by answers to explanatory questions that arise from accepting an argumentative conclusion. So, e.g., put in the context of the history of modern philosophy, we do not accept Berkeleian phenomenalism because it leaves unanswered the question of where our perceptions come from. Similarly, we do not accept Platonic conceptions of the Good or Concepts, because they provide no plausible explanation of how we can come to know these sorts of things. Etc. Essentially it is the Principle of Sufficient Reason, but bifurcated into an evidentiary and explanatory sense and adapted to a non-foundationalist account of the growth of knowledge. I don’t know Frapolli, but it sounds like I need to read her.
Manuel "Moe" G.
July 29, 2010
That is why it is not always wise to use the culturally defined notion of rationality, and why there is a benefit to specifically stating (1) the failure mode of decision that you are trying to avoid and (2) how you are modeling the (2A) cost of falling victim to the failure mode and the (2B) cost of remedy.
Konrad Talmont-Kaminski
July 29, 2010
Manuel,
I agree that we have to, in effect, ask ourselves the question “Functional for whom?” The way you look at it, however, suggests that the answers to be given are somewhat arbitrary. This is not the case as functionality will, ultimately, be tied to helping maintain the stability of some process, be it some aspect of the homeostasis of the individual in question or at some other level. That stability issue will then translate into the adaptiveness of the mechanism in question, be the relevant evolutionary analysis genetic or cultural, revealing for whom or what (if indeed anything) it is actually functional.
You should have also mentioned your fuller analysis of this issue on your own blog at http://manuelmoeg.blogspot.com/2010/07/can-it-be-irrational-to-prize.html
Notice that I have not mentioned anything about the rationality of the mechanism. If you point out that I was talking about bounded rationality, I should aver that the term is not one I am altogether happy with, as Simon’s position reached far beyond what is normally considered in terms of rationality of any flavour. If you press for my own stance I will have to say that rationality, like ethical norms, is an intersubjective, explicit norm that helps to maintain stable communities. As such it partly reflects the underlying biological evolutionary considerations, at the same time as it is somewhat free to alter as fits the relevant cultural context and the cultural evolutionary considerations that go with it. What it, in effect, means in this case is that it may well be rational to alter one’s evaluation of the probability of an outcome due to postdecisional dissonance, but that this tendency becomes problematic once we attempt a proper understanding of the actual probabilities, at which point the postdecisional dissonance may become disadvantageous as it clashes with explicitly stated norms of rationality. The apparent inconsistency is due to our notions of rationality normally including both efficacy and consistency, a combination that is sometimes in itself inconsistent.
Manuel "Moe" G.
July 29, 2010
Thank you for the helpful reply, but my profound ignorance prevents me from getting much from it. I am self-taught exclusively from an engineer’s perspective of decision making from Decision Analysis texts [ http://en.wikipedia.org/wiki/Decision_analysis coined in 1964 by Ronald A. Howard http://en.wikipedia.org/wiki/Ronald_A._Howard ]. So I beg your forgiveness if I misunderstand you.
Thank you for referring to my post at http://manuelmoeg.blogspot.com/2010/07/can-it-be-irrational-to-prize.html . Your comment made me try to make some improvements and clarifications.
Konrad Talmont-Kaminski
July 29, 2010
Given your engineering background I can but suggest that you look at the work of Herbert Simon, which underpins much of my own thinking, and which is very much written from an engineering point of view. There’s rather a lot of it but the same basic outlook can be found throughout.