I was planning on reading this years ago, but it was in too much demand for the local library system to have many available copies, and my reading dropped off drastically after I stalled on blogging about one book. Perhaps over that time I continued reading enough of the “heuristics & biases” literature that too much of it was already familiar. Or worse: I came in knowing that the “replication crisis” had already devastated much of the priming literature which Kahneman declares we “have no choice” but to believe. The initial example of completing words with a missing letter is plausible, but I don’t know whether it has been replicated. He highlights John Bargh’s elderly-primed slow-walking study, which in particular has come in for a lot of criticism. He even mentions one of the later studies which says an “interaction” can give an opposite result, without considering whether that undermines the first study. I once would have had a positive association with Roy Baumeister, cited here, but now I can’t help but think of him as someone who (unlike Kahneman) refuses to recognize the problem. He also mentions (as he did in “Judgment Under Uncertainty”) that people neglect just how important sample size is in keeping noise from determining a study’s results, and the replication crisis is now largely about findings produced by noise-mining. In another now-embarrassing bit, he references the study declaring the “hot hand” to be a myth, and how coaches refused to take it seriously because they trusted their intuition, although newer papers claim the original paper was mistaken. To that I might add the book’s uncritical citation of Thaler & Sunstein’s “Nudge” claim that defaults have a large effect on organ donation. Stuart Buck points out that those numbers are only for sign-ups rather than actual donations, while Kieran Healy argues that the real determining factor is not defaults but organizational differences.
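On the sample-size point, here is a minimal simulation sketch of my own (not from the book): even with a fair coin and no real effect at all, small samples produce “extreme” results far more often than large ones, which is exactly what noise-mining exploits.

```python
import random

# Law of small numbers: a fair coin (no real effect), many "studies" of
# size n. Count how often a study shows a heads-rate of at least 70%.
def share_extreme(n, studies=20_000, cutoff=0.7):
    hits = 0
    for _ in range(studies):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if heads / n >= cutoff:
            hits += 1
    return hits / studies

for n in (10, 50, 200):
    # Extreme "findings" are common at n=10 and vanish as n grows.
    print(n, share_extreme(n))
```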

The anchoring study was part of why I decided to stop reading fiction years ago*. Given the replication crisis, is that still credible? There are two different mechanisms given for this effect: Amos’ preferred “System 2” theory of “adjustment” vs. Kahneman’s intuited “System 1” theory, which he later attributed to priming. The latter is now undermined, but I really don’t know about the former. I suppose I should feel better about slacking, but I already know that people (particularly optimists, whom Kahneman recommends choosing to be even if their beliefs are false) choose to believe things that make them feel better and confirm their decisions.
*As my blog indicates, I haven’t been reading as much non-fiction ever since I read a book and never got around to typing up my notes. The fiction I’ve read after I started slacking on that commitment has mostly been short stories online, but I have since read the PDFs of two novels (“Psycho” and “The Bridge of San Luis Rey”).

One bit I was confused on: in the study on Israeli judges, are 65% of all parole requests considered after meals granted, or are 65% of granted requests those considered after meals? There were other bits presenting various studies that I got somewhat confused by and had to re-read. I suppose that all goes to support his point about how mentally taxing System 2 is. I also wonder if I might have been “reading it wrong” by actually doing some mental calculation on the sample problems, whose subjects usually just give a quick intuitive answer. But my intuition often failed to come up with the answers Kahneman says are normal (which is not to say it consistently came to the “correct” answer; he didn’t bother to present any such thing, and when there isn’t an objective answer there’s no way to show intuitions are incorrect). He has many examples which he says should be easy to give an intuitive answer for, but they actually weren’t for me because I’m terrible at estimation.
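To make my confusion concrete, here is a toy sketch with invented counts (not from the actual study) showing that the two readings of “65%” are different conditional probabilities:

```python
# Invented counts (not from the study) for parole decisions, split by
# whether the request was heard right after a meal break.
granted_after_meal, denied_after_meal = 65, 35
granted_other, denied_other = 130, 170

# Reading 1: of requests heard after meals, what share were granted?
p_granted_given_meal = granted_after_meal / (granted_after_meal + denied_after_meal)

# Reading 2: of requests that were granted, what share came after meals?
p_meal_given_granted = granted_after_meal / (granted_after_meal + granted_other)

print(p_granted_given_meal)  # 0.65
print(p_meal_given_granted)  # ~0.33: the same "65" supports two claims
```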

One of Kahneman & Tversky’s most famous questions was the one involving Linda, the feminist bank teller. Steve Sailer often criticizes that example by noting humans are social creatures who assume a detail in a sentence was included for a reason (this is even a rule of discourse). Kahneman seems to be aware of that angle in his discussion of the problem about the baseball & bat whose total cost is $1.10: “the people who give the intuitive answer have missed an obvious social cue; they should have wondered why anyone would include in a questionnaire a puzzle with such an obvious answer”. Kahneman notes elsewhere that people tend to give more accurate answers when they have the “broad” perspective from seeing different possibilities at once, as this engages System 2 and causes people to notice how the options differ (or don’t). The surprising thing is that people continued to give wrong answers when they had just the two options for Linda (the “short form” version of the problem), and since one is a logical subset of the other I’m fine with saying people make a logical error. It isn’t just laymen like Sailer who have made such critiques; Kahneman notes that it has been controversial among others in academia, with some arguing that people interpret “probability” to mean “plausibility”, but things objectively more probable should also be more plausible to rational people. It is interesting to note that asking problems in terms of “how many” out of 100 results in better accuracy than asking in terms of percentage probabilities. Kahneman thinks that getting people to form a mental image of concrete quantities is the reason for that improvement.
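For concreteness, here is a quick sketch of my own covering both puzzles: the bat-and-ball arithmetic, and the conjunction rule the Linda problem trades on (the probability figures below are hypothetical):

```python
# Bat & ball: together they cost $1.10 and the bat costs $1.00 more.
#   ball + (ball + 1.00) = 1.10  =>  ball = 0.05, not the intuitive 0.10
total, difference = 1.10, 1.00
ball = (total - difference) / 2
bat = ball + difference
print(ball, bat)  # 0.05 1.05 (up to float rounding)

# Conjunction rule behind Linda: P(teller AND feminist) can never exceed
# P(teller), because the conjunction is a subset. Numbers are hypothetical.
p_teller = 0.05
p_feminist_given_teller = 0.60
p_both = p_teller * p_feminist_given_teller
assert p_both <= p_teller  # holds for ANY choice of probabilities
```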

The Linda example is one of many which rely on stereotypes, at which point I feel obligated to point out Lee Jussim’s work on stereotype accuracy, but that’s more directly relevant to whether Linda is more likely to be a feminist or a non-feminist (an “easier question” for heuristics to answer than noticing logical subsets of bank tellers). All the same, Jussim has also noted that people are able to set aside their stereotype-derived priors when they have access to more specific information. Kahneman isn’t entirely anti-stereotype or dismissive of their accuracy, as he calls resistance to stereotypes a positive social norm which has the downside of resulting in suboptimal judgments. He doesn’t think it’s just rubes who use stereotypes, as he notes that when he was teaching in Oregon his colleague Robyn Dawes (an old favorite of this blog), to his chagrin, fell for a problem designed to evoke the “representativeness heuristic” (in this case drawing on stereotypes of nerds). Part of that example involved a clinician’s personality analysis of the subject being discussed, which Kahneman indicts people for taking too seriously even though the description notes that such analyses are of uncertain accuracy. Someone like Sailer would object that people assume that bit of info must be valid or else they wouldn’t be presented with it, but there are lots of inaccurate bits of info out there which people actually are presented with.

One critic Kahneman seems to respect is Gary Klein, who values expert intuition much more than Kahneman (or Dawes, I might add). The two engaged in “adversarial collaboration”, which (after a lot of work) led to agreement on when to trust expert intuition: “an environment that is sufficiently regular to be predictable” & “an opportunity to learn these regularities”. This got me thinking, because one of the examples of supposed experts who systematically overestimate their abilities is stock-traders (whose value-added seems to depend on a rejection of the Efficient Market Hypothesis). It seems to me that these people should be able to examine their own performance in terms of objective numbers, and while stocks move around in a seemingly random walk, couldn’t that also constitute a “sufficiently regular” environment in a sense? So why don’t stock-pickers learn the limits of their own abilities vs. chance? Perhaps the Klein & Kahneman paper defines its terms more precisely to show exactly why certain professions fit and others don’t. One pattern in stock prices Kahneman does notice is that they tend to go down after the CEO receives an award from the press. He seems to think this is caused by the CEO being overconfident or distracted from their core mission, but it strikes me as the “Sports Illustrated curse” which Kahneman dismisses elsewhere as just regression to the mean: awards are received precisely when the outlook is rosiest for the company.
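To illustrate that reading, a small simulation of my own (invented parameters): select the top performers in one period, when skill and luck both peaked, and their next-period average falls back toward the mean even though nothing about them changed.

```python
import random
random.seed(0)

# Each firm has a fixed skill; each period's result adds independent luck.
N = 10_000
skills = [random.gauss(0, 1) for _ in range(N)]
period1 = [s + random.gauss(0, 1) for s in skills]
period2 = [s + random.gauss(0, 1) for s in skills]

# "Award winners" are the top 1% of period 1: high skill AND high luck.
cutoff = sorted(period1)[-N // 100]
winners = [i for i in range(N) if period1[i] >= cutoff]

avg1 = sum(period1[i] for i in winners) / len(winners)
avg2 = sum(period2[i] for i in winners) / len(winners)
print(avg1, avg2)  # period 2 is lower: the luck component doesn't repeat
```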

A big part of the reason Kahneman won an economics Nobel (shared with experimental economist Vernon Smith, who has a study cited in the book) was his introduction of “prospect theory” as an alternative to traditional “decision theory” based on a rational utility-maximizer. I was impressed that Kahneman acknowledges it has blind spots of its own: its big innovation is adding reference points which have an assigned utility of 0 (with large negative utility for losses and smaller positive utility for gains), but actual regret and disappointment don’t match up with that in many cases. Both utility & prospect theory assume that people evaluate options independently and then choose the preferred one, whereas regret (whose anticipation affects decisions) depends on the option one could have taken but did not. Others have attempted alternatives which add regret & disappointment, but these have not demonstrated enough value for the broader community to justify complicating the models. Kahneman calls prospect theory “lucky” in its comparative success at getting adopted, rather than simply more “true”.
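For reference, the value function Tversky & Kahneman estimated in their later (1992) cumulative prospect theory paper makes the asymmetry concrete; the parameters below are their published estimates, not anything from this book:

```python
# Tversky & Kahneman's (1992) estimated value function: outcomes are coded
# as gains or losses relative to a reference point worth 0; gains are
# concave, losses convex, and losses are weighted ~2.25x as heavily.
ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    """Subjective value of an outcome x measured from the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

print(value(100))   # ~57.5
print(value(-100))  # ~-129.5: a same-sized loss hurts over twice as much
```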

A lot of my reading here has engaged in niggling critiques and flaw-spotting, but that’s partly because so much of the (mostly good) content I had heard before, and the more recent criticisms are just what I find more interesting now. This is a very readable presentation of his ideas for a popular audience; much more so than his edited collection “Judgment Under Uncertainty: Heuristics and Biases”, which I checked out years ago and then had to return before I could finish. And I made sure to put this positive note at the end because people remember endings disproportionately: enough to prefer an experience in which milder pain follows the initial (more severe) pain over one that stops right after the same initial pain.

UPDATE: Mere hours after hitting publish, I heard about Daniel Lakens’ critique of the hungry judge study, via Scott Alexander.
