Framing Bias and More Information

Brad Flansbaum writes about the difficulty patients and physicians have in correctly interpreting survival odds. His post is based on an interesting study of framing bias in medical decision making. He notes that physicians in particular may have trouble communicating small survival chances in meaningful language, and seem to systematically overstate the chances of success as compared to patients. In the paper he cites, two-thirds of physicians attached the label “clearly better” to a one-third reduction in the relative risk of death, an absolute mortality risk reduction of 2%. Interestingly, around half of the patients in this study attached the label “clearly better” to this same risk reduction.

Brad is particularly worried about the ability of physicians to deliver information to patients in a meaningful format, which is one key component of good treatment decisions. How do we improve decision making?

I think that the best answer is more types of information to inform decisions. We need to add quality of life and cost to the mix to arrive at better decisions. Better communication about survival chances would be an improvement, but that is only part of the answer, and in and of itself is unlikely to reduce costs as much as many seem to assume. The belief that better prognostication of death will lead to great cost reductions is likely the “fool’s gold” of health reform. We need to add information on a treatment’s impact on quality of life, as well as explicitly discussing the costs of the therapy. More variables to consider, each with uncertainty, provide a more realistic setting in which to make treatment decisions. Only by learning to more directly encounter and talk about the trade-offs between survival, quality of life, and cost can we hope to arrive at a reasonable answer to the bottom-line questions: “Is it worth it?” “Should we do it?” My book has a fairly detailed discussion of these issues in chapters 3, 5, and 6.


Why no one seems to change their mind

Jonah Lehrer has an interesting post reviewing social psychology literature suggesting that humans do not reason in order to decide, but instead to be able to argue with others. He reviews experiments conducted by Amos Tversky and Thomas Gilovich that found no statistical evidence for the ‘hot hand’ in professional basketball, first using an analysis of the Philly 76ers of the 1980s (they say Andrew Toney was not, statistically speaking, a streaky shooter, which is, by the way, obviously false!); they later confirmed the non-streak reality with an analysis of the Boston Celtics. However, no one believed the results of the study; all basketball fans know that players get hot hands, and that some are notoriously streaky. Lehrer asks:

Why, then, do we believe in the hot hand? Confirmation bias is to blame. Once a player makes two shots in a row – an utterly unremarkable event – we start thinking about the possibility of a streak. Maybe he’s hot? Why isn’t he getting the ball? It’s at this point that our faulty reasoning mechanisms kick in, as we start ignoring the misses and focusing on the makes. In other words, we seek out evidence that confirms our suspicions of streakiness. The end result is that a mental fiction dominates our perception of the game.

The larger question, of course, is why confirmation bias exists. This is the sort of mental mistake that seems ripe for fixing by natural selection, since it always leads to erroneous beliefs and faulty causal theories. We’d be a hell of a lot smarter if we weren’t only drawn to evidence that confirms what we already believe.
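The statistical point behind the hot-hand studies can be illustrated with a quick simulation (a hedged sketch with made-up parameters, not a reconstruction of Tversky and Gilovich’s actual analysis): a shooter whose attempts are fully independent, with the same make probability every time, hits at essentially the same rate after a make as after a miss, even though the resulting sequence contains plenty of runs that look like streaks.

```python
import random

def simulate_shooter(p_make=0.5, n_shots=100_000, seed=42):
    """Simulate a shooter whose attempts are independent coin flips.

    Returns the hit rate after a made shot and after a missed shot.
    For an independent shooter these should be nearly identical:
    no 'hot hand', despite apparent streaks in the raw sequence.
    """
    rng = random.Random(seed)
    shots = [rng.random() < p_make for _ in range(n_shots)]

    # Condition each shot on the outcome of the previous one.
    after_make = [cur for prev, cur in zip(shots, shots[1:]) if prev]
    after_miss = [cur for prev, cur in zip(shots, shots[1:]) if not prev]

    return (sum(after_make) / len(after_make),
            sum(after_miss) / len(after_miss))

rate_make, rate_miss = simulate_shooter()
print(f"hit rate after a make: {rate_make:.3f}")
print(f"hit rate after a miss: {rate_miss:.3f}")
```

Both conditional rates come out near the underlying make probability. The confirmation-bias trap Lehrer describes is that we notice the runs of makes and forget the misses, so the streaks feel causal even though the conditional rates are flat.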

Hugo Mercier and Dan Sperber have a new theory of reasoning which holds that the act of reasoning is not about discovering, choosing, figuring things out, or deciding. It is only about arguing for what you believe.

Reasoning is generally seen as a means to improve knowledge and make better decisions. Much evidence, however, shows that reasoning often leads to epistemic distortions and poor decisions. This suggests rethinking the function of reasoning. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade.

Applying the notion that reasoning is only to develop better arguments to advocate for your ‘side’ in health policy/reform discussions is either liberating or profoundly depressing. Not sure which. Maybe both.

update: related Matt Yglesias post on ideology scores, and who is actually available to be a crossover voter and change their mind; h/t to Austin.