Why Critical Thinking Is in Short Supply

November 30, 2015 at 07:00 PM

Like essentially everything under the sun, our business is bursting with strong, varied and sometimes contradictory viewpoints. Ken Fisher hates annuities. John Bogle is committed to passive money management. Dave Ramsey thinks that people should expect 12% annual returns. Eugene Fama says that markets are efficient. Seth Klarman views that as crazy talk.

Intuitively, we tend to believe that we can bridge the gaps between people and ideas using reason and careful analysis. Unfortunately, the evidence suggests that doing so is far harder than we'd like to think. We simply aren't very good at looking at the available evidence for a given viewpoint with any degree of objectivity. Maybe all this has something to do with America's declining math scores, but I think the causes are much more insidious.

While information is cheap and getting cheaper, meaning is increasingly expensive. We are beset by confirmation bias, our tendency to look for and accept evidence that supports what we already think we know and ignore the rest. Per motivated reasoning, we tend to reject new evidence when it contradicts our established beliefs. Sadly, the smarter we are, the more likely we are to deny or oppose data that seem in conflict with ideas we deem important. Finally, bringing true believers together in a group tends only to compound the problem.

Rigged Games

In one study, the late Ziva Kunda brought a group of subjects into her lab and told them that they would be playing a trivia game. Before they started, they watched someone else (the "Winner") play, to get the hang of the game. Half the subjects were told that the Winner would be on their team and half were told that the Winner would be on the opposing team. The game they watched was rigged and the Winner answered every question correctly.

When asked about the Winner's success, those who expected to play with the Winner were extremely impressed while those who expected to play against the Winner were dismissive. They attributed the good performance to luck rather than skill (self-serving bias, anyone?). Thus the exact same event receives diametrically opposed interpretations depending upon whose side you're on.

Because of our bias blindness, we all tend to think that even though these sorts of foibles are real, they apply only to others. Thus, for example, a 2006 study found that doctors are poor judges of their own performance. We inevitably draft ourselves onto clans of various sorts and generally deem our side right and the other sides wrong. Getting past that tendency in our personal and professional lives is really hard (to say the least).

Medical Problems

In medicine, perverse incentives are at work pushing people in the wrong direction (and I shouldn't need to add that perverse incentives are pervasive in our business too). Indeed, even when a course of action undertaken by well-trained professionals has been conclusively shown to be in error, they don't tend to change their approaches, especially when there is money involved. Some of that is due to the sorts of biases I have outlined above, but a certain causality illusion is at work too.

We readily acknowledge that correlation does not necessarily imply causation. That's why advertising in our industry carefully states something like "past performance is not indicative of future results." But our brains are wired for us to make the connection we carefully deny. A recent study examined how an illusion of causality influences the way we process new information and found that this illusion both cements bad ideas in our minds and prevents new information from correcting them.

The researchers enlisted the help of students to play doctors who specialize in a fictitious rare disease and to assess whether new medications could cure it. In the first phase of the study, the "doctors" were divided into two groups: a "high illusion" group that was presented mostly with patients who had taken Drug A and a "low illusion" group that saw mostly patients who hadn't. Each "doctor" saw 100 patients and, in each instance, was told whether the patient had recovered. The recovery rate was 70% across the board (meaning that the drug wasn't effective), but the "doctors" weren't told that. The study found that "doctors" in the high illusion group were more susceptible to erroneously concluding that Drug A had a positive effect.

Presumably, because the "doctors" in the low illusion group had more opportunities to see the problem resolve without the drug, they were less prone to assuming a linkage between its administration and recovery. Previous studies have shown that simply seeing a high volume of people achieve the desired outcome after doing something, albeit something ineffective, primes the observer to correlate the two and assume causation.
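
The arithmetic behind the illusion is easy to check. Here is a minimal sketch (my own illustration, not the researchers' materials) in which recovery runs at 70% whether or not the fictitious drug is given; skewing how often a "doctor" sees treated patients piles up "took the drug and recovered" coincidences even though the true contingency, P(recovery given drug) minus P(recovery given no drug), hovers around zero.

```python
import random

def simulate_doctor(n_patients=100, p_drug=0.8, recovery_rate=0.7, seed=1):
    """One simulated 'doctor': recovery is 70% regardless of treatment.

    p_drug is the share of patients who received the drug -- high for the
    'high illusion' group, low for the 'low illusion' group.
    """
    random.seed(seed)
    counts = {("drug", True): 0, ("drug", False): 0,
              ("none", True): 0, ("none", False): 0}
    for _ in range(n_patients):
        treated = random.random() < p_drug
        recovered = random.random() < recovery_rate  # independent of treatment
        counts[("drug" if treated else "none", recovered)] += 1

    def rate(group):
        total = counts[(group, True)] + counts[(group, False)]
        return counts[(group, True)] / total if total else float("nan")

    return counts, rate("drug") - rate("none")  # true contingency is ~0

# A 'high illusion' doctor sees mostly treated patients.
counts, delta_p = simulate_doctor(p_drug=0.8)
print(counts, round(delta_p, 2))
```

What the high illusion group was really reacting to, in other words, was the sheer number of drug-and-recovery coincidences, not any actual difference in recovery rates.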

In the second phase of the study, the experiment was repeated, except this time some patients simultaneously received two drugs — the ineffective one from the first phase and a second drug that actually worked. This time, "doctors" from both groups were presented with 50 patients who'd received the two drugs and 50 who'd received no drugs. Patients in the drug group recovered 90% of the time, while the group that didn't get meds continued to have a 70% recovery rate. Those from the "high illusion" group were less likely than those from the "low illusion" group to recognize the new drug's effectiveness. Instead, they attributed the benefits to the drug they'd already decided was effective. The (erroneous) prior belief in the first drug's potency essentially prevented them from acquiring the (correct) new information.
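
For what it's worth, the data the "doctors" saw in this phase did contain the right answer; getting to it simply required comparing against the untreated baseline rather than against a prior belief. A rough back-of-the-envelope version of that comparison (again my own sketch, not the study's analysis):

```python
# Phase 2 as reported: 50 patients got both drugs, 50 got neither.
recovery_with_both = 0.90   # combined-treatment group
recovery_untreated = 0.70   # untreated baseline, same as phase 1

# The added benefit of treatment over no treatment at all.
added_benefit = recovery_with_both - recovery_untreated  # about 0.20

# Phase 1 already showed Drug A alone leaves recovery near the 70% baseline,
# so the roughly 20-point gain is best attributed to the new drug. The 'high
# illusion' doctors instead credited Drug A, the drug they already believed in.
print(f"Added benefit of the combined treatment: {added_benefit:.0%}")
```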

In another study, researchers invited a group of teenagers to test the effectiveness of a wristband with a metal bar in aiding physical and intellectual performance. Using language deliberately filled with fancy jargon, the researchers explained that the wristbands could help and then invited the students to test out the product while performing certain written tasks. At every step — like a good salesperson — researchers primed the students to find a benefit by raving about how previous users had noticed its performance enhancement capabilities. By the end of the demonstration, many participants said they'd be willing to buy one.

At this point the researchers stopped acting like salespeople and guided the students through a critical thinking tutorial of what they'd just seen. They demonstrated areas where the presented evidence was weak, explained the causality illusion, and emphasized that to assess whether the product had improved performance, they needed to have a clear baseline for comparison.

Thereafter, the researchers ran the students through a new test similar to the one used in the study on the causality illusion. Participants who had learned about the challenges of establishing causality ran more trials without the medication (a necessary step in measuring the drug's effectiveness) and made more accurate assessments of the drug's efficacy. This is a promising result, but as I have noted before, good critical thinking is a skill that can be acquired and improved, and it needs adequate domain knowledge to become truly effective.

If we want to get better as advisors, we'll need to question more and know more, no matter the assumptions with which we begin each day and no matter which team we see ourselves on.
