In 1587, the Roman Catholic Church created the position of Advocatus Diaboli (the "Devil's Advocate") to prepare and raise all possible arguments against the canonization of any candidate for sainthood. Theology aside, it's a fantastic idea, even if almost nobody emulates it.
There is a large body of research on what has come to be known as motivated reasoning and its flip side, confirmation bias. Confirmation bias is our tendency to notice and accept whatever fits within our preconceived notions and beliefs, while motivated reasoning is our complementary tendency to scrutinize ideas more critically when we disagree with them than when we agree.
We are similarly much more likely to recall evidence that supports our views than evidence that opposes them. Upton Sinclair offered perhaps the most popular expression of motivated reasoning: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!"
A related problem is what Nobel laureate Daniel Kahneman calls the "planning fallacy." It's a corollary to optimism bias (think Lake Wobegon, where all the children are above average) and self-serving bias (where the good stuff is my doing and the bad stuff is always someone else's fault). The planning fallacy is our tendency to underestimate the time, costs and risks of future actions while overestimating their benefits. It's why we underestimate the likelihood of bad outcomes, why we think things won't take as long as they do, why projects cost more than we expect and why the results we achieve aren't as good as we hoped.
These cognitive biases, among others, are a constant threat to our decision-making. Fortunately, behavioral economics has done a terrific job of outlining what these risks look like. It is one thing to recognize these cognitive difficulties, of course, and quite another actually to do something about them.
Unfortunately, one major difficulty is the bias blind spot, our general inability to recognize that we suffer from the same cognitive biases that plague other people. If we believe something to be true, we quite naturally think it's objectively true and assume that those who disagree have some sort of problem. It's the same kind of thinking that allows us to smile knowingly when friends tell us how smart, talented and attractive their children are while remaining utterly convinced that our own kids really are all of those things.
Worse still, there isn't a lot we can do to counteract these biases, as even Kahneman concedes. But if we could be more like scientists, who employ a rigorous method to root out error, perhaps our processes would improve.
We can start by using what Harvard Medical School's Atul Gawande calls "the power of negative thinking," which is the essence of the scientific method. That means actively looking for failures and how to overcome them. Yet scientists themselves fall prey to inherent bias, despite their formal protocols designed to find and eliminate error. In his famous 1974 Caltech commencement address, the great physicist Richard Feynman talked about the scientific method as the best means to achieve progress. Even so, notice his emphasis: "The first principle is that you must not fool yourself—and you are the easiest person to fool."
Scientists routinely acknowledge, at least in theory, that they get things wrong, but they also hold fast to the idea that these errors get corrected over time as other scientists try to build upon earlier work. However, John Ioannidis of Stanford has argued convincingly that "most published research findings are probably false," and subsequent reviews support that claim.
In a commentary in Nature last year, scientists at Amgen disclosed that they had been unable to replicate the vast majority (47 of 53) of the landmark pre-clinical research studies they had examined. In a similar study, researchers at Bayer HealthCare reported that they had reproduced the published results in just a quarter of 67 seminal studies they examined. Despite rigorous protocols and a culture designed to root out error aggressively, scientists get things wrong—really important things—a lot more often than we'd like to think.