The Fox and the Hedgehog

April 01, 2013 at 08:00 PM

I'm a big fan of books that help us cut through the noise of our current information overload to better understand how we got where we are and how to do better in the future. A few of my favorites are "Freakonomics" by Steven Levitt and Stephen Dubner; "Stumbling on Happiness" by Daniel Gilbert, which details why our brains are woefully bad at predicting what will make us happy; and "Guns, Germs, and Steel" by Jared Diamond, who explains how "advanced" societies got that way when others didn't.

Diamond's answer is that a relatively small number of technological breakthroughs ("tipping points" to use Malcolm Gladwell's term), one built upon another, yielded huge results such as going from horse-drawn carriages to the moon in under 100 years. One of those advancements was the printing press, which enabled the sharing of information, which led to machines, steam power, the industrial revolution and the greatest explosion of wealth in human history.

Perhaps not coincidentally, in the latest addition to my favorite books list—"The Signal and the Noise"—author Nate Silver also talks about the printing press and how the aftermath of its "information explosion" paralleled that of the Internet. You may recognize Silver's name: In his New York Times blog, FiveThirtyEight, he predicted the results of the past three national elections with great accuracy.

I found "The Signal and the Noise" to be a brilliant book, explaining a broad range of current phenomena such as the extreme political divisiveness in America, how the ratings agencies (and almost everyone else) missed the mortgage meltdown of 2007, why most political pundits are terrible at predicting elections, why virtually no experts foresaw the fall of the Soviet Union, why the majority of published research findings are false, and how the Red Sox defied the odds and missed the 2009 Major League playoffs.

Silver, whose background includes successfully applying statistical analysis to sports betting (think "Moneyball") and poker, does more than simply articulate the difficulty of prediction in the modern world. He cites some success stories as well, including vast increases in the accuracy of weather forecasting, a winning professional gambler and chess-playing computers. Silver uses these and other examples to offer a set of solutions—or, at least, guidelines—to make better predictions within the noise of too much information.

If anyone could use help separating signals of important future changes from the background noise, it's financial advisors. Constructing investment portfolios is, after all, an effort to cut through all the noise we're bombarded with daily by the news media. Asset allocation models are designed to mitigate our natural impulses and tendencies. Yet even with models designed to capture long-term upward trends in the markets while minimizing risk, advisors are making assumptions (usually backed by historical data) about how various asset classes will perform—both in terms of returns and risk profile—and how they will perform relative to each other. If we learned anything from the turbulent markets of 2007 and 2008, it's that outside factors can and do affect both investment returns and their correlations, which suggests that clients would be better served by advisors who do a better job of monitoring the factors that affect asset class performance.
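To make the correlation point concrete, here is a minimal sketch using the standard two-asset portfolio risk formula. The weights, volatilities and correlation values below are invented for illustration; they are not drawn from the column or from market data.

```python
# Illustrative only: hypothetical weights and volatilities showing how one
# assumption -- the correlation between two asset classes -- changes
# portfolio risk. All numbers are invented for the example.
import math

w_stocks, w_bonds = 0.60, 0.40        # hypothetical portfolio weights
vol_stocks, vol_bonds = 0.18, 0.07    # hypothetical annualized volatilities

def portfolio_vol(corr):
    """Standard two-asset portfolio volatility formula."""
    var = (w_stocks**2 * vol_stocks**2
           + w_bonds**2 * vol_bonds**2
           + 2 * w_stocks * w_bonds * vol_stocks * vol_bonds * corr)
    return math.sqrt(var)

for corr in (0.2, 0.9):  # calm-market correlation vs. a crisis-level spike
    print(f"correlation {corr:.1f}: portfolio volatility {portfolio_vol(corr):.1%}")
```

Nothing about the assets themselves changed; only the correlation assumption did, and the portfolio's measured risk rises from roughly 11.7% to 13.4%. The 2007 and 2008 markets mentioned above produced correlation shifts far larger than this illustration.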

According to Silver, "too much" information hampers our predictions in two ways. The first is that our statistical analysis hasn't caught up with the flood of information. The dominant school of statistical analysis, called "frequentism," is based on the idea that all subjective judgments should be eliminated from our calculations. I'll spare you Silver's lengthy discussion of the flaw in this thinking, except to say that he believes this academic ideal isn't attainable in the real world, and that pretending otherwise leads to faulty conclusions.

One example of this problem can be seen in scientific research. A 2005 study by John Ioannidis, aptly titled "Why Most Published Research Findings Are False," concluded, "The majority of hypotheses deemed to be true in journals in medicine and most other academic and scientific professions are, in fact, false," Silver wrote. Mind-boggling, no? Yet, according to Silver, when Bayer Labs tested Ioannidis' findings, "they could not replicate about two-thirds of the positive findings claimed in medical journals."

Silver suggests an alternative system for probability analysis based on Bayes' Theorem (again, I'll spare you the details), which, in very simplified terms, requires taking a hard look at our assumptions, many of which are rooted in our biases. According to Silver, the advantage of the Bayesian approach is that it involves "an explicit acknowledgement that our prior beliefs and biases can affect how we interpret new evidence and what those prior beliefs are."
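For readers who want to see the machinery, here is a minimal sketch of a Bayesian update, framed in the spirit of the Ioannidis discussion above. The prior, study power and false-positive rate are illustrative assumptions, not figures from Silver's book.

```python
# A minimal sketch of Bayesian updating, with invented numbers.
# Suppose only 10% of the hypotheses researchers test are actually true,
# and ask: given a "statistically significant" positive finding, how
# likely is the underlying hypothesis to be true?
prior = 0.10       # P(hypothesis is true) -- our explicitly stated prior
power = 0.80       # P(positive finding | true), an assumed study power
false_pos = 0.05   # P(positive finding | false), the usual 5% threshold

# Bayes' Theorem: P(true | positive) = P(positive | true) * P(true) / P(positive)
p_positive = power * prior + false_pos * (1 - prior)
posterior = power * prior / p_positive
print(f"P(hypothesis true | positive finding) = {posterior:.0%}")  # 64%
```

Even with a well-powered study and the conventional 5% significance threshold, roughly a third of positive findings are false under this prior; lower the prior or the power and the posterior slips below 50%, which is the mechanism behind the Ioannidis result quoted above.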

Silver cites the performance of the credit ratings agencies—primarily Moody's and Standard & Poor's—prior to the mortgage meltdown as an example of traditional statistical analysis based on very unrealistic assumptions. Before the crash, S&P determined that there was only a 0.12% (1 in 850) chance that the collateralized debt obligations it analyzed would fail within the following five years. In fact, 28% of them failed—a failure rate more than 200 times higher than predicted. According to Silver and others, the problem was that both agencies based their analyses on two assumptions: that real estate defaults occur independently (when someone loses their job, for instance) rather than collectively (say, when the economy crashes), and that if the real estate bubble burst (which they anticipated), it would have a minimal effect. Indeed, when a real estate crash appeared imminent, Moody's merely "increased the default probability in its model by 50%." In reality, the agencies overlooked the extent to which the collateralized debt obligation market was leveraged: The default rates were "200 times (20,000%) higher than their models predicted."
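The gap between those two assumptions is easy to see in a toy simulation. The sketch below is not the agencies' actual model; it is a hypothetical pool of 100 loans with made-up default probabilities, showing how a shared economic shock makes a "safe" pooled security fail orders of magnitude more often than independent defaults would suggest.

```python
# Toy simulation with invented parameters: a hypothetical security backed by
# 100 mortgages "fails" if 20 or more of them default. We compare defaults
# that occur independently against defaults driven by a shared crash.
import random

random.seed(1)
N_LOANS, TRIALS = 100, 20_000
BASE_DEFAULT = 0.05    # assumed per-loan default probability in normal times
CRASH_PROB = 0.10      # assumed chance of an economy-wide crash
CRASH_DEFAULT = 0.40   # assumed per-loan default probability during a crash
THRESHOLD = 20         # failure = 20+ defaults out of 100 loans

def security_fails(correlated: bool) -> bool:
    p = BASE_DEFAULT
    if correlated and random.random() < CRASH_PROB:
        p = CRASH_DEFAULT  # one shock raises every loan's risk at once
    defaults = sum(random.random() < p for _ in range(N_LOANS))
    return defaults >= THRESHOLD

for label, correlated in (("independent", False), ("correlated", True)):
    fails = sum(security_fails(correlated) for _ in range(TRIALS))
    print(f"{label:>11}: security fails in {fails / TRIALS:.2%} of trials")
```

Under independence, 20 simultaneous defaults are essentially impossible (the simulation typically reports 0.00%); with the shared shock, the security fails in roughly 10% of trials. That is the same structural blind spot, in miniature, that Silver describes.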

As Silver tells it, the problem is that our brains are wired to make sense out of the world around us by looking for patterns. By itself, this tendency can lead us to see correlations where none actually exist, which is why we need scientific experiments to test our hypotheses. This problem is compounded, however, by our current state of information overload, in response to which "we focus on those signals that tell a story about the world as we would like it to be, not how it really is. We ignore the risks that are hardest to measure, even when they pose the greatest threats to our well-being. We make approximations and assumptions about the world that are cruder than we realize. And we abhor uncertainty, even when it is an irreducible part of the problem we are trying to solve."

How can we manage our biases to make better predictions in the real world? Silver cites an Isaiah Berlin essay called "The Hedgehog and the Fox," which is based on the notion that "the fox knows many little things, but the hedgehog knows one big thing." The idea is that there are two kinds of experts: hedgehogs, who are "type-A personalities who believe in Big Ideas—in governing principles about the world that behave as though they were physical laws and undergird virtually every interaction in society"; and foxes, who are "scrappy creatures who believe in a plethora of ideas and in taking a multitude of approaches toward a problem. They tend to be more tolerant of nuance, uncertainty, complexity and dissenting opinion. If hedgehogs are hunters, […] foxes are gatherers."

As you might imagine, foxes turn out to be the better predictors. They are more cautious, rely more on observation than on theory, and constantly challenge their ideas and predictions in light of new information or insights. Those traits make them adaptable, quick to find a new approach when the old one isn't working. Hedgehogs look for simple rules about how the world works and, once they identify them, defend them aggressively, rarely changing or even hedging them.

Silver suggests that we try to be more fox-like in our predictions, but acknowledges that in our sound-bite world, hedgehogs are often more popular, especially in the media. This, of course, is a problem for professionals such as financial advisors, who need to be right in their predictions but, at the same time, need to convince others of their expertise.

For advisors, perhaps the solution is to do their homework as a fox. Then, when talking with clients or prospective clients, be slightly more hedgehogish: state the limits of their advice (no one can be certain of the direction of the markets or the economy), describe how they manage that uncertainty to give their clients the greatest chance of success based on what they know right now, and explain how they've built in feedback loops to monitor the results and make appropriate adjustments. In that way, an advisor can be an expert at helping clients separate the signal from the noise.
