Guesswork masquerades as expertise in stock market forecasts
Expert predictions tend to be no more accurate than chance, research indicates
For what it’s worth, Wall Street strategists predict the S&P 500 will advance at a more modest pace this year, rising about 7 per cent in 2018 compared with 20 per cent in 2017.
However, it’s “probably not a good idea” to take such forecasts too literally, says Urban Carmel of the Fat Pitch blog, who notes strategists’ year-end target for 2017 was hit in February.
It was a similar picture in 2016; Goldman Sachs abandoned five of its top six trade ideas for the year within a month of releasing them. At the beginning of 2015, Barron’s was reporting that the consensus forecast for the year was “uniformly upbeat”, with strategists eyeing double-digit gains; instead, stocks endured a flat year.
These are not exceptional cases. In 2015, financial commentator Morgan Housel analysed forecasts made by the 22 chief market strategists at Wall Street’s biggest investment firms over the 2000-2014 period. The average strategist didn’t forecast a single down year during that period, which was characterised by the two biggest crashes since the 1930s (the 2000-2002 dotcom crash and the global financial crisis of 2007-2009).
On average, strategists’ annual forecasts were off by 14.7 percentage points per year. Nor are the figures distorted by the 2008 crash. Even if one excludes 2008, strategists’ average forecast missed the mark by 12 percentage points annually.
In a refreshing break from the norm, UBS global chief economist Paul Donovan last month argued his colleagues in the investment business should focus on providing clients with meaningful analysis rather than issuing precise forecasts that are bound to miss the mark.
“Economists should not forecast,” Donovan wrote in a client note. “Economic models are not precise. Models use lots of assumptions. Those assumptions may not turn out to be true. Models give a range of possibilities rather than a single, certain number.” Accordingly, “investors just need to know that the world is doing OK. Inflation may rise a little. Central banks will likely slowly tighten policy.” There is, he added, “no need to get dramatic about decimal points.”
If precise economic forecasting is hard, stock-market forecasting is even harder. After all, even if you had a crystal ball and knew exactly how the global economy would perform in 2018, you still could not be sure how this would impact on stock markets. A cheap stock market might rise in the face of a poor economic outlook, just as an expensive market might fall in the face of strong economic conditions.
Similarly, there are too many variables to consider. How will the economy perform? How will this affect interest rates? Which sectors will outperform in such an environment? Which are the best stocks in those sectors?
Even if a strategist gets it right on each forecast 75 per cent of the time – way, way above that seen in reality – the odds of getting all four forecasts right are just 32 per cent. If he or she is right 65 per cent of the time, there is a mere 18 per cent chance all four forecasts will be right.
“Now think about the number of forecasts an average analyst’s model contains – sales, costs, margins, taxes, and so on”, writes GMO strategist James Montier in his Little Book of Behavioural Investing. “No wonder these guys are never right.”
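The compounding effect Montier describes can be sketched in a few lines of Python. This is illustrative only: it assumes each forecast in the chain is independent and has the same per-forecast accuracy, which real-world forecasts plainly do not; the function name is our own.

```python
# Sketch of how forecast errors compound across a chain of calls:
# economy -> interest rates -> sectors -> stocks.
# Assumes (unrealistically) independent forecasts with equal accuracy.

def chain_accuracy(per_forecast_accuracy: float, n_forecasts: int) -> float:
    """Probability that every forecast in an independent chain is correct."""
    return per_forecast_accuracy ** n_forecasts

# A strategist right 75 per cent of the time on each of four calls:
print(round(chain_accuracy(0.75, 4), 2))  # 0.32

# Right 65 per cent of the time on each call:
print(round(chain_accuracy(0.65, 4), 2))  # 0.18
```

With the dozens of inputs in a typical analyst's model, the same arithmetic drives the joint probability of being right toward zero, which is Montier's point.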
Economists know the limitations of their models, says Donovan. However, the “world of hashtag economics” doesn’t foster nuanced, evidence-based analysis. Donovan notes that it’s “difficult to warn about possibility ranges and underlying assumptions in 280 characters”.
Rather, there is a demand for simple, precise forecasts and most forecasters remain all too willing to cater to this demand, as evidenced by the reception areas at CNBC and Bloomberg TV being “crowded with mobs of economists fighting to get their forecasts on air”.
Still, marketing is not the only reason that forecasters engage in the futile forecasting game. Psychologists and behavioural economists have long known that humans are an overconfident lot and that this overconfidence causes them to have too much faith in their ability to divine the future.
In reality, expert predictions tend to be no better than chance, as confirmed by a famous study conducted by predictions expert Prof Philip Tetlock that analysed some 28,000 predictions made by political experts over a two-decade period. There are, Tetlock writes in his book Expert Political Judgment, many reasons why experts get things wrong.
Firstly, experts “can talk themselves into believing they can do things that they manifestly cannot”. There are diminishing returns to knowledge. Beyond a certain point, further information leads to increased confidence but not increased accuracy.
Secondly, like most people, experts are reluctant to change their minds as they don’t like to admit when they’re wrong. They fall prey to confirmation bias, subconsciously seeking out evidence that backs up their assessment and ignoring that which contradicts their viewpoint.
Crucially, there is also the problem of hindsight bias, the “I knew it all along” attitude. Separate psychological research indicates there are three levels to hindsight bias, namely misremembering an earlier opinion or judgment (“I said it would happen”), inevitability (“It had to happen”) and foreseeability (“I knew it would happen”).
Don’t think that hard-nosed financial types are immune to this bias. According to a 2009 study published in Management Science, investment bankers are “significantly hindsight biased”, and this bias is just as likely to be found in experienced, knowledgeable bankers. Hindsight bias, suggests Tetlock, means experts can be slow to learn from their mistakes.
Furthermore, experts typically resort to a number of excuses when they get things wrong, says Tetlock. There is the “if only” clause (if X or Y had happened, I would have been right); the “ceteris paribus” clause (it’s not my fault because something utterly unexpected happened); the “it almost occurred” clause (it didn’t happen but I was close); and the “just wait” clause (I’m not wrong, just early).
Information v knowledge
Accurate forecasting is not impossible, says Tetlock, who in recent years has been documenting the success enjoyed by a small group of “super-forecasters” who take an ultra-scientific, data-driven approach. Unfortunately, this approach could not be further from the simplistic game of price targets played by investment strategists.
Investors, warns James Montier, tend to “equate information with knowledge”, but “the two are very often different beasts”.
As a general rule, strategist reports are detailed affairs compiled by diligent, educated professionals. But how much of the information contained in the reports is relevant?
Fund giant Vanguard once analysed the predictive power of various popular investment signals, such as price-earnings ratios, GDP growth, profit margins and so on. Rainfall – “a metric few would link to Wall Street performance” – turned out to be a better predictor of stock returns than many popular metrics, said Vanguard.
Of course, 2018 might be different. Who knows, the forecasters may get lucky. However, the research is clear. The annual forecasting contest is simply a guessing game masquerading as analysis.