Better Forecaster of the Future: Confidence or Prudence?
We have long endured the wrong forecasts of pundits and experts without their paying any cost for those errors. Philip Tetlock (University of Pennsylvania) highlighted this in 2005, when he released his 20-year study of 284 experts (professors, journalists, civil servants, etc.). According to “Intelligent Intelligence” (The Economist, July 19, 2014 edition), “their performance was abysmal.”
These results naturally raised questions about intelligence agencies, prompting David Mandel, of Defence Research and Development Canada, and Alan Barnes, a former intelligence analyst, to publish Accuracy of Forecasts in Strategic Intelligence. Examining intelligence analysts’ forecasts, they found significantly better forecasting than Tetlock found among pundits and experts.
When they dug deeper into analysts’ personalities, they found caution – even about their own abilities – especially pronounced among those classified as “superforecasters.” When these analysts erred, it was on the side of uncertainty, meaning they were more likely to say, “I don’t know.”
More significantly, experienced analysts forecasted better than junior ones, meaning good forecasting is learnable as long as three protocols exist:
1. Accountability for forecasts
2. Skepticism by analysts’ managers
3. Absence of self-serving biases
As James Surowiecki writes in “Punditonomics” (The New Yorker, April 7, 2014 edition), pundits and experts don’t forecast within these protocols. Their confidence is rewarded and their errors go unpunished – yet people can suffer from erroneous forecasts.
Businesses aren’t immune. Confidence and glowing forecasts easily seduce us. Style often trumps content and competence. Skepticism is difficult when we’re hearing what we want to hear, and more so when it’s seen as dissent or pessimism. That doesn’t even address the tendency to interpret prudence as underconfidence.
All of which I’m prudently confident will change . . . some day.
If #1 doesn’t exist, it’s likely a guess more than a forecast. If #2 doesn’t exist, it’s lousy R&D. If #3 doesn’t exist, it’s a sales job, not a forecast!
I always wonder how such studies are calibrated. What was the definition of “expert,” and how was it calibrated? What were the qualifications for a forecast, and how were forecasts reviewed? What was the definition of success, and how was it calibrated?
Thank you, John, I didn’t look at it that way – very humorous and enlightening! Yes, I wonder too. I can’t always vet a study as well as I would like, so I’ve vetted publications to help me. I do know the definition of expert had some metric of popularity associated with it, as the study was able to conclude that the best-known experts tended to be the most wrong. Your questions are all good, significant, and apropos, but I do not have answers.
I’m pleased, though, that you ask those questions, because I often have difficulty explaining why questions like yours are so significant. I try to use polls as an example, since there are many of them and people have some sense of what they’re meant to achieve. Still, diving into definitions and how responses are filtered and counted challenges many people’s patience and concentration. If you know of a simple way to demonstrate why this is important, I welcome hearing it.
Thank you again for your visit and comment. ~Mike