
4 Aug 2014

Better Forecaster of the Future: Confidence or Prudence?

[Image: Dice (Twelve & Two)]


Who’s better at forecasting, the confident or the prudent? So far, the prudent seem to be winning, confidently. More tellingly, those who are most confident tend to be most wrong.

We have long endured the wrong forecasts of pundits and experts without their paying any cost for those errors. Philip Tetlock (University of Pennsylvania) highlighted this in 2005, when he released his twenty-year study of 284 experts (professors, journalists, civil servants, etc.). According to “Intelligent Intelligence” (The Economist, July 19, 2014 edition), “their performance was abysmal.”

These results naturally raised questions about intelligence agencies, prompting David Mandel of Defence Research and Development Canada and Alan Barnes, a former intelligence analyst, to publish Accuracy of Forecasts in Strategic Intelligence. In this examination of intelligence analysts’ forecasts, they found significantly better forecasters than Tetlock did among pundits and experts.

When they dug deeper into analysts’ personalities, they found caution – even about their own abilities – especially pronounced among those classified as “superforecasters.” If these analysts erred, it was on the side of uncertainty, meaning they were more likely to say, “I don’t know.”

More significantly, experienced analysts forecast better than junior ones, meaning good forecasting is learnable as long as three protocols exist:

  1. Accountability for forecasts
  2. Skepticism by analysts’ managers
  3. Absence of self-serving biases

As James Surowiecki writes in “Punditonomics” (The New Yorker, April 7, 2014 edition), pundits and experts don’t forecast within these protocols. Their confidence is rewarded and their errors go unpunished. Erroneous intelligence forecasts, by contrast, can make people suffer.

Businesses aren’t immune. Confidence and glowing forecasts easily seduce us. Style often trumps content and competence. Skepticism is difficult when we are hearing what we want to hear, and more so when that skepticism is seen as dissent or pessimism. That doesn’t even address the tendency to interpret prudence as underconfidence.

All of which I’m prudently confident will change . . . some day.

2 Responses

  1. If #1 doesn’t exist, it’s likely to be a guess more than a forecast. If #2 doesn’t exist, it’s lousy R&D. If #3 doesn’t exist, it’s a sales job, not a forecast!!!

    Always wonder how such studies are calibrated??? How was the definition of “expert” calibrated? What were the qualifications for a forecast, and how were they reviewed? How was the definition of success calibrated?

    1. Mike Lehr

      Thank you, John. I didn’t look at it that way; very humorous and enlightening! Yes, I wonder too. I can’t always vet a study as well as I would like, so I’ve vetted publications to help me. I do know the definition of expert had some metric of popularity associated with it, since the study was able to conclude that the most well known tended to be most wrong. Your questions are all good, significant, and apropos, but I do not have answers.

      I’m pleased, though, that you ask those questions, because I often have difficulty explaining why they are so significant. I try to use polls as examples, since there are many of them and people have some sense of what’s being achieved with them. Still, diving into definitions and how responses are filtered and counted challenges many people’s patience and concentration. If you know of a simple way to demonstrate why this is important, I welcome hearing it.

      Thank you again for your visit and comment. ~Mike

