Had Enough of Experts? Had Enough of Failure.

Roland Valentine Stewart points out the predictive failures of our ‘experts’ – and suggests where we should look to improve their forecasts.

 

In Britain it was probably the quotation of 2016: ‘People in this country have had enough of experts…’

Conservative politician Michael Gove never finished his point. In the heat of debate during Britain’s EU referendum campaign, he was cut off by Sky News’s Faisal Islam, who instantly spotted the chance to portray the arguments of the Leave campaign’s most articulate spokesman as unscientific and baseless.

Gove’s unfinished sentence has been mocked ever since. His words were portrayed as evidence that UK independence, opposed by the vast majority of ‘experts’ and international bodies, was driven primarily by a dangerous cocktail of ignorance and prejudice.

Yet Gove was absolutely right to question the foresight of our experts.

To understand why, we need to forget the EU for a moment and instead focus on something rarely discussed but nevertheless important: economic forecasting and the prediction of policy outcomes – why we are bad at it, and how we might improve. Gove’s point was not that experts should be ignored. It was rather that their political influence is in decline, and that, if they want to reverse this trend, they need to raise their game.

At one level Gove’s argument is relatively uncontroversial. Whether in academia, Whitehall, or the City of London, experts have long tended to promulgate the latest conventional wisdom, only to find later that their forecasts bear no relation to reality.

They underestimated Hitler. They thought it would be impossible to leave the gold standard. They pushed Britain into the disastrous Exchange Rate Mechanism (ERM), and wanted to join the calamitous euro. The Iraq War and the failure to predict the 2008 financial crisis were probably the final straws.

British voters were right to regard expert hostility to Brexit as just the latest episode in a series of expensive gaffes. ‘The experts can’t all be wrong’ is a comforting belief. But it’s also a lazy one, and sometimes disastrous.

Gove was hinting at something more than this piece of common sense, though: not only are experts often wrong, but their forecasting abilities never seem to improve. Our political debates often seem to go round in circles without resolution, no matter how many expert predictions fail. There is little accountability: the same experts keep their jobs in academia and in newspapers. They still fill our television screens and receive deference at Davos.

How does this happen? Let’s start with just one reason: the loose and ambiguous language in which expert forecasts are often delivered in the first place. ‘A vote for Brexit will lead to economic catastrophe’ was a constant expert-approved refrain in 2016.

Yet a forecast like this is impossible to mark, and so its author can always avoid damage to their reputation, however events turn out. How do you define ‘economic catastrophe’? When will this catastrophe emerge – immediately or years later? If no catastrophe happens, then the meaning of words or terms can and will be twisted, goalposts shifted, by those whose credibility depends on the forecast being right: “the catastrophe hasn’t happened yet, but just you wait…”; “the catastrophe has happened – my definition of catastrophe is different to yours…”.

This ambiguous forecasting, impossible to ‘score’, is an important part of the ‘expert problem’.

It means that debates are never ‘won’, and people never proven wrong. It becomes impossible to track experts’ forecasting accuracy and good judgement (or lack thereof) over time. Charlatans can pose as ‘big name’ sages no matter how often they fail. No one is ever forced, for the sake of their credibility, to learn the right lessons from forecasting failures; nor to use those lessons to make better, more realistic forecasts in future. The quality of debate and the wisdom of government policy consequently suffer. Arguments are rated according to the number of their supporters rather than the credibility and judgement of those supporters. It’s a depressing cycle, and one that erodes trust in experts.

[Image: Michael Gove ‘experts’ meme. Photo by Policy Exchange’s own AV serf, used under CC BY 2.0.]

 

Is there a solution? As the referendum campaign began, a little-noticed photograph, published in the Daily Mail, showed Canadian psychology professor Philip Tetlock’s 2015 book, Superforecasting: The Art and Science of Prediction, poking out of Gove’s satchel. It suggests that at the time of his ‘experts’ quip, Gove was aware of at least one figure who can offer some ideas.

Philip Tetlock’s research has focussed on forecasting and how to improve it. It has involved thousands of experts – academics, journalists, government officials and others – making predictions by answering hundreds of questions on all manner of topical subjects around the globe. They do this alongside amateur volunteers.

Experts don’t like his most famous conclusion – that on average, tracked over almost twenty years, experts did about as well predicting the future as ‘dart-throwing chimpanzees’. 

But Superforecasting is not just bad news. Tetlock discusses the methods that he believes the best forecasters use to make predictions, and there is evidence they work. In a US intelligence-sponsored tournament designed to test forecasting methods over a three-year period, his best volunteers, the ‘superforecasters’, consistently and significantly outperformed their competitors, including professional intelligence analysts with access to classified information.

Yet brilliant though these superforecasters are, their success, Tetlock argues, relies on scrutiny and feedback. They are scored on each prediction, on both their judgement and their confidence, and have nowhere to hide when they are wrong – the questions they answer have clear time horizons and relate to specific events in unambiguous language. Scoring enables them to prove their consistent excellence. Accountability and feedback help them maintain it.

Here, then, we can find lessons for how forecasting, and therefore policy-making, might improve. Tetlock wants to see experts in any institution scored on each prediction they make, and a record of their aggregate score maintained. He wants them to know when they are unambiguously wrong, so they can acknowledge mistakes and increase their chances of making better forecasts next time.

For this scoring and feedback system to work, experts need to pose and answer more focused questions, about specific events, with unambiguous wording and definite time horizons: “will sterling fall by 10% against the euro by 31st December 2016?”, not “will Brexit lead to an economic catastrophe?”.
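To make this concrete, here is a minimal sketch of the kind of scoring involved. Tetlock’s tournaments marked forecasts with the Brier score, which punishes confident misses far more heavily than cautious ones; the Python below is only an illustration, and the forecaster, probabilities and outcomes in it are invented.

```python
# A minimal sketch of per-prediction scoring: each forecast is a probability
# attached to an unambiguous question with a fixed deadline, and is marked
# with a Brier score once the outcome is known (0 = perfect, 2 = worst for a
# binary question). All names and numbers here are illustrative.

from dataclasses import dataclass, field

@dataclass
class Forecaster:
    name: str
    scores: list = field(default_factory=list)

    def score(self, probability: float, outcome: bool) -> float:
        """Brier score for a binary question: the sum of squared errors
        across both possible outcomes."""
        p_yes, p_no = probability, 1.0 - probability
        actual_yes, actual_no = (1.0, 0.0) if outcome else (0.0, 1.0)
        brier = (p_yes - actual_yes) ** 2 + (p_no - actual_no) ** 2
        self.scores.append(brier)
        return brier

    @property
    def aggregate(self) -> float:
        """Running average Brier score: the forecaster's track record."""
        return sum(self.scores) / len(self.scores)

# 'Will sterling fall by 10% against the euro by 31st December 2016?'
expert = Forecaster("expert")
expert.score(probability=0.8, outcome=True)   # confident and right: 0.08
expert.score(probability=0.9, outcome=False)  # confident and wrong: 1.62
print(f"{expert.name}: aggregate Brier score {expert.aggregate:.2f}")
```

Note how a single overconfident miss (1.62) dwarfs the reward for a confident hit (0.08). That asymmetry is precisely the accountability that vague talk of ‘catastrophe’ allows forecasters to escape.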

In short, experts need to be prepared to put their credibility on the line. It would require bravery for experts to subject themselves to this level of scrutiny, but it could improve their performance and remind people of their relevance.   

Michael Gove is right: people have had enough of experts. But maybe Tetlock’s work offers ideas for a comeback.

 

Roland Valentine Stewart is a Staff Writer for The Quad. He read History at the University of Cambridge.

