Can Intelligence Analysts Predict The Future?

Answer unclear, ask again later

Research Could Make Forecasts More Accurate


Intelligence agencies, the spies and spooks and analysts grouped under three-letter acronyms, exist in part to answer a difficult question that dates back to antiquity: Is it possible to predict the future, and, if so, how do we do it? A study published this month in the Journal of Experimental Psychology answers that question at least in part: Prediction is a skill, but it takes a special environment to develop that skill.

To understand how prediction works, researchers wanted to see if certain behaviors—such as making a lot of predictions, taking time to consider a question before answering it, or just having a working knowledge of politics in the region in question—affected a forecaster's accuracy.

For the experiment, participants competed in two nine-month-long forecasting tournaments. The questions for the tournaments were selected by the Intelligence Advanced Research Projects Activity. Over the two years, the forecasters were asked a total of 199 questions, which “covered topics ranging from whether North Korea would test a nuclear device between January 9, 2012, and April 1, 2012, to whether Moody’s would downgrade the sovereign debt rating of Greece between October 3, 2011, and November 30, 2011.” Forecasters had to answer at least 25 of the questions. The vast majority of the questions had just two possible outcomes, such as whether a certain embattled world leader would remain in power after a given date. Other questions asked forecasters to choose one time frame from several options for a possible future event. Participants answered the questions online.
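Accuracy in probability-forecasting tournaments like these is conventionally measured with a Brier score: the average squared difference between the probabilities a forecaster assigned and what actually happened. As a rough sketch (the exact scoring rules of the study may differ), a binary-question Brier score looks like this:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes (1 = the event happened, 0 = it did not).
    0.0 is a perfect score; always guessing 50% earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who leaned confidently the right way on three questions:
score = brier_score([0.9, 0.1, 0.8], [1, 0, 1])
print(round(score, 3))  # 0.02 -- much better than chance
```

Lower is better, which is why updating a forecast as new information arrives (a habit the best forecasters shared) directly improves the score.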

In the first year, participants were assigned to one of nine possible configurations of prior training and group collaboration. After the first year, the researchers realized that probabilistic-reasoning training mattered most, so they narrowed the forecaster groupings to just four categories: teams with probabilistic-reasoning training, individuals with that training, teams without it, and individuals without it.

After the two tournaments, researchers took answers from people who participated in both rounds and who answered at least 30 questions. From this final pool of 743 forecasters, the researchers looked for traits and working environments that the most accurate forecasters had in common. This is what they found:

The best forecasters scored higher on both intelligence and political knowledge than the already well-above-average pool of forecasters. The best forecasters had more open-minded cognitive styles. They benefited from better working environments with probability training and collaborative teams. And while making predictions, they spent more time deliberating and updating their forecasts.

By analyzing the type of environment that fosters better prediction, the researchers hope to steer the intelligence community away from ‘avoid the last mistake’ kinds of thinking. They specifically highlight two major American intelligence failures, and the poor responses made by the intelligence community.

Analysts also operate under bureaucratic-political pressure—and are tempted to respond to previous mistakes by shifting their response thresholds. They are likelier to say “signal” when recently accused of underconnecting the dots (i.e., 9/11) and to say “noise” when recently accused of overconnecting the dots (i.e., weapons of mass destruction in Iraq).

More tournaments could hold the key to escaping this cycle, the researchers suggest, by helping forecasters to monitor their long-term accuracy, as opposed to just the most recent mistakes.

With knowledge about what makes a better forecasting environment, intelligence agencies could encourage these behaviors, so that predicting the future becomes more of a practiced science than a haphazard art.