Building Better Predictions From Polls
A challenge from his son led NC State statistician Fred Wright to analyze exactly what went wrong with political polling leading up to the 2016 presidential election. His analysis of polling methods reveals that inaccurate election predictions weren’t due to shy voters or flawed surveys, but to excessive averaging over multiple polls. Now Wright and his son propose a more accurate model.
Polling predictions come in two parts: the prediction itself (in this instance, the overall spread in support between Clinton and Trump), and the confidence in that prediction, usually expressed as a percentage or a probability of winning.
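As a rough illustration of how a point estimate plus its uncertainty becomes a win probability, the sketch below uses a simple normal approximation. This is a generic calculation for illustration only, not the specific method used by any prediction site.

```python
# Minimal sketch: turning a predicted spread and its uncertainty into a win
# probability via a normal approximation. Generic illustration only; not the
# method of any particular prediction site.
from scipy.stats import norm

def win_probability(spread, std_err):
    """Probability that the true spread favors the leading candidate,
    assuming the error around the predicted spread is roughly normal."""
    return norm.cdf(spread / std_err)

# A predicted Clinton-minus-Trump spread of +3 points with a 3-point
# standard error implies roughly an 84 percent win probability.
print(win_probability(3.0, 3.0))  # ~0.841
```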
Pollsters arrive at these probabilities by aggregating, or averaging, polls: at a basic level, more polls mean more precision. But polling organizations also have to decide how far back to go in the data they aggregate. Most sites include polls from roughly the last month or more, average them, and project the winner.
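A stripped-down version of that aggregation step might look like the following sketch, which simply averages every poll that falls inside a fixed lookback window; the polls, dates, and spreads here are invented for illustration.

```python
# Toy poll aggregation: average the spread from every poll whose field date
# falls within a fixed lookback window. All numbers are invented.
from datetime import date, timedelta

# (median field date, Clinton-minus-Trump spread in points) -- hypothetical
polls = [
    (date(2016, 10, 10), 6.0),
    (date(2016, 10, 20), 5.0),
    (date(2016, 10, 28), 3.0),
    (date(2016, 11, 4), 1.5),
]

def windowed_average(polls, as_of, window_days=30):
    """Average the spreads of all polls taken within `window_days` of `as_of`."""
    recent = [spread for d, spread in polls
              if timedelta(0) <= as_of - d <= timedelta(days=window_days)]
    return sum(recent) / len(recent) if recent else None

print(windowed_average(polls, as_of=date(2016, 11, 7)))  # averages all four polls
```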
“Despite all the attention to what ‘went wrong’ in the election prediction, there was little attention to the quantitative performance of prediction sites like FiveThirtyEight or HuffPost,” Wright says. “The popular press reacted to the fact that many sites had been highly confident of a Clinton victory, but how wrong were they, really? To answer that question, we looked at the state-by-state performance of predictions instead of looking only at the swing states.”
Wright reverse-engineered reported results from the prediction sites to tease out state-specific data and put everything on the same scale. He then developed a regression model in which each state’s estimate of candidate support reflected both a national component and a state-specific deviation from that national component.
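One way to express that kind of decomposition is the sketch below, which fits each poll’s margin as a shared national time trend plus a fixed offset for its state. This is a simplified stand-in, not the authors’ actual model or code, and the data are hypothetical.

```python
# Simplified sketch of a "national component + state-specific deviation"
# regression. Not the authors' model; hypothetical data for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical poll-level data: state, days before the election, margin.
df = pd.DataFrame({
    "state":    ["PA", "PA", "WI", "WI", "FL", "FL"],
    "days_out": [30, 7, 28, 6, 25, 5],
    "margin":   [4.0, 2.0, 6.0, 3.0, 1.0, -1.0],  # Clinton minus Trump, points
})

# margin ~ a national time trend (days_out) + a per-state offset
model = smf.ols("margin ~ days_out + C(state)", data=df).fit()
print(model.params)  # intercept, state deviations, national trend coefficient
```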
“The idea was to be able to view both state-specific information as well as national trends,” Wright says. “Doing so showed that the main thing distinguishing the ‘good’ from ‘bad’ prediction sites in 2016 was that on a state-by-state basis, the latter were overconfident, and their models were insufficiently sensitive to a late change before the election.”
Wright’s model also dated the start of the decline in Clinton’s overall support earlier than generally recognized. “Since our model is designed to be more sensitive, it picked up a late, strong decline in support for Clinton that pre-dated the Comey letter,” Wright says. “According to our results, had we run our model just prior to the election, it would have given Trump a 47 percent chance of victory.
“On a related note, our data suggest that polls were not very biased against Trump,” Wright continues. “We show that the polls may have been roughly correct at the time, but when they were averaged to obtain a consensus, the dropping support for Clinton was inappropriately smoothed over in the final week.”
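The smoothing effect Wright describes is easy to reproduce with toy numbers: when support falls sharply in the final week, a month-long average still sits near the earlier level, while a final-week average reflects the drop. The figures below are invented purely to illustrate the point.

```python
# Toy demonstration of over-smoothing: a month-long average lags a late drop.
# Daily spreads are invented for illustration only.
daily_spread = [5.0] * 23 + [5.0 - 0.5 * i for i in range(1, 8)]  # last-week decline

month_avg = sum(daily_spread) / len(daily_spread)
last_week_avg = sum(daily_spread[-7:]) / 7

print(f"30-day average spread: {month_avg:.2f}")     # ~4.53, still near 5
print(f"final-week average:    {last_week_avg:.2f}")  # 3.00, reflects the decline
```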
Wright credits his son and co-author, Alec Wright, an undergraduate at the University of North Carolina at Chapel Hill, with the idea for the research and the resulting model. “This work started as a challenge from my son, who made some snarky comments about statisticians after the election. I claimed I could do better, and so roped him in to work on this and propose a new model. I think our effort has been successful.” The research appears in Electoral Studies.