How Off Were The Polls?

Wang does some calculations:

In close Senate races, Republicans outperformed polls by an average of 5.3 percentage points. Prime examples of that effect could be seen with Republican wins in Kansas and North Carolina, two races that went against pre-election polls.

In gubernatorial races, Republicans outperformed polls by nearly 2 percentage points on average. This was enough to put Paul LePage of Maine (polls tied), Rick Scott of Florida (polls tied), and Bruce Rauner of Illinois (polls Quinn +2.0%) over the top.
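To see where a number like that 5.3-point average comes from, here is a minimal sketch of the arithmetic; the races and margins are hypothetical placeholders, not Wang's data:

```python
# How a figure like "Republicans outperformed polls by X points" is computed:
# average, across close races, of (actual R-minus-D margin) minus (final polling margin).
# The races and margins below are hypothetical placeholders, not Wang's data.
races = {
    "Race A": {"poll_margin": -1.0, "actual_margin": 4.5},
    "Race B": {"poll_margin": 0.5, "actual_margin": 6.0},
    "Race C": {"poll_margin": -2.0, "actual_margin": 2.5},
}
misses = [r["actual_margin"] - r["poll_margin"] for r in races.values()]
print(f"average Republican outperformance: {sum(misses) / len(misses):.1f} points")
# -> average Republican outperformance: 5.2 points
```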

Silver ponders these polling misses:

Interestingly, this year’s polls were not especially inaccurate. Between gubernatorial and Senate races, the average poll missed the final result by about 5 percentage points — well in line with the recent average. The problem is that almost all of the misses were in the same direction. That reduces the benefit of aggregating or averaging different polls together. It’s crucially important for psephologists to recognize that the error in polls is often correlated. It’s correlated both within states (literally every nonpartisan poll called the Maryland governor’s race wrong, for example) and among them (misses often come in the same direction in most or all close races across the country).

This is something we’ve studied a lot in constructing the FiveThirtyEight model, and it’s something we’ll take another look at before 2016. It may be that pollster “herding” — the tendency of polls to mirror one another’s results rather than being independent — has become a more pronounced problem. Polling aggregators, including FiveThirtyEight, may be contributing to it. A fly-by-night pollster using a dubious methodology can look up the FiveThirtyEight or Upshot or HuffPost Pollster or Real Clear Politics polling consensus and tweak their assumptions so as to match it — but sometimes the polling consensus is wrong.
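To make Silver's point concrete, here is a minimal simulation sketch. It is not the FiveThirtyEight model; the error sizes, the split between shared and poll-specific error, and the herding weight are all assumptions chosen for illustration. When every poll in a race shares a common miss, averaging ten of them barely helps, and herding makes the average even less informative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_polls = 50_000, 10
total_sd = 5.0   # assumed error of a single poll, in points (illustrative, not an estimate)

def ten_poll_average_error(shared_sd, herd_weight=0.0):
    """Mean absolute error of a 10-poll average under a given error structure."""
    own_sd = np.sqrt(total_sd**2 - shared_sd**2)        # poll-specific noise
    shared = rng.normal(0.0, shared_sd, (n_sims, 1))    # race-wide miss common to all polls
    polls = np.empty((n_sims, n_polls))
    for i in range(n_polls):
        own = shared[:, 0] + rng.normal(0.0, own_sd, n_sims)
        if i == 0 or herd_weight == 0.0:
            polls[:, i] = own
        else:                                           # later polls lean on the consensus
            consensus = polls[:, :i].mean(axis=1)
            polls[:, i] = herd_weight * consensus + (1 - herd_weight) * own
    return np.abs(polls.mean(axis=1)).mean()

print(f"independent errors:   {ten_poll_average_error(0.0):.1f} pts")
print(f"correlated errors:    {ten_poll_average_error(4.0):.1f} pts")
print(f"correlated + herding: {ten_poll_average_error(4.0, herd_weight=0.7):.1f} pts")
```

Under these particular assumptions, the ten-poll average misses by roughly a point when the errors are independent, but by three to four points when they share a common component.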

Joshua Tucker speculates about why the polls were wrong:

We are living in an era where poll response rates are dropping precipitously, at least for traditional phone-based surveys. This point was dramatically illustrated in a recent Pew Report showing that response rates had fallen from 36 percent in 1997 to 9 percent in 2012.

… [T]here are good reasons to think it is harder to reach young people today using telephone surveys. Of course, pollsters know this and adjust the weights of their surveys accordingly. But with fewer young people in their samples — combined with the possibility that the young people who can be reached by phone are not representative of young people generally — the work that has to be done by these weights grows. Given this bias, I wonder whether pollsters, not wanting to produce a mistaken estimate, overcompensated in their weighting based on the voting patterns observed in the 2012 presidential election.
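A back-of-the-envelope sketch of how much work those weights end up doing; every number here is hypothetical, chosen only for illustration. With only a small share of young respondents reached, each of them is weighted up sharply, and the final estimate moves noticeably with the assumed youth share of the electorate:

```python
def weighted_dem_share(young_in_sample, young_dem, old_dem, assumed_young_share):
    """Reweight a two-group sample so it matches an assumed electorate composition."""
    w_young = assumed_young_share / young_in_sample            # weight on each young respondent
    w_old = (1 - assumed_young_share) / (1 - young_in_sample)  # weight on each older respondent
    return (young_dem * young_in_sample * w_young
            + old_dem * (1 - young_in_sample) * w_old)

# Hypothetical poll: only 8% of respondents are young, so each gets weighted up sharply;
# young respondents are 60% Dem, older respondents 45% Dem.
for youth_share in (0.19, 0.13):   # 2012-like vs. "turnout collapse" assumptions (illustrative)
    est = weighted_dem_share(0.08, 0.60, 0.45, youth_share)
    print(f"assumed youth share {youth_share:.0%}: weighted Dem share = {est:.1%}")
```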

Wang identifies a different culprit:

It has been suggested recently that the polling industry is struggling to reach a representative swath of voters. Low response rates, the increasing use of mobile phones, and hard-to-reach demographics have all been cited as possible sources of bias. However, those difficulties would tend to undersample Democratic voters, which was not the problem this year. Instead, the inaccuracy may have come from what David Wasserman at The Cook Political Report called an “epic turnout collapse” in 2014. And estimating the precise effects of turnout is an older, unsolved problem that looms large for pollsters in every midterm election.
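Here is a rough sketch of why turnout estimation matters so much; the group shares, partisan leans, and turnout rates are all hypothetical. The same underlying preferences imply a noticeably different margin once turnout on one side collapses:

```python
# A sketch of the turnout problem (group shares, leans, and turnout rates are hypothetical):
# the same survey responses imply a different margin once one side's turnout collapses.
groups = {
    # group: (share of registered voters, Dem share of that group's voters)
    "Dem-leaning": (0.50, 0.65),
    "Rep-leaning": (0.50, 0.35),
}

def dem_margin(turnout):
    """Dem-minus-Rep margin given assumed per-group turnout rates."""
    votes = {g: share * turnout[g] for g, (share, _) in groups.items()}
    total = sum(votes.values())
    dem_share = sum(votes[g] * groups[g][1] for g in groups) / total
    return 2 * dem_share - 1

print(f"turnout as the pollster assumed: {dem_margin({'Dem-leaning': 0.45, 'Rep-leaning': 0.45}):+.1%}")
print(f"with a Dem turnout collapse:     {dem_margin({'Dem-leaning': 0.33, 'Rep-leaning': 0.45}):+.1%}")
```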