Jeremy Stahl looks at the polling firm’s analysis of where it went wrong in the 2012 election:

It found that four areas most likely combined to result in the disparity between the polling firm’s final prediction of a 49-48 Romney popular vote win and the actual result of a 51-47 Obama victory. Those four areas were as follows: how it weighted likely voters, underrepresentation of the East and West coasts in geographical controls, underrepresentation of nonwhite voters based on how Gallup determined ethnic backgrounds of survey respondents, and issues in how it contacted landlines that resulted in an “older and more Republican” survey sample. … At the time of the fiercest criticism, Gallup editor-in-chief Frank Newport said that there was “no evidence” that his company’s likely voter models were off. Now he’s saying they’ll do better next time.

Nate wins! Enten indicts the polling industry more generally, noting that “there are proven ways to poll that produce more consistently accurate portrayals of the election than doing single live telephone interviews of a randomly selected population in a national poll”:

The average of polls done in the final week, excluding Gallup and Rasmussen, had Obama’s lead over Romney more than 2pt too low. I might be willing to look the other way, except the polling average in 2000 had George W Bush winning and had an error in the margin of, again, more than 2pt. The margin in 1996 was off by 3pt. The 1980 average saw an error of more than 5pt. The years in between 1980 and 1996 were not much better. In other words, the “high” error in national polling even when taking an average isn’t new; in fact, it seems to be rather consistent over the years. …

The state polling, meanwhile, did not show an analogous large bounce. It consistently had Obama leading in the states he needed to be leading in. Moreover, it showed Obama holding very similar positions to those he did prior to the first debate in the non-battleground states.

Yglesias, meanwhile, looking at the differences between Gallup’s numbers and the Obama campaign’s internal polling, believes that Gallup’s polling “did exactly what it was supposed to do”:

[W]hat [the Obama campaign is] doing isn’t what Gallup is doing which—again—is trying to drum up media interest in Gallup polls. And compared to the Obama numbers, the Gallup numbers are really interesting. You could write lots of articles about those numbers, while the Obama numbers tend to suggest that you shouldn’t bother.

I’m not a huge fan of the “we should be gambling all the time about everything” school of thought, but it would be useful in this realm. If public polls were released by people who were placing large financial bets on the outcome of the campaign, then pollsters would work to purge their models of excessive volatility. But in the world that exists, the incentives are all wrong. “Incumbent President presiding over economic growth and falling unemployment will probably win and nobody’s paying attention to the campaign” is a terrible news story. It just happens to be true.

Nate Cohn thinks Gallup could prove more accurate next time:

The survey is extremely well-funded and has huge samples, especially over a multi-week period. They’ve brought in a highly regarded team of survey methodologists, and their commitment to transparency means that analysts should ultimately be able to confirm that their efforts have paid dividends. It just might be enough to restore confidence in America’s oldest polling firm.