Cue socially awkward victory lap

So the votes are in and President Obama has won a second term. I’ll leave the prognostication about the future of America to others. I want to talk about poll averages!

The media assessment of the US campaign has had a very clear split. Almost all the TV talking heads and newspaper columnists said the race was “too close to call” and “impossible to pick in advance,” and that you needed to “stay tuned to see the drama.” The main exceptions were the Friends-of-Fox-and-Friends, who have been confidently predicting a solid Romney victory for weeks. I suspect both sets of journalists were, consciously or not, talking up the kind of campaign that maximized their ratings.

On the other hand, all of the statisticians, political scientists, and other nerds who pore over the polling numbers said something different: the race was close, but not too close to call. They said Obama would win the electoral college decisively, with 300 to 330 votes, and narrowly win the popular vote, too. All the analysts said the same thing, leading some TV talking heads to say the analysts were crazy or dangerous.

And then the analysts were bang on the money.

The accuracy of their forecasts was uncanny. All the final state-by-state forecasts based on polling averages called all fifty states correctly and, of course, Washington DC as well. When Nate Silver at 538 debuted his poll averaging in the 2008 election, he called 49 of the fifty states right. DC, too. And the states where their predictions expressed the most uncertainty were precisely the states where the vote was closest. One of the poll aggregator sites, Drew Linzer’s Votamatic, has made the correct forecast consistently for over a month, through all the supposed twists and turns of the campaign end-game.

To be sure, predicting many of the state-by-state results isn't hard: everyone knew California would vote for Obama, and Texas for Romney, before the campaign even began. But in the ten or so truly competitive states in the election, the poll aggregators went 10-for-10, and Silver went 9-for-10 in 2008 as well. Those are truly excellent success rates in close electoral contests, rates almost all the TV pundits would fail to match.

Silver also predicted the popular vote margin, by the way. He predicted Obama +2.5%; as of this writing, and with almost all the votes in, the popular vote margin is Obama +2.2%. Uncanny.

There are a couple of us in New Zealand who have tried to import this poll aggregation idea into our own politics. I run an ongoing poll of polls at this site, David Farrar does one at his curiablog site, and Danyl McLaughlin hosts one as well. The New Zealand polls-of-polls have performed creditably in the last two elections, as I document in the VUW series of post-election books, though we are not the oracles that our American counterparts appear to be. I think there are at least two reasons for that:

  1. Speaking only for myself, I do not think my statistical intuition or technical skills rival those of Silver, Linzer, Stanford’s Simon Jackman, or Princeton’s Sam Wang in the US. I am not even close. I know this because Linzer is a friend of mine, and I know some of Jackman’s work, too. Both are wizards. I’ll let David or Danyl puff themselves up on this front if they wish.
  2. The volume of raw information coming into poll aggregators in New Zealand is tiny compared to the US. In the month prior to the 2011 New Zealand election, there were fourteen published polls. In the final part of this US campaign, there were often twenty state- and national-level polls coming in each day. So there is 30-50 times as much information going into their aggregators as goes into ours. More information going into an aggregating system generally means more accurate information coming out, as the back-of-envelope sketch below illustrates.
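
To make that intuition concrete, here is a minimal back-of-envelope sketch. The numbers are my own assumptions, not anything published by the aggregators: treat each poll as an independent estimate of the true vote share with a standard error of about 1.5 points, and average n of them. Real polls carry house effects and correlated errors, so this only illustrates the scale of the difference.

```python
import math

# Assumed figures for illustration (not from any aggregator): each poll is an
# independent estimate with a standard error of ~1.5 points, and the aggregate
# is a simple average, whose error shrinks roughly with sqrt(number of polls).

single_poll_se = 1.5  # approx. standard error (points) of one ~1,000-person poll

for label, n_polls in [("NZ, month before the 2011 election", 14),
                       ("US, final month at ~20 polls/day", 20 * 30)]:
    aggregate_se = single_poll_se / math.sqrt(n_polls)
    print(f"{label}: {n_polls} polls -> aggregate s.e. ~ {aggregate_se:.2f} points")
```

On those assumptions the US aggregators start with raw material roughly six to seven times more precise than ours, before any modelling cleverness is applied.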

Nonetheless I think there is a lot that the New Zealand punditocracy can learn from this election in the USA. The most important lesson is that the data really does matter, and it has a neutral quality that a journalistic conversation with a partisan official usually lacks. Numbers can cut through the spin and reveal the truth much better than multiple spins from different angles can. (This lesson extends beyond just horse-race coverage. Any time I see the all-too-common journalistic frame “He said X. She said Y. So who knows?” I feel cheated. Why did I watch all those ads to pay the salary of someone who says that is journalism?)

The second lesson is that while individual polls can bounce around all over the place, the average of many polls from different sources (often collected with slightly different methods) is fairly stable, and a reasonably reliable guide.
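
A minimal simulation makes the point. The numbers here are made up for illustration (a notional 52 per cent "true" support level and typical sampling noise), not any real polling series:

```python
import random

random.seed(1)

# Hypothetical series: true support sits at 52%, each poll reports that figure
# plus sampling noise (s.e. ~1.5 points), and we track a running average of
# every poll published so far.

true_support = 52.0
poll_se = 1.5

polls, running_avg = [], []
for week in range(12):
    poll = random.gauss(true_support, poll_se)
    polls.append(poll)
    running_avg.append(sum(polls) / len(polls))

print("individual polls:", [round(p, 1) for p in polls])
print("running average: ", [round(a, 1) for a in running_avg])
# The single polls wander a point or two either side of 52; the running
# average settles near 52 after a handful of polls and barely moves thereafter.
```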

Third, the common practice among political journalists of generating a view or “narrative” about a campaign based on one or two polls can often produce dangerously misleading coverage of the democratic process. CNN’s selective airing of polls showing a popular vote tie may have served to get more people to watch CNN, but it also misinformed Americans about the true state of the election. It was cynical manipulation of the public for commercial ends.

And the practice among some conservative pundits of seizing on the polls that showed the race tightening after the first debate to declare that Romney had momentum, while pointedly ignoring the later polls showing that momentum stalled at the second debate, was equally bad. Declaring Romney’s momentum alive-and-well long after it was actually dead-and-buried was worse than providing a partisan “spin”; the pundits who did it wore partisan “blinkers.”

The US is not New Zealand. There are important differences. But there are enough similarities that we can learn from mistakes made over there. Our pundits should grasp that opportunity today.

Comments (14)

by Will de Cleene on November 08, 2012

The manipulation of big data by the Obama campaign was fascinating, as explained by Michael Cornfield (Institute for Politics, Democracy, and the Internet at George Washington University):

http://podcast.radionz.co.nz/ntn/ntn-20121106-0932-the_digital_strategie...

by Conor Roberts on November 08, 2012

Good post. First rule in politics is learn to count. It's been interesting to watch the campaign narrative when it did follow the polls and then when it didn't.

Anyway, here's the inevitable post about which of the public polls came out closest: http://www.dailykos.com/story/2012/11/07/1158157/-Most-accurate-national...

And I noticed this gossip post on talkingpointsmemo saying Obama's internal polling was 0.1% off in battleground states: http://talkingpointsmemo.com/archives/2012/11/what_bioatch.php?ref=fpblg
by John Norman on November 08, 2012

Hi there. I'd be interested to know what you think Nate took from this outcome. Not entirely off-topic: how did the Charter go in Georgia (was it?)
by Chris Trotter on November 08, 2012

Just re-read your posting of 21/11/11, Rob.

Nate Silver you ain't.

by Rob Salmond on November 08, 2012

Chris Trotter: Just re-read your posting of 25/11/11, where you failed to make any prediction at all. Very brave.

by Rob Salmond on November 08, 2012

@Will: You are certainly right that the poll aggregation people are not the only ones making large technical advances in election campaigning.

@Conor: I saw those. Good news for the Obama internals, although being out by 0.1% vs 0.7% in a single poll is probably more luck than anything.

@John: I think Nate is taking record book sales today! Also, not sure what Georgia Charter you're referring to?

by Graeme Edgeler on November 08, 2012

"Nate Silver you ain't."

Nate Silver isn't even Nate Silver. Though he predicted all 51 electoral college races, he was also ~88% sure he'd get at least one of them wrong.
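
(A rough back-of-envelope, with invented call probabilities rather than anything from Silver's actual model, shows how a figure like that can arise: the chance of a perfect 51-for-51 sweep is the product of the confidence behind every individual call, and that product shrinks quickly.)

```python
# Invented numbers for illustration only -- not Silver's published probabilities.
# Suppose 40 "safe" calls at 99% confidence, 8 "lean" calls at 95%, and
# 3 genuine toss-ups at 65%. Every call can still come out right even though a
# clean sweep is unlikely in advance.

p_all_correct = (0.99 ** 40) * (0.95 ** 8) * (0.65 ** 3)
print(f"P(all 51 calls correct) ~ {p_all_correct:.0%}")      # ~12%
print(f"P(at least one miss)    ~ {1 - p_all_correct:.0%}")  # ~88%
```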

by Will de Cleene on November 08, 2012

My money's on the mass customisation of big data campaigns, not on polls. Do we live in a Post-Bernays media yet?

by BeShakey on November 08, 2012

On the other hand, the electoral system in New Zealand should make it a lot easier to make predictions. Plus, there are some things Silver has to do that don't much matter here (whatever the equivalent of his state characteristics would be). There'd be a lot of technical things that might not be that hard to import to New Zealand (like calculating house effects for each of the polling companies). So I can't see why (at least in principle, I know it'd be time consuming) someone here couldn't do something significantly better than what we have now (that isn't meant as a jab at you or anyone else who spends time creating polls of polls).

by Rob Salmond on November 09, 2012

@BeShakey: Certainly you're right that the overall target is much easier to identify in NZ - all we really need to worry about is the nationwide popular vote. And we, both pollsters and poll aggregators, should be able to do better than we currently do with "likely voter" screens and the like.

Assessing house effects, on the other hand, is much harder in NZ because there are so few real elections to compare polling houses' predictions against. For most firms, the n is 2. (While Colmar Brunton has been around for a while, it substantially changed its polling method in about 2007, so we can only judge its current incarnation against 2008 and 2011.) For the US firms, on the other hand, there are literally hundreds of election results you can use to test for house effects, so long as the firm published a poll about it. There is the nationwide Presidential popular vote, each State-by-State presidential vote, Senate races, House races, Gubernatorial races, etc. And many firms have a long historical track record as well. My view is that in NZ we don't generally have the volume of data to reliably separate small-to-medium house effects from simple sampling error.

(Having said that, I think we can spot *really obvious* house effects in a NZ context, as I did in this article from the Australia and NZ Journal of Statistics: http://robsalmond.com/sites/default/files/Salmond%20ANZJS%202009.pdf )
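
Here is a crude sketch of that volume problem, with assumed figures rather than anything from the paper: if a firm's house effect is estimated as its average error against actual results, and each final pre-election poll carries a sampling error of about 1.5 points, the noise in that estimate only shrinks with the square root of the number of elections.

```python
import math

# Assumed figures for illustration: per-poll sampling error of ~1.5 points, and
# a house effect estimated as the firm's mean error across n real elections.
# Anything much smaller than ~2 standard errors of that mean can't be told
# apart from ordinary sampling noise.

poll_se = 1.5

for firm, n_elections in [("NZ firm, n = 2 elections", 2),
                          ("US firm, n = 300 races polled", 300)]:
    se_of_mean = poll_se / math.sqrt(n_elections)
    detectable = 2 * se_of_mean
    print(f"{firm}: house effects below ~{detectable:.1f} points are lost in the noise")
```

On those rough numbers a NZ firm's lean has to reach a couple of points before it stands out, which is why only the really obvious house effects are detectable here.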

by Andrew Robertson on November 09, 2012

Hi Rob

As you probably know, Colmar Brunton extensively reviewed its political poll following the 2005 election - which included two independent external reviews. Interestingly, weekday vs weekend polling wasn't identified as being a major factor.

Cheers
Andrew

by Rob Salmond on November 09, 2012

Hi Andrew - I was aware of Colmar Brunton's review. I noticed in particular that prior to the review CB polled only on weekdays, whereas after the review they polled on weekdays and weekends. I also noticed that before the review CB was distinctly National-leaning as I theorized in my paper (and others had theorized in other contexts), while after the review CB was not. But I was not aware of the content of the external reviews. - Rob

by Peter Green on November 09, 2012

I'm looking at 2006-9 data right now, and it looks to me like CB still had a pretty strong tilt towards National (although the same could be said for the other polls in that time period except for Roy Morgan and TNS).

by Andrew Robertson on November 09, 2012

Hi Rob

Yes, absolutely, CB did begin polling on weekends following the review.

Cheers
Andrew 
