Saturday, November 12, 2016

The Inexcusable Arrogance of The Pundits



Tuesday, Donald Trump defeated Hillary Clinton to become President-elect of the United States.  Trump celebrated the win late that night, and Ms. Clinton conceded early Wednesday morning, but as the week ended the major pundits were largely unwilling to admit that they were wrong.  Excuses for blowing the call ranged from blaming late voter decisions to complex explanations that – statistically – the pundits weren’t that far off.

For example, Nate Silver (who boasted for four years about how well he predicted state and national results in 2012) presented a weak defense of his statistical model.



Silver also claimed that the results were within the standard margin of error, implying that he didn’t really get it wrong.



Silver gave Trump a 29% chance of winning early Tuesday night.  It’s important to keep in mind that Silver also limited Trump’s chances of winning to 12.6% back on October 18,



and that Silver’s forecast fluctuated as the polls did; Silver tied his forecast to poll accuracy, and even though he claimed to adjust for bias and outliers, he simply failed to consider the effect of groupthink.


Next up is the Huffington Post, which boldly gave Clinton a 98% chance of winning (and Trump only a 2% chance), then blamed the loss on a “black swan event”,


which amounts to claiming no one could have seen it coming.   This would be a lie.

The New York Times gave Clinton an 85% chance of winning the day of the election, down a bit from 93% on October 25.   This equated to giving Trump a 15% chance, up from 7% on the respective dates.





Rather than candidly admit their bias and its results, the NYT actually blamed … the data itself.   Hypocrisy in print, folks.




Larry Sabato, who has made a nice living from predicting elections over the years, actually claimed a 99% success rate in 2004 and 97% in 2012.



Sabato called 347 Electoral Votes for Clinton this year, which cannot be sanely called anything but a faceplant.


Forbes, best-known for business reporting, also got into the election forecast game, and when they got it badly wrong they blamed ‘statistical error’.



And so it goes.  At this writing, exactly none of the people who made money and gained fame from predicting elections has had the guts to plainly admit they got this one completely wrong.


Why should we care?  Because a lot of media paid attention to these pundits all through the election, especially at the end.  They threw out predictions that were clearly way off the mark.  A lot of them have offered excuses, but let’s step back and see why the explanations are worthless.


Silver, for example, goes into great detail about different factors and how they influenced the election results. 


Some of that is interesting reading, but the sum effect is that it comes off as butt-covering, not least because any professional should have properly included such factors in their pre-election forecast.


So what should the forecast have looked like?  To answer that, we need to step back and ask what we expect from a forecast.  A forecast should have general similarity to what actually happens.  For example, in a weather forecast we often hear about, say, a ‘30% chance of rain’.  That’s actually a little vague, since it doesn’t tell us where that rain will happen or when, but if we hear 30%, we would expect some clouds and only in some places.  A completely clear, sunny day or a torrential downpour would mean the forecast was wrong, no matter what explanation the weather guy offered. So the election results can be seen this way:

In a straight look at the Popular Vote, Hillary Clinton claimed 47.8% to Trump’s 47.3%.  Of course, the actual election does not depend on the Popular Vote, but this result is consistent with a national picture, and the main point is that none of the major polls called Trump at 47.3% of the vote.  By this metric, the major polls grade out this way in their calls:

FOX News: Called 44% for Trump (-3.3%), called 48% for Clinton (+0.2%), aggregate (-3.5%)
LA Times: Called 47% for Trump (-0.3%), called 44% for Clinton (-3.8%), aggregate (-4.1%)
ABC/WaPo: Called 43% for Trump (-4.3%), called 47% for Clinton (-0.8%), aggregate (-5.1%)
IBD/TIPP: Called 45% for Trump (-2.3%), called 43% for Clinton (-4.8%), aggregate (-7.1%)
CBS News: Called 41% for Trump (-6.3%), called 45% for Clinton (-2.8%), aggregate (-9.1%)
Bloomberg: Called 41% for Trump (-6.3%), called 44% for Clinton (-3.8%), aggregate (-10.1%)
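The per-poll errors and aggregate misses above are simple arithmetic against the final shares (Trump 47.3%, Clinton 47.8%).  For readers who want to check the math, a quick Python sketch (using the poll numbers as cited in this post) reproduces the table:

```python
# Final shares as cited in this post.
ACTUAL = {"Trump": 47.3, "Clinton": 47.8}

# Each poll's final call, as listed above.
polls = {
    "FOX News":  {"Trump": 44, "Clinton": 48},
    "LA Times":  {"Trump": 47, "Clinton": 44},
    "ABC/WaPo":  {"Trump": 43, "Clinton": 47},
    "IBD/TIPP":  {"Trump": 45, "Clinton": 43},
    "CBS News":  {"Trump": 41, "Clinton": 45},
    "Bloomberg": {"Trump": 41, "Clinton": 44},
}

for name, call in polls.items():
    # Signed error per candidate: poll call minus actual share.
    errors = {c: round(call[c] - ACTUAL[c], 1) for c in call}
    # Aggregate miss: sum of the absolute per-candidate errors.
    aggregate = round(sum(abs(call[c] - ACTUAL[c]) for c in call), 1)
    print(f"{name}: Trump {errors['Trump']:+}%, "
          f"Clinton {errors['Clinton']:+}%, aggregate miss {aggregate}%")
```

The “aggregate” figure is simply the two absolute errors added together, which is why FOX News (3.3 + 0.2 = 3.5) grades best and Bloomberg (6.3 + 3.8 = 10.1) grades worst.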



Pretty much everybody was outside a statistical margin of error (Fox was almost inside that line). No one can claim to have nailed that call, but each poll got close-ish on at least one candidate.  Grade them C’s and D’s at a professional standard.


But Presidential elections depend on winning electoral votes from state contests.  In the end, Trump won 306 electoral votes to Clinton’s 232, or 56.9% of the EV to 43.1%.  No one at all came close to predicting Trump would win nearly 57 percent of the EV.  Absolutely none of the pundits listed above was anywhere close to being right.  If these were students, we’d be comparing different levels of ‘F’ grades on an exam.
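The electoral-vote percentages follow directly from the 538 total electors; a minimal check in Python:

```python
# 538 electors total; final tally as reported (Trump 306, Clinton 232).
TOTAL_EV = 538
trump_ev, clinton_ev = 306, 232

trump_share = 100 * trump_ev / TOTAL_EV      # ≈ 56.9%
clinton_share = 100 * clinton_ev / TOTAL_EV  # ≈ 43.1%

print(f"Trump: {trump_share:.1f}% of the EV, Clinton: {clinton_share:.1f}%")
```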


Again using Real Clear Politics’ published results,


we can see the average vote results by state for each candidate; those averages should give a reasonable picture of each candidate’s support.  Using the vote results by state, Trump claimed an average 48.9% of the vote to Clinton’s 45.2%.  Again, none of the pundits came close to this result.


Pundits will sometimes point to variables, margin of error, and other technicalities to excuse blowing the call. But never forget that the main reason for any forecast is to give you a reasonable expectation of what is coming.  It’s fair (but very rare) for a statistician to admit that he cannot forecast a clear outcome; pay attention here to the fact that both Gallup and Pew refused to publish election predictions this year.  But if a pundit publishes a forecast that projects a clear winner by a wide margin, as Silver, Huffington, the New York Times, Sabato and so on all did, they cannot pretend that they did anything but fail when results are so plainly different from their predictions.  Aggregation is a poor tool in election forecasting, and sooner or later the public should demand better work from people who are happy to take credit and publicity for their projections.



Man up, you wimps.  You blew it.