This paper was presented August 7, 2016 at the Association for Education in Journalism and Mass Communication Annual Conference in Minneapolis, Minn. An early version was presented February 27, 2016 at the Association for Education in Journalism and Mass Communication Midwinter Conference in Norman, Okla.
The 2016 presidential campaign has been unique thus far, to say the least. It makes the 2012 cycle look downright boring. Yet, one aspect of the 2012 campaign that stood out to me was the use of public opinion polling by media to frame the race. Leading up to Election Day, it seemed pretty clear that President Obama would have the electoral votes to win a second term. Election forecasting guru Nate Silver thought so, and most polling data agreed.
However, a completely different picture was painted in conservative media – at least in a few anecdotal instances. Fox News contributor Dick Morris infamously predicted a “landslide” victory for Mitt Romney, while Karl Rove’s refusal to accept Obama’s victory-sealing win in Ohio made for awkward Election Night coverage for the cable news ratings leader. Both had evidence on their side – poll numbers that made it look like Romney was indeed going to win Ohio and the White House. But those polls were in the minority, and they were wrong.
This matters. Previous research suggests that publishing of public opinion polls can actually influence public opinion, and eventually, voting. To be fair, these findings have always been tough to untangle. Does a poll showing a candidate with a big lead create a bandwagon effect where everyone wants to vote for the inevitable winner, or does it spur an underdog effect in which the losing candidate’s supporters mobilize to close the gap? Does depicting a close race boost turnout, while voters skip out on a projected blowout? There’s evidence of all of these.
With so many polls to choose from, I wanted to see how media selected public opinion polls. So, I crafted a method to test media poll mentions against data from poll aggregator RealClearPolitics (RCP). Media poll references could mirror the RCP average of available data. However, they could also stray toward favoring a particular candidate, potentially indicating a partisan media strategy. Or, they could move closer to a dead-heat line, making the “horserace” seem tighter than it really is. Apart from margins, major media entities could also lean on their own sponsored polls, earning institutional legitimacy by owning a standard of accuracy in political reporting.
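For readers who like to see the logic laid out, the comparison above can be sketched in a few lines of code. This is only an illustration of the idea, not the paper’s actual coding scheme: the function name, the 0.5-point tolerance, and the margin values are all hypothetical.

```python
# Hypothetical sketch: classify a media poll mention relative to the RCP average.
# Margins are (Obama minus Romney) in points; 0 is the dead-heat line.

def classify_mention(cited_margin: float, rcp_average: float) -> str:
    """Classify a cited poll margin against the aggregate (illustrative only)."""
    if abs(cited_margin - rcp_average) < 0.5:   # tolerance is an assumption
        return "mirrors RCP average"
    if cited_margin * rcp_average < 0:          # crosses the dead-heat line
        return "favors the trailing candidate"
    if abs(cited_margin) < abs(rcp_average):    # closer to zero than the average
        return "tightens the horserace"
    return "favors the leading candidate"

# Hypothetical example: RCP shows Obama +3, but a show cites a poll at Obama +1.
print(classify_mention(1.0, 3.0))  # tightens the horserace
```

The key distinction the study cares about is the middle case: a systematic drift of cited margins toward the dead-heat line, independent of which candidate benefits.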
As an initial test of this method, I looked at transcripts from the four broadcast Sunday morning political talk shows – ABC’s This Week, CBS’s Face the Nation, NBC’s Meet the Press, and FOX News Sunday. Mentions of polls were coded from June 2 (the first Sunday after Romney secured the GOP nomination) to Election Day.
There were a lot of variables beyond the basic method I shared above. But for expediency’s sake, let’s hit some of the more significant results:
– Every Sunday morning show pushed the horserace. Compared to polling data, the race was disproportionately depicted as a toss-up, as every network downplayed Obama’s lead.
– However, this didn’t seem to be an attempt to bias coverage in favor of Romney, as comments surrounding poll references were fairly evenly split between the two candidates. FOX News Sunday was the only program whose average poll margins favored Romney, but not by a substantial amount, and not significantly more than other programs. Because the dead-heat line, the RCP average, and the Romney-leaning margins all sat close together, statistical tests were complicated and the findings more difficult to interpret.
– Two programs – Meet the Press and This Week – relied heavily on their own in-house polls. Meet the Press in particular showed signs of the legitimacy strategy: it spent entire segments detailing results from “our” polls, while completely ignoring public opinion polling in weeks without a new in-house poll.
– Based on reaction I heard in conservative media, I tossed in a research question to see how polling critiques were used (can you tell this is a preliminary study?). Sure enough, Republican-affiliated guests on the Sunday morning programs often gave negative critiques of polls – that they were flawed methodologically, that the polling firm had a reputation for bias, or that polls were simply irrelevant to predicting election outcomes. However, conservatives were far from alone. Even though polls fairly consistently portrayed Obama as the leading candidate, 100% of polling critiques made by Democrat-affiliated guests were negative. Only show moderators – presumably protecting the legitimacy of their polling – presented more neutral critiques, attempting to explain method and margins of error, for instance.
– The study also produced several helpful methodological lessons.
– First, polls were rarely mentioned by name, and margins were rarely given. Instead, it was far more common to see a statement like “polls show Obama with a healthy lead” with no specific support. As a result, the overwhelming majority of the data was unusable for margin-related hypothesis testing. That was surprising for the wonky Sunday morning shows, and I suspect it would be even worse for other types of news programming. Statistical power was, and could continue to be, a real challenge.
– Second, RCP’s secretive methods for producing its poll averages are problematic. RCP has been accused of arbitrarily changing which polls enter the average and how long they remain in it, and that certainly seemed to occur in 2012. For instance, one of the most talked-about polls of the campaign was a Gallup poll showing Romney with a six-point advantage. Despite including numerous Gallup polls in its index, RCP did not use that particular one. A more transparent poll aggregator would improve the reliability and validity of this type of research.
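To make the transparency point concrete, here is one fully disclosed aggregation rule a researcher could apply: average, with equal weight, every poll whose field period ended within a fixed window. This is not RCP’s method (which is the problem), and the window length and poll margins below are hypothetical.

```python
# Hypothetical sketch of a transparent aggregation rule: equal-weight average
# of all polls ending within the last `window_days` days. Margins are
# (Obama minus Romney) in points; negative values favor Romney.

from datetime import date, timedelta

def transparent_average(polls, as_of, window_days=14):
    """Equal-weight average of poll margins within the window (illustrative)."""
    cutoff = as_of - timedelta(days=window_days)
    margins = [margin for end_date, margin in polls if end_date >= cutoff]
    return sum(margins) / len(margins) if margins else None

# Hypothetical margins, including an outlier like the Gallup Romney +6 poll.
polls = [
    (date(2012, 10, 20), 3.0),
    (date(2012, 10, 25), -6.0),
    (date(2012, 10, 30), 2.0),
]
print(transparent_average(polls, as_of=date(2012, 11, 1)))  # -0.333...
```

The specific rule matters less than the fact that it is stated: with a published window and weighting scheme, anyone can verify why the Gallup outlier was or was not included.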
So, what’s the takeaway? There may well be bias in media selection and presentation of public opinion, but it’s not overtly partisan. Instead, even the signature political strategy programs on television fall prey to hyping the horserace. Some use their own polls to do so, presenting them in a manner that heightens the network’s legitimacy. And all sides will bash polling when it’s a convenient talking point.
My thanks to the Mass Communication and Society Division of AEJMC for awarding this project their top paper abstract award at the 2016 Midwinter Conference, and to the Electronic News Division for hosting the updated piece at the annual conference.