
EXCERPT FROM OUR NEW BOOK, TRUMPED: POLLING IN THE 2016 ELECTION AND WHAT IT MEANS GOING FORWARD

 

Dear Readers: Our new book on 2016’s remarkable election, Trumped, is now available. Trumped features some of the nation’s sharpest political reporters and analysts breaking down an election that truly broke all the rules.

The following is taken from Chapter 10 of the book, authored by Ariel Edwards-Levy and Natalie Jackson of Huffington Post, and Janie Velencia, formerly of Huffington Post. The authors write about political polling in the 2016 cycle and the challenges facing the industry. In this excerpt, they argue that the issue and approval polls that we see on an almost daily basis are still good barometers of public opinion.

Crystal Ball subscribers can get a special discount on Trumped: The 2016 Election That Broke All the Rules from publisher Rowman and Littlefield. Use code 4S17SBTOCB at checkout to get the paperback at 30% off the retail price at Rowman’s website.

— The Editors

 

The debate over what caused pollsters to err in 2016 is likely to continue for some time, as is the argument over the extent to which the miss represents a critical failure for the industry rather than simply overconfidence on the part of pundits and forecasters. But regardless of the magnitude of the error, polling systematically overstated the likelihood of a Clinton win.

That’s something pollsters will have to grapple with in the next election. It’s also something that, as the country settles down to the business of governing, raises a more immediate question: how much can polls be trusted to measure the public’s support for policies?

That question is more than academic. While horse-race surveys may command the bulk of attention, polls that gauge the national mood on issues of policy serve at least as important a role in the democratic process. Writing off their results as intrinsically unreliable would potentially leave much of the nation voiceless in the years between elections.

“Public opinion polls are an important form of accountability on our government. They are a kind of check and balance,” Nick Gourevitch, a pollster for the Democratic firm Global Strategy Group, observed following the election. “Public opinion polls help prevent our elected officials from pursuing policies completely at odds with the public’s desires.”

Fortunately, some of the major pitfalls faced by campaign polling are inherently less problematic for policy surveys. Likely voter models — pollsters’ methods for determining which Americans will turn out in the election — were probably a significant source of inaccuracy in 2016’s election surveys.

“Because we can’t know in advance who is actually going to vote, pollsters develop models predicting who is going to vote and what the electorate will look like on Election Day,” analysts at Pew Research explained in a post-election essay. They observed,

This is a notoriously difficult task, and small differences in assumptions can produce sizable differences in election predictions. We may find that the voters that pollsters were expecting, particularly in the Midwestern and Rust Belt states that so defied expectations, were not the ones that showed up. Because many traditional likely-voter models incorporate measures of enthusiasm into their calculus, 2016’s distinctly unenthused electorate — at least on the Democratic side — may have also wreaked some havoc with this aspect of measurement.

Pollsters have adopted a wide variety of metrics to assess respondents’ likelihood of voting. Some, such as Gallup (which chose not to release presidential horse-race polling in 2016), rely on a battery of questions, including asking about a respondent’s self-described past voting behavior and interest in the current election, while others simply ask people whether or not they plan to vote. Another method involves matching survey data to voter files, which contain information on individuals’ vote histories.
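
As a rough illustration of how a battery-style screen works in practice, the sketch below scores respondents on a simple turnout battery and keeps only those above a cutoff; the questions, point values, and threshold are invented for illustration and are not drawn from any pollster’s actual model.

```python
# Illustrative sketch of a battery-style likely voter screen.
# The questions, point values, and cutoff are hypothetical,
# not any pollster's actual model.

def turnout_score(respondent):
    """Score a respondent on a simple four-item turnout battery."""
    score = 0
    if respondent.get("voted_last_election"):   # self-reported past voting
        score += 1
    if respondent.get("interest") == "high":    # interest in the current election
        score += 1
    if respondent.get("intends_to_vote"):       # stated intention to vote
        score += 1
    if respondent.get("knows_polling_place"):   # familiarity with the process
        score += 1
    return score

def likely_voters(sample, cutoff=3):
    """Keep respondents who meet an (arbitrary) cutoff."""
    return [r for r in sample if turnout_score(r) >= cutoff]

sample = [
    {"voted_last_election": True, "interest": "high",
     "intends_to_vote": True, "knows_polling_place": True},
    {"voted_last_election": False, "interest": "low",
     "intends_to_vote": True, "knows_polling_place": False},
]
print(len(likely_voters(sample)))  # -> 1
```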

To underscore how much pollsters’ decisions can affect their results, the New York Times gave raw polling data from Florida to four different pollsters in September 2016 and asked them to analyze it. The participants diverged on how they adjusted their samples and identified likely voters, with results ranging from a one-point lead for Trump to a four-point lead for Clinton.

Issue polls, which generally seek to represent all Americans, rather than a given year’s electorate, require less extrapolation. Pollsters still have to consider how to weight their sample to make it representative of the nation as a whole, and which demographic factors to consider when doing so. But while no one can know in advance who’ll turn out to vote in an election, those trying to reflect the population of the United States can at least rely on census data as a target.
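
As a rough illustration of that weighting step, the sketch below adjusts a toy sample to match census-style targets on a single variable; the population shares and responses are invented, and pollsters in practice typically weight on several demographics at once.

```python
# Minimal sketch of weighting a sample to census-style targets on one
# variable (age group). All shares below are invented placeholders.

from collections import Counter

census_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}   # assumed targets

sample = ["18-34", "35-64", "35-64", "65+", "35-64", "18-34", "65+", "65+"]

sample_share = {group: n / len(sample) for group, n in Counter(sample).items()}

# Each respondent's weight is the population share divided by the sample share:
# underrepresented groups get weights above 1, overrepresented groups below 1.
weights = {group: census_share[group] / sample_share[group] for group in census_share}

for group, weight in weights.items():
    print(f"{group}: weight {weight:.2f}")
```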

Another possible factor in the election polling miss that issue polls do not have to worry about is a late shift toward Trump in the few days between final poll releases and Election Night. Such a shift would be unusual — past elections have tended to remain relatively stable in their final stages.

But exit polling indicates that late-deciding voters in some key states, rather than breaking evenly between the candidates, split heavily for Trump. In Michigan, voters who said they’d made up their mind in the last week before the election went for Trump over Clinton by 11 points, compared to an even split among those who reported deciding earlier. In Pennsylvania, voters who decided in the final week went to Trump by a 17-point margin, versus a two-point edge among those who decided previously.

In Wisconsin, where not a single pre-election poll showed Trump ahead, the difference was even more stark: the 14% of voters who said they’d decided in the last week before the election preferred Trump to Clinton by 29 points, while those who decided earlier favored Clinton by a two-point margin.

Not all pollsters saw evidence of such a swing. But regardless, such issues of timing present less of a problem for issue polling, which doesn’t revolve around capturing Americans’ opinions during such a narrowly defined time period as an election campaign. Although some surveys may be planned to coincide with specific dates, such as the State of the Union, in general, there’s not a hard-and-fast deadline after which Americans’ views on an issue stop mattering.

Finally, even polling errors large enough to put horse-race surveys at odds with the results of an election may have less meaningful consequences when it comes to interpreting public opinion. Differences of two points in election surveys can change the outcome, but a two-point difference in opinion on an issue isn’t usually substantial.

HuffPost Pollster’s final aggregate of national polls gave Clinton a 5.3-point lead over Trump. The final tally as of early 2017 put Clinton up 2.1 points in the popular vote, an error of 3.2 points. In horse-race surveys, such a margin can mean the difference between winning and losing. In opinion polls, such a distinction may be far less politically meaningful.

To take one example, Barack Obama’s net approval rating — +16.4 points at the end of his term, per HuffPost Pollster’s aggregate [as of Jan. 3, 2017] — means that he left office on a relative high note, in comparison both to his earlier second-term numbers and to his recent presidential predecessors. That would remain broadly the case if his net approval rating were, instead, 3.2 percentage points lower at +13.2, or, indeed, if it were 3.2 percentage points higher at +19.6.

For another example, a post-election poll from Quinnipiac University found that Americans oppose building a wall along the border with Mexico, one of Donald Trump’s signature policy proposals, by a 13-point margin, with 42% in support and 55% in opposition. If Americans instead opposed such a project by a 9.8-point margin, it would remain reasonable to conclude that such a project would be relatively unpopular.

The very real possibility of such errors, however, serves as a reminder that results separated by only a few points should be treated by readers and pundits as virtually identical. The baseline margin of error of most polls stands at around plus or minus three percentage points, and that figure accounts only for how much the numbers might vary due to the random chance of who is selected to participate in the poll, before considering other potential sources of error.

That is a good reason to be cautious about making too much of any purported shift in public opinion that amounts to just a point or two of variation, which is as likely to represent random noise as a notable change. It should also serve as a reminder to be wary of apparent differences in opinion between subgroups, whether Republicans and Democrats, millennials and baby boomers, or white and black Americans. Because such groups make up only part of each survey’s sample, estimates for them carry even higher margins of error.
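
As a back-of-the-envelope check, the calculation below shows where a roughly three-point margin of error comes from under simple random sampling, and how much it grows for a smaller subgroup; the design effects of real surveys generally push these figures somewhat higher.

```python
# Back-of-the-envelope 95% margin of error under simple random sampling:
# MOE = z * sqrt(p * (1 - p) / n), widest when p = 0.5. Real surveys'
# design effects usually make the effective error somewhat larger.

import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for sample size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(f"Full sample of 1,000: +/- {margin_of_error(1000):.1f} points")  # about 3.1
print(f"Subgroup of 250:      +/- {margin_of_error(250):.1f} points")   # about 6.2
```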

Challenges for public policy polls

While policy polls may be spared some of the problems afflicting horse-race surveys, they are potentially subject to a number of serious issues of their own, whose presence should inform the way their results are interpreted.

Among them: elections force the public into making quantifiable decisions. In 2016, people chose to vote for Clinton, Trump, a third-party candidate, or not to vote at all. In comparison, many people never adopt strong positions on current events or policy issues, especially those that are complicated or receive limited news coverage. This leaves respondents malleable, making them more likely to support a bill if they’re told it’s endorsed by a politician in their party, or to reject it if told that it’s backed by an opponent.

The 1975 Public Affairs Act, to take one classic example, doesn’t exist. But Republicans are more likely to oppose repealing the fictitious bill when they’re told that President Barack Obama wants to do so, while Democrats object when they’re told it’s a Republican proposal.

“The lesson here is straightforward: Many poll respondents will offer opinions on issues they know little or nothing about, making it difficult to distinguish pre-existing opinions from reactions formed on the basis of the words of the question,” pollsters Mark Blumenthal and Emily Swanson wrote in 2013. “Poll respondents will find it even easier to offer an opinion when informed where well-known political leaders stand on the issue. It is always best when interpreting survey results to consider how familiar Americans are with the issue, how many are reluctant to offer an opinion and how those who are closely following an issue differ from those who are not.”

Issue polling may also be deeply affected by how pollsters choose to word their questions. That problem is virtually nonexistent in horse-race polls, where wording generally reflects the questions people will see on their ballot, allowing for relatively uniform phrasing.

On policy issues, by contrast, there’s often no clear template for wording, and small changes can carry outsized effects. Experiments that test reactions to changes in wording shed light on how malleable opinions can be, especially when they are tied to partisanship.

In one survey conducted by the Huffington Post and YouGov, for example, Republicans asked to compare their current financial situation to “when President Obama was first elected” were 19 points likelier than those asked about “the year 2008” to say that their finances had gotten worse. Democrats who saw Obama’s name mentioned, in contrast, were 20 points less likely than those who did not to admit that income inequality had risen during his tenure.
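
The sketch below shows, with invented numbers, how such a split-sample wording experiment is typically read: each respondent is randomly shown one of the two wordings, and the gap between the groups’ answers estimates the effect of the wording itself.

```python
# Sketch of analyzing a split-sample (question wording) experiment.
# Each respondent is randomly shown one of two wordings; the answers
# below are invented purely for illustration.

responses = {
    "the year 2008":                ["worse", "better", "same", "worse", "better"],
    "when Obama was first elected": ["worse", "worse", "worse", "same", "better"],
}

def pct_worse(answers):
    """Share of a group answering 'worse', in percentage points."""
    return 100 * answers.count("worse") / len(answers)

effect = (pct_worse(responses["when Obama was first elected"])
          - pct_worse(responses["the year 2008"]))
print(f"Wording effect: {effect:+.0f} points more likely to say 'worse'")  # +20
```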

Some surveys intentionally rely on such effects. Campaign pollsters often conduct polls with loaded language to test which messages are most effective, while interest groups may do so in order to shore up support for their positions. But even when public pollsters, such as media outlets, think tanks, or universities, are simply trying to conduct a straightforward reading of public opinion, there is often little consensus on how a question should be framed or how much detail about a current event should be provided.

One of Trump’s first acts as president-elect was to announce a deal to keep several hundred jobs at an Indiana factory rather than have the company move all of its production to Mexico. Two online surveys measuring public opinion on the deal not only described it in different ways but also asked fundamentally different questions about its effects. A Morning Consult survey found that the deal left 60% of voters feeling more favorably toward Trump, while an Economist/YouGov poll found that just 38% of Americans approved of the deal.

Some of that difference likely comes down to how each of the questions was framed. The Morning Consult poll described the deal in broadly upbeat terms, telling voters that Carrier had “decided to keep roughly 1,000 manufacturing jobs in the state of Indiana rather than moving them to Mexico after forming an agreement with President-elect Donald Trump and Vice President-elect Mike Pence.” The Economist/YouGov survey, in contrast, asked about “a deal Donald Trump negotiated with Carrier, an air conditioning equipment manufacturer, to reduce the number of jobs the company had planned to relocate from a plant in Indiana to Mexico.”

More fundamentally, the Morning Consult survey asked respondents how the deal affected their view of Trump, while the Economist/YouGov poll asked for their opinion of the deal itself. But whether one survey did a better job of capturing the announcement’s real effect on public opinion is impossible to measure quantitatively.

Election surveys, for all their flaws, can be validated or invalidated by the results of the elections they seek to measure. In contrast, there’s no such test that can tell us which pollster most accurately measures, say, Americans’ “true” level of admiration for a president or support for a border wall — or whether pollsters are getting it right at all.