Editor's Note: This column was co-authored by Chris LaCivita
While much of the campaign of 2020 is still being analyzed, litigated, and contested, there is one conclusion on which most observers agree: the public polling results were, by and large, an abject failure.
The fact that the public polling was so consistently wrong is more than a simple academic miss. Public polling colors the landscape. It impacts fundraising. It creates a narrative that, in many instances this year, was far off the mark.
Public polls are more than background noise. As such, the pollsters and academics who provide them should be held to greater accountability and higher standards. We've reached a point in the public political gestalt where polling misses should be followed by something more than academic confabs asking rhetorical questions about "what went wrong in the polls."
Private campaign polls need to be accurate, because millions of dollars of advertising hinge on them. While the same cannot be said of public polls, it is clear that they need to do better, lest they lose what remains of their credibility, if they haven't already.
There has been much discussion, since 2016, of the so-called hidden Trump voters. To be sure, "hidden" voters are not a new phenomenon, and they didn't begin with Donald Trump. The "Bradley effect," named after 1982 California gubernatorial candidate Tom Bradley, refers to polling's social desirability bias, where respondents shade their answers to conform with what they perceive to be the socially desirable response.
There are some pollsters and academics who are skeptical of this social desirability bias. In general terms, they are the ones who keep missing the mark. Accounting for these voters and folding that into a holistic, methodical analysis of a survey is a little bit like Occam's razor: it isn't complicated.
First, the obvious: transparency in public pollsters' methodology and results will be crucial in re-establishing confidence and trust. Media organizations and universities that want to contribute to the polling realm should be required to release all of their data, including their modeling assumptions, sample selection, cross-tabs, partisan breaks, and demographics. This would allow for peer review, as well as give the consumers of the poll a better understanding of the survey's potential biases or limitations.
Second, there is no reason any pollster should not be using a voter list for any telephone survey they conduct. The voter list should include new registrants and less frequent voters, as well as frequent voters. There is no reason for a pollster to rely upon RDD (random digit dialing) methodology if their goal is to "get it right." This addresses much of the concern about ensuring that the right set of voters is being interviewed.
Third, stop the over-reliance on the topline "head to head" ballot question. Here is where the hidden voters vex pollsters. The ballot question is one data point of many and is sometimes the least reliable indicator of the state of the race.
Some questions we follow up with include, "Do you know of anyone who supports (Trump/Biden) but refuses to say so publicly?" and "Without saying yes or no, is there a chance that this might describe you?" This begins to pierce the veil. The cross-tabulated responses to these questions should be analyzed in conjunction with the crosstabs on the ballot question for a much finer point on where the race stands.
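For readers who want to see what that crosstab work looks like in practice, here is a minimal sketch in Python. The respondent-level data, field names, and answer categories are all ours, invented purely for illustration; they do not come from an actual survey file.

```python
# A minimal sketch of cross-tabulating the "shy voter" follow-up against
# the ballot question. All data, field names, and categories here are
# hypothetical, invented purely for illustration.
from collections import Counter

respondents = [
    {"ballot": "Trump",   "might_be_you": "no chance"},
    {"ballot": "Biden",   "might_be_you": "no chance"},
    {"ballot": "Refused", "might_be_you": "a chance"},
    {"ballot": "Refused", "might_be_you": "no chance"},
    {"ballot": "Biden",   "might_be_you": "a chance"},
]

# Cross-tab: how each ballot group answered the follow-up question.
crosstab = Counter((r["ballot"], r["might_be_you"]) for r in respondents)

# One signal of hidden support: the share of ballot-refusers who concede
# "a chance" that the shy-voter description fits them.
refusers = [r for r in respondents if r["ballot"] == "Refused"]
shy_share = sum(r["might_be_you"] == "a chance" for r in refusers) / len(refusers)

print(crosstab)
print(f"Refusers conceding 'a chance': {shy_share:.0%}")  # 50%
```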
Another way to gain a finer understanding of the status of the race is to ask a series of "who do you trust on...?" questions. Issues might include COVID, the economy, jobs and trade deals, foreign policy, and others. Summarizing the "Trump trust" and "Biden trust" numbers yields significant insight into voter psychology, perception, and motivation.
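To make the summary step concrete, a short sketch follows. The issue list tracks the column, but every trust number below is invented for illustration; none of these figures come from an actual survey.

```python
# Hypothetical "who do you trust on..." battery; every number below is
# invented for illustration only.
trust = {
    "COVID":           {"Trump": 44, "Biden": 50},
    "Economy":         {"Trump": 53, "Biden": 42},
    "Jobs and trade":  {"Trump": 51, "Biden": 44},
    "Foreign policy":  {"Trump": 46, "Biden": 47},
}

# Summarize each candidate's standing across the whole battery.
summary = {
    cand: sum(issue[cand] for issue in trust.values()) / len(trust)
    for cand in ("Trump", "Biden")
}

print(summary)  # {'Trump': 48.5, 'Biden': 45.75}
```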
Another crucial step: figuring out and allocating the distribution of undecided voters in a poll. Given that "undecided" is not a choice on Election Day or on a mail ballot, a careful analysis of undecided voters is required for a predictive model. How did they answer the generic ballot? What is their image of each candidate? What is their partisan affiliation? Their ideology? Which candidate do they trust more on the issue(s) most important to them?
Our final Florida survey showed a topline ballot of 45 percent for Trump and 43 percent for Biden. But what of the remaining 12 percent of the electorate? Only a smattering could be seen as supporting one of the other candidates. An analysis of the undecided voters, and of those who refused to answer the question, led us to believe that Trump would carry the state by 4 points, 52 percent to 48 percent.
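For the analytically inclined, the arithmetic of that projection can be sketched in a few lines of Python. The 7/12-to-5/12 split below is a hypothetical allocation chosen only to show the mechanics of moving from a 45-43 topline to a 52-48 projection; in practice, the leans come out of the crosstab work described above (generic ballot, candidate images, partisanship, issue trust).

```python
# Sketch of reallocating the undecided/refused share of a topline ballot.
# The lean fractions are hypothetical, for illustration only.

def project_final(topline, undecided_lean):
    """Distribute the undecided share according to estimated leans."""
    undecided = 100 - sum(topline.values())
    return {
        cand: round(share + undecided * undecided_lean.get(cand, 0.0), 1)
        for cand, share in topline.items()
    }

topline = {"Trump": 45.0, "Biden": 43.0}   # head-to-head ballot, in percent
lean = {"Trump": 7 / 12, "Biden": 5 / 12}  # hypothetical undecided leans

print(project_final(topline, lean))  # {'Trump': 52.0, 'Biden': 48.0}
```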
The problem with these solutions? They require analysis. Amid manic news cycles, tight deadlines, and the desire to be the first to post new “information,” there is a rush to publish head-to-head toplines that are, at best, incomplete and, at worst, dreadfully misleading and wrong. With so many misses, over and over again, it is not surprising that public pollsters, pundits, professional prognosticators, and the media now face existential questions.
Adam Geller served as the Trump-Pence campaign pollster in 2016. In 2020 he polled for the pro-Trump America First super PAC and the Preserve America PAC.
Chris LaCivita is a national GOP strategist. He was the lead strategist for Swift Boat Veterans for Truth, which helped President Bush win re-election in 2004, and he is the president of the pro-Trump Preserve America PAC.