
Political Weights and Polling Measures


Are the polls broken? Probably most Americans, or at least those paying attention, have been thinking so since Donald Trump surprised just about everyone by getting elected president in 2016. Are they right? Have the polls deteriorated?

Those are questions addressed by G. Elliott Morris in Strength in Numbers: How Polls Work and Why We Need Them. In 2017 he was a junior at the University of Texas; by 2020 he was a data journalist for The Economist. Like most people in his line of work, he concedes there are some problems with polling these days. Despite his liberal views, he admits that “polls have routinely underestimated the attitudes of conservative Americans over the last twenty years,” and that “pre-election polls face severe and prolonged threats from partisan nonresponse.” They were off the mark in 2016, he says, and even more so in 2020. But he insists that polls “are still the best tools we have to gauge support for the actions of the government.”

On that last point Morris is on high ground, but he doesn’t spend much time surveying the terrain. Of course we’d have less information about public opinion without issue polls; more is better than less. But issue poll results can vary widely depending on question wording and question order, and the same results can support quite different conclusions from different analysts. Morris leaves little doubt in which direction his interpretations would bend when he describes “interest groups, lobbyists funded by dark money, demagogic politicians” as the greatest current threats to our political system. It’s not likely he’s talking about teachers’ unions, foundations backing liberal criminal justice procedures, or Joe Biden.

His primary focus is on something he says polls don’t reliably do, or maybe shouldn’t—predict election results. “Do polls predict outcomes?” he asks. “Well, kind of,” he responds. But he’s surely aware that George Gallup’s claims to predict elections, based on his numbers in just three contests (1936, 1940, 1944), undermined his and other pollsters’ reputations after Dewey didn’t defeat Truman in 1948.

In that case Gallup’s last interview was conducted on October 25 and showed Dewey with a five-point lead, smaller than in earlier polls. Any pollster today would keep on polling. But election polling, then and now, has operated within constraints imposed by time and money. Gallup had to use in-person interviewers for at least two decades after his first random sample poll in October 1935, because many Americans had party lines (people Morris’s age might have to Google that one) or didn’t have phones at all. And they had to stop interviewing well before Election Day, because results had to be mailed in and cross-tabulated by hand.

By the time I got into the polling business, with the Democratic pollster Peter Hart in 1974, we still preferred in-person interviews because you could ask more complex questions. But telephone interviews were quicker and (if you could buy time on a mainframe computer) results more easily tabulated. By the 1980s, as Morris notes, phones were universal, and random digit dialing, invented by CBS pollster Warren Mitofsky, ensured acceptably random samples.

By the late 2000s, however, we no longer lived in a country with universal landline telephones and a population that answered the phone. Response rates—the percentage of attempted interviews that were completed—fell below 10 percent. Exit polls, conducted at polling places, routinely oversampled Democrats—most often, as found by Mitofsky (whose work I observed closely in Mexico and Russia as well as the United States), when the interviewers were female graduate students. Absentee and mail-in voting became increasingly common—universal in Oregon, for example. For a time poll results seemed reasonably in line with how people voted. Until 2016, that is.

The heart of Strength in Numbers is Morris’s chapter on how media and Democratic pollsters have moved toward online interviewing and sophisticated computer weighting of the results. This is pretty technical stuff, unless you’re familiar with phrases like “multilevel regression with post-stratification,” but Morris’s prose is reasonably clear. Political pollsters have always tried to replicate the relevant population, conducting extra interviews if necessary or weighting overrepresented respondents’ responses down and underrepresented respondents’ responses up. The difference now is that with robocall or online interviewing, the number of interviews can be vastly increased and computer algorithms can do complex weighting.
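For readers curious what “weighting up” and “weighting down” looks like in practice, here is a minimal sketch of post-stratification, the basic idea behind the computerized reweighting Morris describes. The demographic cells and population shares are illustrative assumptions, not figures from any actual poll:

```python
# Minimal post-stratification sketch: weight each respondent so the
# sample's demographic cells match known population shares (e.g., from
# the census). Cell names and targets below are purely illustrative.
from collections import Counter

def poststratify(respondents, population_shares):
    """Return one weight per respondent.

    Underrepresented cells get weights above 1 (counted more heavily);
    overrepresented cells get weights below 1 (counted less).
    """
    sample_counts = Counter(r["cell"] for r in respondents)
    n = len(respondents)
    weights = []
    for r in respondents:
        sample_share = sample_counts[r["cell"]] / n
        weights.append(population_shares[r["cell"]] / sample_share)
    return weights

# Toy sample: young voters are 1 of 4 respondents but 40% of the
# target population, so the lone young respondent is weighted up.
sample = [{"cell": "young"}, {"cell": "old"}, {"cell": "old"}, {"cell": "old"}]
targets = {"young": 0.4, "old": 0.6}
print([round(w, 2) for w in poststratify(sample, targets)])  # [1.6, 0.8, 0.8, 0.8]
```

The sketch also shows where the Dornsife-style trouble comes from: the rarer a cell is in the sample, the larger its weight, so a single respondent in a tiny cell can swing the weighted total.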

Problems remain. “There is no agreed-upon method for predicting who is a ‘likely voter,’” Morris writes, and quotes Democratic analyst David Shor’s conclusion that “differential nonresponse is shockingly volatile.” Online interviewing doesn’t yield random samples, and weighting to make sure every microsegment of the electorate is proportionately represented can result in a one-person group skewing the overall result. Morris cites the 2016 Dornsife poll, in which one young black male was weighted 30 times more than the usual respondent—which made for a big jolt when he switched from Donald Trump to Hillary Clinton. This points up a problem I have with “panel” interviewing, in which the same respondent is interviewed many times—the danger that such people start seeing themselves as pundits, and grading the candidates rather than reporting their preferences. Morris’s discussion convinces me that determining the proper mix of random and internet samples is more an art than a science.

As is also, in my view, the aggregation of poll results. Despite his Twitter feud with FiveThirtyEight.com founder Nate Silver in 2020, Morris says “Silver’s embrace of probabilistic statistics and sophisticated polling aggregation has dramatically improved the way journalists cover election polls and the horse race.”

“Informed readers,” Morris says, “turn to RealClearPolitics and Pollster to know who’s ahead and to FiveThirtyEight to know whether they’ll win.” He also advises, unnervingly, relying “more on non-polling factors,” like economic growth and presidential approval ratings, “when polls go astray.” I say “unnervingly” because Morris’s first five chapters (which include a long off-the-point excursion on the unreliability of surveys in Iraq) show a poor grasp of history dominated by a sentimental populism. He says that American institutions for 150 years were shaped by a “dominant, elitist ideology,” unaware that universal white manhood suffrage circa 1830 ushered in a highly participatory and enthusiastic mass politics; the peak presidential turnout year ever was 1876. He seems unaware that in the 19th century frequent state elections, held almost every month in a two-year cycle, gave politicians and voters constant updates on partisan opinion.

Morris says that the Electoral College has been tilted toward Republicans “in recent memory” even though George W. Bush’s 51 percent of the popular vote got him a bare majority of 286 electoral votes in 2004 and Barack Obama’s 51 percent of the popular vote got him a whopping 332 electoral votes in 2012. The Trump Republican edge there may be vanishing too, as rural areas become more one-sidedly Republican and big cities, because of Republican gains among Hispanics and other “People of Color,” become somewhat less overwhelmingly Democratic. And he’s not above an occasional partisan cheap shot: He uses Democratic margins in popular votes for U.S. senator as a measuring stick of partisan feeling, even though those are heavily tilted by California, which hasn’t had a seriously contested Senate race since 1994. Thanks to the state’s all-party primary there hasn’t been a Republican general election candidate there since 2012.

If the polls are broken, how would G. Elliott Morris fix them? Abandon all-phone polls—pretty much done. Admit that error margins have become larger. Don’t predict outcomes—which of course is what most election poll consumers are interested in. And, after denouncing interest groups, he wants more interest group polls.

Morris’s insistence that political surveys can increase useful knowledge is sound, but perhaps tinged by a certain naiveté. “Political surveys are one of the few tools to fight the inevitable stifling of voices that comes along with income and political inequality,” he insists, ignoring the fact that the Democratic Party has increasingly become the party of the affluent, of large corporations, and of big media. Many Americans believe, with some reason, that the voices most often stifled in recent times, especially by pollsters and media, are those which to almost everyone’s surprise elected Donald Trump president in 2016 and came within 42,000 votes of doing so again in 2020. The result, alas, is that many such voters mistrust big media and won’t take part in what they consider biased polls. If you want to fix the polls, maybe you have to fix big media first.

Strength in Numbers: How Polls Work and Why We Need Them
By G. Elliott Morris
Norton, 200 pp., $28.95.

Michael Barone is senior political analyst for the Washington Examiner and was founding coauthor of The Almanac of American Politics. He was vice president of the polling firm Peter D. Hart Research Associates, 1974-81.




