
How to Read Political Polls Like a Pro

It's easy to misread polls, or put too much stock in a bad one


Aug. 26, 2024

This article is part of The D.C. Brief, TIME’s politics newsletter. Sign up here to get stories like this sent to your inbox.

No poll is predictive. Every single one of them is a measure of a specific window inside a specific space. No one wins—or loses—based on the polls, and it’s often irresponsible to say that someone is winning in the polls. It’s trite, but when strategists say the only poll that matters is the one taken on Election Day, they’re not wrong.

That said, many of us can’t help but obsessively track the polls. Check any armchair pundit’s social media feed and you’re going to get an overdose of polling nuggets. But much of that chatter is superficial and wrongly assumes nuance and caution can be set aside.

Reading the polls can be tricky if you don’t know what you’re doing. On its own, a single survey does not really describe anything more about the race than how one group of selected voters responded to one set of questions during a moment in time. Taken en masse—and there are pollsters out there who will scream Don’t Do It!—the numbers can offer a framework for a race, especially for donors who want a clear return-on-investment proof point. (Indeed, when donors began trying to force Joe Biden to step aside after his disastrous June debate, many were responding more to polling dashboards from Real Clear Politics and FiveThirtyEight than their own concerns about what happened on that debate stage.)

As the campaign season hits its final weeks and early voters begin to cast ballots, here’s a guide to what to make of the polls, based on what campaign professionals consider whenever a new one pops up on their screens.

Who conducted the poll?

There are some truly terrific polling firms working on behalf of news organizations, advocacy groups, and campaigns themselves. But there are also some pretty funky shops these days. (Not to pick on Morning Consult, but they ended 2022’s cycle with an 8% correct call on their races.) On top of that, anything put out by a party with an interest in the election’s outcome should be taken with an enormous caveat—if taken at all.

Mainstream news organizations spend small fortunes tracking the electorate and its trend lines, not just on the big-ticket races but also on voter attitudes toward various subjects. (Two gold-standard organizations—the Pew Research Center and Gallup—are most laudable in this space for having collected decades of longitudinal data on dozens, if not hundreds, of topics.)

Others, like advocacy groups, also conduct polling, usually for internal use. Occasionally, they’ll find a nugget that is statistically legitimate and, in their view, newsworthy. But don’t count on such data points to come to light if they threaten to set back that group’s agenda. The city pound wouldn’t be touting a poll if it found people love stray puppies.

Then there are the campaigns, who always have roughly the same rebuttal to any tough question: That’s not what our internals show. That was always Donald Trump’s answer when the numbers showed him down in 2016 and 2020. And that was the case from Biden’s camp earlier this year as he flailed toward his exit. There’s always a way for a campaign to slice their data to make itself look better.

What kind of voters did the pollster survey?

This is the question that’s going to become more important as we get closer to Election Day. Most polls start the cycle with registered voters, which is a broader screen than where things eventually end up in the homestretch. That’s when other filters are layered in to figure out who is actually likely to vote. Among all registered voters, Democrats historically have enjoyed an advantage. But among likely voters, Republicans often narrow that gap because groups viewed as low-propensity voters—those with lower incomes, blue-collar workers, and people from communities of color—show up on Election Day with less frequency, and they tend to be Democrats.

As the polls continue to tighten, you’re going to be hearing a lot about registered-versus-likely modeling. To identify the likelies, pollsters may start with the simple questions of whether someone is registered to vote and if they plan to vote this year. Others add in questions about whether the person on the line voted in the last few elections or how excited they are about the elections. Everyone cooks this sauce a little differently, which is why it’s statistically tricky to compare one poll against another.
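To make that layering concrete, here is a rough sketch of how a likely-voter screen might work in code. The fields, scores, and cutoff below are invented for illustration; they do not reflect any particular firm’s model, which is exactly the point about everyone cooking the sauce differently.

# A minimal, hypothetical likely-voter screen. Real pollsters combine and
# weight these signals very differently; this only shows how layered filter
# questions narrow "registered" respondents down to "likely" voters.

respondents = [
    {"registered": True,  "plans_to_vote": True,  "voted_recently": 2, "enthusiasm": 9},
    {"registered": True,  "plans_to_vote": True,  "voted_recently": 0, "enthusiasm": 4},
    {"registered": True,  "plans_to_vote": False, "voted_recently": 1, "enthusiasm": 2},
    {"registered": False, "plans_to_vote": True,  "voted_recently": 0, "enthusiasm": 8},
]

def likely_voter(r):
    """Score a respondent; the thresholds here are illustrative only."""
    if not r["registered"]:
        return False                      # unregistered respondents drop out first
    score = 0
    score += 2 if r["plans_to_vote"] else 0
    score += r["voted_recently"]          # turnout in the last few elections
    score += 1 if r["enthusiasm"] >= 7 else 0
    return score >= 3                     # arbitrary cutoff for this sketch

likely = [r for r in respondents if likely_voter(r)]
print(f"{len(likely)} of {len(respondents)} respondents pass the likely-voter screen")

The particular cutoff doesn’t matter; what matters is that each added filter question shrinks and reshapes the pool of respondents whose answers get counted.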

Then, there’s the weighting. Put simply: this is a back-end adjustment pollsters use to make sure the people who bothered to answer the questions match what the pollsters expect turnout to look like. A basic version of this: in a 100-person survey, a pollster might adjust so the 80 women who answered have their influence reduced to the roughly 55% of the electorate they comprise, while the 20 men have theirs amplified to the remaining 45%. Then they might adjust for other demographics like college education, age, race, and income. Again, this is an oversimplification, but it helps explain why oversampling on its own is not a reason to dismiss a poll’s results.
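For the numerically curious, a bare-bones version of that single-variable adjustment might look like the sketch below, using the 100-person example above. Real weighting adjusts many variables at once (often through techniques like raking), so treat this strictly as an illustration of the idea.

# Minimal sketch of post-stratification weighting on one variable.
sample_counts = {"women": 80, "men": 20}          # who actually answered
target_shares = {"women": 0.55, "men": 0.45}      # expected share of the electorate

n = sum(sample_counts.values())
weights = {g: target_shares[g] / (sample_counts[g] / n) for g in sample_counts}

print(weights)
# Women get weight 0.55 / 0.80 ≈ 0.69 (their answers count a bit less),
# men get weight 0.45 / 0.20 = 2.25 (their answers count more).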

Is this a national poll, a swing state poll, or something else?

Not all polls are created equal. Sometimes, on big questions like abortion, global affairs, or even pop-culture figures, a national poll is the way to go, provided there are controls built in for regional differences in the weighting described above. Other times—especially for presidential races—the nationwide numbers are pointless and the real battle is playing out in roughly seven universally accepted battlegrounds. And for state-based runs for Governor or Senate, those state numbers are the whole ballgame, albeit one for which it’s increasingly difficult to find funding for rigorous, independent polling.

How was the poll conducted?

For decades, most purist pollsters insisted the only reliable method for their research was live, human-to-human interviews conducted by phone. Historically, that worked because they were calling landlines and it was hard for most Americans to ignore the ringing phone hanging in the kitchen. Then came caller ID, cell phones, and, well, the Internet. Now, pollsters have begrudgingly accepted online polls as statistically sound with the right controls; in 2000, just one firm used online, opt-in polls, but by 2022 that number had risen to 46, according to Pew’s survey of its peers. The same report found nine pollsters using text messaging as part of their data collection.

Pew now does most of its polling via a tightly framed online system. Most polling houses do at least some online data collection these days, and to most minds it has actually made the numbers more reliable compared with the results from the 2016 cycle. (That might not have been a standout year in polling, but it was not as far off as many think. People just read the headlines and ignored the details.)

The online shift has increased efficiency but produced a dataset that cannot be easily compared to earlier polls that asked the exact same question. Voters’ words to a pollster and their clicks on a screen often yield different levels of honesty. So while it’s possible to track changes at this point month to month, it would be a mistake to take surveys from, say, 2000 and try to extrapolate public opinion shifts from there.

When was the poll in the field?

Context is everything, especially when it comes to polling. The surveys that asked opinions about Kamala Harris before she swapped in for Biden were touted by her naysayers, who claimed she was too unpopular to get across the finish line. That frame quickly fell apart when she became the apparent Democratic nominee. Pollsters rightly predicted a surge in Harris’ support once she became the only thing standing between Trump and the White House.

A comparable example of the value of context on the other side of the aisle is the chasm between the before-and-after figures for Trump around the failed assassination attempt against him. Despite years of acrimony from Democrats, he did see a bump in approval after surviving that indefensible act of political violence.

But a reminder about big shifts in polls: they tend to return to stasis pretty quickly. After Trump’s Access Hollywood tape came out, including his brag about sexual assault, his polling numbers went back to their norm in about three weeks. And after Obama ordered the successful killing of Osama bin Laden, his numbers got back to regular territory in six weeks.

What is the margin of error?

A New York Times/Siena poll released on Aug. 10 showed Harris ahead of Trump in Michigan by a 50-46 margin. While that sounds like Harris is leading, look closely at the margin of error, which estimates how far the result could be off, based on factors like the size of the randomly selected sample. The margin of error for the Michigan result was 4.8 points, which means Trump and Harris are actually in a statistical tie: Harris could be as low as 45.2% and Trump could be as high as 50.8%.

A good rule of thumb: for a candidate to be considered truly ahead in a poll, their lead needs to exceed twice the margin of error to fall outside the gray zone. So in the Michigan example, Harris would clearly be leading only if she were roughly 10 points ahead of Trump.
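Here is the same arithmetic laid out as a quick sketch, using the figures from the Times/Siena Michigan example. The sample-size formula at the end is the standard textbook approximation, not something reported in the poll itself; it’s included only to show why bigger samples shrink the margin of error.

# Worked version of the Michigan example and the twice-the-margin rule of thumb.
from math import sqrt

harris, trump, moe = 50.0, 46.0, 4.8   # figures as reported; moe in points

lead = harris - trump
print(f"Lead: {lead:.1f} pts; ranges overlap: "
      f"Harris {harris - moe:.1f}-{harris + moe:.1f}, "
      f"Trump {trump - moe:.1f}-{trump + moe:.1f}")

# Rule of thumb: treat it as a real lead only if it exceeds twice the margin of error.
print("Outside the gray zone" if lead > 2 * moe else "Statistically tied")

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error (in points) for a simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n) * 100

print(f"n=600 -> ±{margin_of_error(600):.1f} pts, n=1000 -> ±{margin_of_error(1000):.1f} pts")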

Finally, there’s this grim reality: In roughly 1-in-5 cases, the polls taken in the final three weeks of a campaign are just plain wrong, according to a fascinating bit of data wonkery from FiveThirtyEight. In 2022, that universe of pollsters got the pre-Election Day call correct just 72% of the time.

How does this poll compare to the polling average?

Sometimes, a poll is just garbage. In some rare cases, the pollsters themselves say so and chuck the whole thing. The respected Des Moines Register/CNN/Mediacom survey did it in 2020, when folks inside the quant hive there didn’t believe what they were seeing in Iowa and decided sharing those findings would be irresponsible.

Outliers are polls that diverge from what other surveys are showing. They aren’t necessarily junk or the product of shoddiness; even a by-the-books pollster can occasionally produce a hinky poll. Even the best miss the mark.

This is where polling averages like those run by Real Clear Politics can be useful in tracking trends and momentum. But data nerds warn against making too much of them. It would be wrong to try to chart changes between a February CNN poll and a March Fox News poll: they ask the questions differently, and their back-end formulas for weighting voting universes are different. It’s the kitchen equivalent of tracking the sweetness of sugar and flour just because they come from similar bags.

That said, plenty of us do look at so-called polls of polls. But like all opinion surveys, it’s helpful to remember that they should never be taken as predictive and that they’re already out-of-date by the time the data are released.
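If you want a feel for why a poll-of-polls is only a rough framework, here is a toy version of one. The polls, sample sizes, and recency cutoff below are all made up, and real aggregators layer on house-effect and trend adjustments that this simple sample-size-weighted average ignores.

# Toy poll-of-polls: average recent polls, weighted by sample size.
polls = [
    # (pollster, days_old, sample_size, candidate_a_pct, candidate_b_pct)
    ("Poll A", 2, 800, 48, 46),
    ("Poll B", 5, 1200, 47, 47),
    ("Poll C", 9, 600, 50, 45),
]

recent = [p for p in polls if p[1] <= 14]                 # keep only recent polls
total_n = sum(p[2] for p in recent)
avg_margin = sum((p[3] - p[4]) * p[2] for p in recent) / total_n

print(f"Weighted average margin across {len(recent)} polls: {avg_margin:+.1f} pts")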

Make sense of what matters in Washington. Sign up for the D.C. Brief newsletter.
