Political polls are everywhere. How do they work? Can they tell us who will win?

One poll has former President Donald Trump ahead by 3 percentage points. The next has Vice President Kamala Harris leading by 4.

Then, three in a row show they’re tied. The next has Harris up by 1 point and the one after that has Trump ahead by 2.

The collective insight from the flood of national polls leading up to the final day of voting on Tuesday: The presidential race is too close to call.

“The polls are razor thin. Very tight. The outcome will likewise be very thin, very tight. It could go one way or another,” said Steve Vancore, a longtime Tallahassee-based pollster and political strategist.

In Florida, by contrast, the outcome is fairly certain for the first time in a generation. Trump is the likely winner of the state’s 30 electoral votes. His margin of victory, assuming it happens, is the main unknown.

Within the last week, polls have shown Trump ahead in Florida by 6, 6, 9, 12 and 5 percentage points.

And the next contest on the ballot — U.S. Sen. Rick Scott, R-Fla., vs. his Democratic challenger, former U.S. Rep. Debbie Mucarsel-Powell — is much closer, according to many of the same surveys.

The polls are ubiquitous in the days leading up to the presidential election. It can be difficult to decipher what they mean, and how much faith to put in their accuracy.

At least 125 national polls were released in October. There were 21 in Florida. And dozens more attempted to offer insights into what voters are thinking in the seven battleground states that could go to either Harris or Trump and determine who reaches the 270 electoral votes needed to win the presidency.

Aficionados look at polls for a range of information about elections or issues, including whether a candidate is headed up or down and has the potential to pull off a surprise.

One thing they can’t do is say for certain who will win, said Kevin Wagner, a Florida Atlantic University political scientist. Each one is a snapshot, and the picture can change the next day.

Interest in polls

“The public seems to have a great appetite for polling. They want to know who’s winning,” he said. Wagner is also co-director of FAU’s PolCom Lab, a collaboration of the School of Communication and Multimedia Studies and Department of Political Science, which conducts public opinion polling.

When FAU releases a poll, it often generates attention across the country and is “among the most covered content the university produces,” said Joshua Glanzer, the university’s associate vice president for media relations and public affairs.

Polling is often “very misinterpreted,” said Brad Coker, CEO and managing director of Mason-Dixon Polling & Strategy. He’s conducted thousands of polls across the nation since he started polling in Florida in 1983.

Small number

Skeptics scoff at the idea that a sample of 1,500 people or fewer can accurately reflect what millions of Americans are thinking.

Wagner raised the issue himself during a recent presentation at the Osher Lifelong Learning Institute at FAU’s Boca Raton campus. He said a national poll could use 800 responses on the low end to 1,500 for a well-funded survey.

“We want to know what a large group of people think by asking a small group of people,” he said. “How can I ask a small group of people and get a realistic projection of what a large group of people think?”

Wagner likened it to tasting soup.

“If I give you a large bowl of soup and I want to know if that soup tastes spicy or good, and any other thing I want to know about that soup, do I have to drink the whole bowl? No,” he said. “A spoonful or two will do — as long as all the ingredients that make up that soup [are well mixed] and end up on your spoonful.”

That explains the theory, and also why public opinion researchers spend lots of time trying to make sure their samples represent the people who are most likely to vote. They also increase the weight given to some people if the survey didn’t get enough responses from a particular demographic group.
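
The soup analogy has a statistical counterpart. As a minimal sketch — with made-up numbers, not figures from any real survey — this simulation shows how repeated random samples of 1,000 people land close to the true number for an electorate of millions, as long as the sample is random:

```python
import random

# A hypothetical electorate in which 52% back Candidate A.
# (Illustrative numbers only, not from any real poll.)
random.seed(1)
TRUE_SUPPORT = 0.52
SAMPLE_SIZE = 1000

def run_poll():
    """Survey 1,000 random voters and report Candidate A's share."""
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

estimates = [run_poll() for _ in range(20)]
print(min(estimates), max(estimates))
# Typically prints values clustered within a few points of 0.52:
# the "spoonful" tracks the whole bowl when the soup is well mixed.
```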

Kevin Wagner, a political scientist at Florida Atlantic University, is co-director of its PolCom Lab, a collaboration of the School of Communication and Multimedia Studies and Department of Political Science. (Alex Dolce/Florida Atlantic University)

Margin of error

Polls aren’t precise, even though they’re often depicted that way. And, pollsters said, people should be cautious about reading too much into them.

Each percentage in a poll is actually a range of possibilities, defined by the margin of error and statistical probability.

If Candidate A has 53% and Candidate B has 47% and the margin of error is plus or minus 3 percentage points, A and B could be tied at 50% each.

Or, it could be 56-44.

So, for example, the Florida Atlantic University poll released Tuesday that showed Scott with 50% to 46% for Mucarsel-Powell doesn’t strictly mean he’s 4 percentage points ahead and is likely to win by that amount.

He might.

But with a margin of error of plus or minus 3 percentage points, Scott could have anywhere from 47% to 53% and Mucarsel-Powell could be in a range of 43% to 49%. Scott’s in a better position — it’s always better to be ahead than behind, Wagner said — but the contest most likely falls somewhere in those ranges.
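
Where does the familiar "plus or minus 3 points" come from? A minimal sketch using the standard textbook formula for a proportion — individual pollsters adjust it for their own survey designs — shows that roughly 1,000 respondents produces it, and how it maps onto the FAU poll’s 50%-46% result:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a random sample of n people."""
    return z * math.sqrt(p * (1 - p) / n)

# About 1,000 respondents yields the familiar "plus or minus 3 points":
print(round(margin_of_error(0.5, 1000) * 100, 1))  # ~3.1

# Applied to the FAU poll's 50%-46% result with a 3-point margin:
scott, mucarsel_powell, moe = 50, 46, 3
print(f"Scott: {scott - moe}% to {scott + moe}%")  # 47% to 53%
print(f"Mucarsel-Powell: {mucarsel_powell - moe}% to {mucarsel_powell + moe}%")  # 43% to 49%
```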

And, Coker said, it’s not impossible for the candidate trailing by a few points to end up winning by a few. The chance is small, but real.

Coker said he puts more stock in a lead once it reaches 5 or 6 points.

Vancore explained it this way: Flipping a coin 500 times would in theory come up heads 250 times and tails 250. But it actually could be 245 heads and 255 tails. That is expected random variation.
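
A minimal sketch of Vancore’s point, simulating the 500-flip experiment a few times:

```python
import random

random.seed(7)

def flip_session(n=500):
    """Flip a fair coin n times and count heads."""
    return sum(random.randint(0, 1) for _ in range(n))

# Repeat the 500-flip experiment ten times.
print([flip_session() for _ in range(10)])
# Typical output: counts like 246, 253, 258, 244 -- rarely exactly 250.
# The standard deviation for 500 fair flips is sqrt(500 * 0.5 * 0.5),
# about 11 flips, so a 245/255 split is entirely ordinary.
```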

“The public expects a level of precision in polling that pollsters don’t even expect,” Vancore said.

Polling professionals don’t make a big deal of small changes of a percentage point. And they discount the precision implied by an even smaller change, from, say, 50.9% to 51.1%.

Wagner said people want something more exact: “That is not satisfying, is it? You want a prediction, somebody to tell you what is exactly going to happen. But all the poll tells you is a range within a margin of error.”

Impact on results

Many, but not all, people involved in politics think polls can influence the outcomes of elections, particularly when one candidate clearly dominates the polls in a high-profile contest.

A drumbeat of polls suggesting that Candidate A is going to overwhelmingly defeat Candidate B can discourage voters on B’s side from actually showing up to vote — and that can hurt all the party’s candidates.

Political scientists and pollsters said they didn’t see that as especially likely this year, at least in battleground states, where every vote could make a difference.

But Florida shows what can happen.

During the 2022 midterm election campaign, it was obvious for months, and polls repeatedly showed, that Republican Gov. Ron DeSantis was going to handily beat Democratic challenger Charlie Crist.

Analysts across the political spectrum said that contributed to a collapse in Democratic turnout. And that in turn contributed to a range of Republican wins and Democratic losses for lower-level offices.

In Palm Beach County, for example, two County Commission seats flipped from Democratic to Republican.

When the polling “is relatively close, I don’t think it’ll have an effect on turnout. But when it is a blowout like DeSantis-Crist it does affect turnout,” Vancore said.

Andy Thomson, a member of the Boca Raton City Council, saw that first-hand.

He was a Democratic candidate for state representative in southeastern Palm Beach County in 2022, when the polls showed the top-of-the-ticket candidates for governor and U.S. Senate headed for defeat.

“If voters see polls coming out showing their candidate getting crushed, it absolutely affects how important they think it is to go out and vote,” Thomson said. “If, on the other hand, they see a close race I think it does provide extra motivation to vote.”

Thomson lost 51.7% to 48.3%.

He doesn’t think voters lose their motivation to turn out if they see their favorite candidate cruising toward a victory. “Everyone wants to be on the side of a winner, and they still will remain motivated to go vote,” Thomson said.

The phenomenon has been likened to fans at a football game. If one team is winning by 30 points, more fans leave before the end of the fourth quarter than if one field goal could change the outcome.

“Americans don’t like to vote for losers,” Wagner said. “We want to think if we’re going to take the time to go vote, that our vote is going to matter, that our candidate is going to win or have a good chance of winning or we need to feel as if our vote is going to influence the outcome.”

Who gets polled

The way pollsters reach voters is evolving.

For years, phone calls from live interviewers asking questions were the gold standard. And for veteran pollsters like Coker and Vancore, they still are.

But technology and changing habits have made that more difficult. Once, almost every American household had a landline phone. Now the vast majority rely on cell phones.

And far more people screen their calls and don’t answer, which means live calling is more time consuming and expensive. (Coker said men are more likely to answer unknown numbers than women.)

Many pollsters have moved to other methods, or a combination of approaches: live callers; automated calls that ask people to punch in numbers to respond to questions; text messages with a link people can use to complete the survey; and online panels in which people opt in to answer questions.

Wagner said methods other than live calls can help fill in gaps left by people who are less likely to answer their phones and participate in a telephone poll.

Weighting

A big challenge for pollsters is making sure that the sample is representative of the population being surveyed.

Even if a survey has enough people to make it statistically valid, it doesn’t necessarily have a representative sample. It might have too few young voters or too many voters in one party.

So pollsters make judgments about how to weight the responses they did receive to more accurately reflect their share of the voting population. In practice that could mean increasing the weight given to one demographic group when generating the poll’s overall results.
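
As a minimal sketch of how that reweighting works — with made-up numbers, not real polling data — suppose young voters are 20% of the electorate but only 10% of the responses; each young respondent then counts double:

```python
# Post-stratification weighting with illustrative, made-up numbers.
population_share = {"18-29": 0.20, "30+": 0.80}   # known electorate makeup
sample_share     = {"18-29": 0.10, "30+": 0.90}   # what the survey actually got

# Weight = population share / sample share for each group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'18-29': 2.0, '30+': 0.888...}

# Candidate support within each group (illustrative only):
support = {"18-29": 0.60, "30+": 0.48}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted   = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(round(unweighted, 3), round(weighted, 3))  # 0.492 vs 0.504
# Weighting shifts the topline about a point by restoring young voters
# to their true share of the electorate.
```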

The failures in polling, and the discrepancy among polls, often come from assumptions made by the people designing the surveys. The thorniest problem is figuring out how to get a sample of people likely to vote.

That was especially problematic in 2016, when Trump generated excitement among people who weren’t traditionally heavy voters, and they turned out to vote.

Wagner said pollsters can’t simply ask if people plan to vote and rely on that to figure out likely voters because more people will say they plan to vote than actually do.

“They don’t want to admit that they don’t vote,” he said.

Anthony Man can be reached at aman@sunsentinel.com and can be found @browardpolitics on Bluesky, Threads, Facebook and Mastodon.
