This past week, our team at the Northwest Progressive Institute released the initial results of our second 2021 survey of the Seattle electorate, which was conducted for us by Change Research of California. We found Bruce Harrell sixteen points ahead of Lorena González for Mayor, Ann Davison nineteen points ahead of Nicole Thomas-Kennedy for City Attorney, Teresa Mosqueda eight points ahead of Kenneth Wilson for City Council Position 8, and Sara Nelson four points ahead of Nikkita Oliver for City Council Position 9, with significant numbers of voters still undecided. We also found huge majorities of voters undecided in the school board races.
As of today, one or more of our findings have been mentioned during the evening newscasts of all four of Seattle’s local television stations, and they have also received extensive radio and online coverage. The Seattle Times’ Daniel Beekman wrote an excellent story summarizing the findings, which appeared in print and on seattletimes.com. Lastly, our findings have received quite a bit of attention and discussion on social platforms like Facebook, Reddit, and Twitter.
We’re heartened that so many people are interested in our research.
We know that trying to figure out how to read and assess poll data can be challenging. Our survey was one of only a few independent surveys conducted in Seattle during this general election, and the only one to have fielded this month (the others fielded in September). Since other recent polling in Seattle conducted for interested parties has not been publicly released, our data basically stands alone instead of being one survey among many.
When you’ve got only one poll to look at in a given time period, you cannot make comparisons with other polls to ascertain trends and commonalities. All you can do is judge whether the one poll in front of you is credible or not.
Today, to address some of the questions and comments we’ve gotten about our poll since Tuesday, we are delighted to welcome Ben Greenfield to NPI’s Cascadia Advocate. Ben is the Senior Survey Data Analyst at Change Research who, along with his colleague Ben Sullivan, is responsible for the fielding of our surveys.
We hope you enjoy this Q&A and find it helpful for putting our results in context.
Andrew Villeneuve, NPI: Ben, thanks for joining me to discuss our work together this year! We’re delighted to have had the opportunity to launch our research polling partnership with Change Research in Seattle. It’s been a fascinating election cycle and we’re not even to the end of it yet!
Ben Greenfield, Change Research: Working with you and the Northwest Progressive Institute team has been a great pleasure for us — and I can’t disagree with you on it being a fascinating election cycle!
Andrew Villeneuve, NPI: Our Seattle polling this year consisted of two surveys: one in July that preceded the Top Two election and one this month preceding the November general election. We don’t know what the results of the general election will be, but we know that our first survey was able to anticipate a lot of the dynamics we saw in the Top Two election, with seven of the eight candidates who advanced to the runoff round having placed first or second in our polling.
Ben Greenfield, Change Research: Indeed. While it’s always important to remember that polling provides a snapshot of where voters are at a given moment before they cast their ballots — and not a prediction of where they’ll go — we were pleased to see that the poll we conducted together accurately captured some of the key dynamics and voter preferences in the Top Two election.
Andrew Villeneuve, NPI: After we released our general election poll findings this week, we started getting inquiries and comments about our survey’s methodology and sampling. One of those questions pertained to the diversity of the sample: 82% of the survey takers identified as white, which is a higher percentage than in other surveys of the Seattle electorate this year, like the Strategies 360/KOMO poll from last month. But different surveys are modeled on different universes. We chose to poll likely voters instead of registered voters, and consequently, our sample is modeled on the last similar election, which occurred in November 2017. How is polling likely voters different from polling registered voters or even surveying the population of a city like Seattle as a whole, and what ramifications did our choice have for the survey’s ethnic/racial composition?
Ben Greenfield, Change Research: Pollsters have different methods for projecting who’s likely to turn out in a given election and who’s less likely, and that can differ from election to election. Our turnout model took into account both past voting patterns in Seattle municipal elections and survey respondents’ self-stated likelihood of voting. Both turnout history and self-stated likelihood of voting are imperfect predictors of turnout, but both have some relationship to actual turnout, and we believe our combination of these factors leaves us with a view of the electorate that approximates what we’ll actually see.
As far as the ethnic/racial composition of the survey goes, our projections are again based on each group’s historical turnout rate. In municipal elections, turnout has historically been higher among white voters than among voters of color, and as a consequence, both the historical electorates and our survey sample have been whiter than Seattle’s population as a whole.
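[To make that description more concrete for readers, here is a minimal sketch, in Python, of how a likely-voter score might blend the two signals Ben mentions. The weights, field names, and scoring rule are hypothetical illustrations, not Change Research’s actual turnout model.]

```python
# Illustrative only: a toy likely-voter score blending the two imperfect
# signals discussed above. The 60/40 weighting is an arbitrary example.

def likely_voter_score(odd_year_votes_cast, odd_year_elections_eligible,
                       self_stated_likelihood):
    """Return a 0-1 score estimating how likely a respondent is to vote.

    odd_year_votes_cast / odd_year_elections_eligible: participation in
    comparable past municipal (odd-year) elections.
    self_stated_likelihood: the respondent's own answer, scaled to 0-1
    (for example, "definitely voting" = 1.0, "probably" = 0.75).
    """
    if odd_year_elections_eligible == 0:
        history_rate = 0.5  # no voting history yet (e.g. newly registered)
    else:
        history_rate = odd_year_votes_cast / odd_year_elections_eligible
    return 0.6 * history_rate + 0.4 * self_stated_likelihood


# A respondent who voted in two of the last three odd-year elections and
# says they will "probably" vote scores 0.7:
print(round(likely_voter_score(2, 3, 0.75), 2))
```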
Andrew Villeneuve, NPI: In our survey, 7% of respondents identified as Asian or Pacific Islander, 5% of respondents identified as Hispanic or Latino/a, 3% of respondents identified as Black or African American, and 1% of respondents identified as American Indian or Alaska Native. Since the likely electorate next month will be overwhelmingly white, you opted to create a combined “people of color” subsample. Can you explain why it isn’t feasible to break out each group of voters separately, and why you chose this approach instead?
Ben Greenfield, Change Research: Essentially, the smaller a sample is, the larger the margin of error — meaning that if we survey five random Seattle residents, we can be much less confident that they represent the entire population than if we surveyed 5,000. We never publish breakdowns of responses from groups smaller than fifty, because the margins of error are just so high. Since none of these individual racial/ethnic groups had at least fifty responses, we couldn’t publish any of their breakdowns individually. In order to ensure we were not just singling out the views of white voters, we created a breakdown of all people of color.
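[For readers who want to see the arithmetic behind that rule of thumb, the sketch below applies the standard formula for a 95 percent margin of error on a proportion, assuming the worst case of p = 0.5 and ignoring weighting and design effects. The fifty-response threshold and the 617-person sample come from this discussion; the formula itself is the conventional textbook one, not anything specific to Change Research.]

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    Uses the worst case p = 0.5 and ignores weighting and design
    effects, which real surveys also have to account for.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (5, 50, 617, 5000):
    print(f"n = {n:>4}: +/- {margin_of_error(n) * 100:.1f} points")

# n =    5: +/- 43.8 points
# n =   50: +/- 13.9 points
# n =  617: +/- 3.9 points
# n = 5000: +/- 1.4 points
```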
Andrew Villeneuve, NPI: Seattle and other cities like New York hold their city-level elections in odd-numbered years. Turnout is typically much lower in odd-numbered years. In fact, in November of 2017, the last time Seattleites elected a mayor, fewer than half of the registered voters turned out. And that was high compared to other cities. If Seattle were to switch to holding its elections in even-numbered years, as NPI has been advocating all cities in Washington do, do you agree we’d see a more diverse electorate voting on these city positions?
Ben Greenfield, Change Research: Yes, because we tend to see more diversity in even-numbered years, not only along racial/ethnic lines, but also across different socioeconomic and age groups.
Andrew Villeneuve, NPI: Our general election survey consisted of 617 interviews, the same as our Top Two survey. This was incorrectly characterized on Twitter by a couple of folks as a low sample size. In fact, it’s the largest sample of any of the surveys conducted in Seattle this cycle with publicly released results. Lorena González’s pollster GQR uses sample sizes of 400; the Elway/Crosscut poll also uses a sample size of 400; and Strategies 360/KOMO had a sample size of 450. Having a larger sample size doesn’t necessarily mean a survey is more accurate, but it does mean that our survey has a lower margin of error. For those unfamiliar with accepted polling practices, can you explain what a typical sample size is and why the composition of the sample is far more important than its size?
Ben Greenfield, Change Research: Sample sizes can differ by geography. For example, it’s not uncommon to see 1,000-person polls nationwide, but you’d rarely see a sample that large in a city like Seattle. Though a larger sample size will result in smaller margins of error, a poll is only as good as its sample. For example, a survey of 800 people who show up at a Trump rally is not going to reflect the views of voting Seattleites. But a 617-person poll with a representative sample of voters across all areas and backgrounds in the city can.
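[For reference, under the standard 95 percent margin of error formula shown earlier (worst case p = 0.5), a 400-person sample carries a margin of error of roughly plus or minus 4.9 points, a 450-person sample roughly 4.6 points, and a 617-person sample roughly 3.9 points, before any adjustment for weighting.]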
Andrew Villeneuve, NPI: As we have disclosed through the publication of our survey methodology, some of our survey participants were recruited in part using ads placed on Instagram and Facebook, in addition to text messages. It might seem illogical that a poll conducted online with respondents recruited from Facebook and Instagram could be credible or trustworthy, but as we like to say, it’s the method that matters, not the medium. Can you speak to how Change Research builds its samples and ensures that they are representative?
Ben Greenfield, Change Research: While many polling firms recruit their participants by calling their phones, where response rates are incredibly low and call screening is high, we reach voters where they are and allow them to take surveys on their own time. Between social media targeting and SMS (Short Message Service) messages to anyone with a cell phone on record, we are able to reach the vast majority of voters and ensure that we receive a proportionate response from voters of every age, gender, race or ethnicity, political persuasion, region of the city, and so on.
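[Recruitment targeting is one half of the picture; pollsters also commonly weight the responses they collect so that the final sample matches the target electorate. The sketch below shows a generic post-stratification weighting step with made-up target shares and response counts; it illustrates the general technique, not necessarily the procedure Change Research uses.]

```python
# Generic post-stratification sketch: compute a weight for each group so
# that the weighted sample matches assumed target shares of the electorate.
# The shares and counts below are made-up numbers for illustration.

target_shares = {"18-34": 0.25, "35-49": 0.30, "50-64": 0.25, "65+": 0.20}
sample_counts = {"18-34": 110, "35-49": 180, "50-64": 190, "65+": 137}

total = sum(sample_counts.values())  # 617 respondents in this example
weights = {
    group: target_shares[group] / (count / total)
    for group, count in sample_counts.items()
}

for group, weight in weights.items():
    print(f"{group}: weight {weight:.2f}")

# Groups that are underrepresented in the raw sample get weights above 1,
# and overrepresented groups get weights below 1.
```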
Andrew Villeneuve, NPI: Another comment we saw questioned whether the general election survey’s results had any validity, given that Councilmember Teresa Mosqueda received 39% in the poll (with 26% undecided) after getting 59% in the Top Two election. Still another commenter argued that it was absurd that so many people could be undecided this close to Election Day. But, in fact, it’s not uncommon for significant percentages of voters to be undecided in “nonpartisan” local elections like this in the days leading up to an election, or for some of the support a candidate previously received to be tepid, is it?
Ben Greenfield, Change Research: Not at all. Especially in municipal elections, a large percentage of voters tend to make up their minds in the final days before they cast their vote. What’s more, particularly in nonpartisan elections, it’s common for people to give the candidates a fresh look in a general election, and not always default to the candidate they voted for initially.
Andrew Villeneuve, NPI: In our statewide polling, we consistently see a higher number of undecided voters in races where no party preference is provided on the ballot, like State Supreme Court races. (Washington elects its justices to six-year terms, unlike at the federal level, where they are appointed and serve for life.) Other localities around the United States have partisan local elections instead of “nonpartisan” elections. Change Research does work all around the country. Do you find that in partisan local elections, there are typically fewer undecided voters than in “nonpartisan” local elections like those Washington’s cities have?
Ben Greenfield, Change Research: Yes. It’s very common in partisan local elections to see voters certain about who they’ll vote for even if they indicate zero familiarity with the candidates — they’ll just choose based on the candidate’s party. Since this is obviously not possible in nonpartisan races, there are often many more undecided voters.
Andrew Villeneuve, NPI: It also seems worth noting that Teresa Mosqueda’s opponent Kenneth Wilson did not poll any higher than the percentage that all of her challengers collectively received in the Top Two election (he got 31%). Teresa Mosqueda could still end up with most or nearly all of the undecided voters in the general election. We have characterized her as the favorite, but a victory for Wilson is also a possibility. Polls such as ours can’t predict the future, as you noted, but they can help us make better-informed guesses about what could happen by providing evidence for one or more plausible outcomes.
It’s important to remember that a whole week has already transpired since our survey finished fielding. We can presume the electoral dynamics have already changed a bit. It’s an election in progress. Last time, we saw two candidates get big surges of support after our last survey fielded. One was the candidate we’ve been discussing, Kenneth Wilson, who polled at 1% and ended with over 16%. The other was Sara Nelson, who polled at 11% and ended with 39.47%.
Ben Greenfield, Change Research: Exactly. So many dynamics can change in an instant: a candidate receives a key endorsement; a video goes viral, etc.
Andrew Villeneuve, NPI: Ben, thanks so much for this discussion. As I said, we’ve enjoyed working with you this year and look forward to continuing to do so. Any concluding thoughts for our readers as we approach the general election?
Ben Greenfield, Change Research: I’ll say what I said at the beginning: polls are just snapshots, not predictions. Anything could happen between now and when the final ballots are cast, and none of these races have been won or lost.
So if you’re eligible, vote!
Thanks again to Change Research for joining us to talk about the science behind our polling! If you have a question or concern we didn’t answer here, you can leave a comment or reach out to us privately using our contact form.
And, as Ben said, remember to return your ballot by 8:00 PM on November 2nd if you’re a Washington State voter. We have guidance on how to vote on those “advisory votes” you’ll see at the top of the ballot at VoteMaintained.org, and there are many organizational endorsement guides available if you’d like to take your research beyond the voters’ pamphlet statements.