"Past Performance No Guarantee of Future Results"
It's a slow Saturday today for some reason. Laundry is in the dryer and I'm watching Giada De Laurentiis cook up some cool Thanksgiving leftover recipes, to be followed by The Barefoot Contessa. Any day the TV is not locked on the Cartoon Network is a good one.
Early in the week, professional pollster Mark Blumenthal promised a look at what exactly happened in Ohio on Election Day and why the renowned Columbus Dispatch mail-in poll went horribly wrong on the statewide issues that were on the ballot.
Caveat: This is just a summary, because Blumenthal really looked at this in-depth. If you're interested in the gory details, read his entire post.
You normally hear phrases like "past performance is no guarantee of future results" when you're reading a financial prospectus, but Blumenthal says the same caveat applies to the Dispatch poll. Here's why:
While we may never know the exact reasons why, it is clear in retrospect that problems stem from the very different challenges the Dispatch poll faced this year and the modifications to its methodology made in response.
Why is methodology so important? Because you can't just slap together a list of questions for people to answer. The questions developed for a survey must have two qualities: reliability and validity. If a question is reliable, it yields consistent results; if it is valid, it measures what it's supposed to measure. The metaphor typically used to explain the relationship is a target: a reliable question groups its shots tightly, a valid question centers them on the bullseye, and a good question does both.
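To make that distinction concrete, here's a tiny illustration of my own (not from Blumenthal's post), with made-up numbers: three hypothetical survey questions are "administered" repeatedly against a known true value, where low spread stands in for reliability and a mean close to the true value stands in for validity.

```python
# Toy sketch: reliability shows up as low spread across repeated measurements,
# validity as a mean close to the true value being measured.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_value = 50.0      # the quantity the question is supposed to measure
n_repeats = 1000       # imagined repeated administrations of the question

questions = {
    "reliable and valid":      rng.normal(loc=50.0, scale=1.0, size=n_repeats),
    "reliable but not valid":  rng.normal(loc=60.0, scale=1.0, size=n_repeats),
    "unreliable (scattered)":  rng.normal(loc=50.0, scale=10.0, size=n_repeats),
}

for name, results in questions.items():
    bias = results.mean() - true_value   # distance from the bullseye (validity)
    spread = results.std()               # consistency across repeats (reliability)
    print(f"{name:24s}  bias={bias:+6.2f}  spread={spread:5.2f}")
```

Now, back to the poll.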
Blumenthal offers the following reasons why the Dispatch poll failed in 2005:
1) It has always been less accurate in statewide issue races than in candidate contests.
2) It had never before been used to forecast an off-year statewide election featuring only ballot issues.
3) It departed from past practice this year by including an undecided option and by not replicating the actual ballot language, abandoning two practices that had helped explain the poll's past accuracy.
Blumenthal says that while the Dispatch typically offers an undecided option on its other polls, it drops that option on the final pre-election poll in order to better replicate the actual voting experience. This time, however, according to an Election Day email exchange he had with Darrel Rowland of the Dispatch, the paper was concerned that, with the state issues involved, "not to include that could have greatly distorted Ohioans' stance on these issues."
As for the length and complexity of the ballot language, Blumenthal says, "Imagine the effort required by the voter to try to digest all of this information (which apparently appeared in very small print on the ballot), or just the impression left by seeing all that verbiage, either in the voting booth or in the fifteen-page "Issues Report" available from the Secretary of State's office." Just how long? Issue 1 was 606 words long; Issue 2, 210 words; Issue 3, 932 words; Issue 4, 616 words; and Issue 5, 351 words, for a total of 2,715 words.
Most importantly, the Dispatch poll did not attempt to replicate the ballot language; respondents were given a greatly condensed version of each issue. Doing so may have introduced two important errors. First, Blumenthal says, the poll did not replicate the experience real voters had when confronting over 2,700 words of ballot text. Second, by simplifying the concepts involved, it may have unintentionally yet artificially framed the choice around the substance of the proposals (vote by mail, redistricting reform, etc.). The real campaign framed those choices around more thematic arguments that tended to lump all the proposals together (which side would really fight corruption, improve democracy, or provide "loopholes" for special interests, etc.).
According to Blumenthal, "While I cannot cite academic research on this point, the nearly universal experience of those who follow initiative and referenda campaigns is that when confused or in doubt, regular voters will default to the "no" vote. That is why support for ballot issues almost always declines in tracking polls as Election Day approaches."
4) Its response rate this year was roughly half that obtained in recent elections, including a similarly low-turnout election in 2002.
One of the counter-intuitive aspects of the Columbus Dispatch poll is that it seems to do a better job of getting a representative sample of likely voters despite having a lower response rate than comparable telephone surveys conducted since 1980. This may be because telephone surveys do worse at identifying likely voters: "the social desirability of being an active participant in the democratic process often leads to an overreporting" of likelihood to vote, past voting, and interest in politics.
So while Dispatch survey respondents have historically been more representative of the voting electorate than the "likely voters" identified by telephone surveys, the evidence, though incomplete, suggests this advantage did not hold in 2005. Self-identified Democrats outnumbered Republicans by 10 percentage points, even though "the returns [typically] lean a little Republican, which reflects Ohio's recent history of tilting a bit toward the GOP." The geographic distribution of respondents may also have been off (presumably a bit heavier in Democratic areas). When that happens, the responses are usually weighted to reduce the discrepancy, but Blumenthal suspects that "weighting by party or region probably would not have reduced the discrepancy significantly. These differences are clues to what may have been a 'response bias' that was related more to vote preference than to political party."
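As a rough illustration of the kind of weighting Blumenthal is talking about, and why it might not have helped, here's a minimal Python sketch with entirely invented party shares and support numbers: each party group is scaled so the sample matches an assumed target electorate, and because the "yes" rate is fixed within each group, the adjustment barely moves the overall estimate.

```python
# Minimal sketch of weighting a sample by party to match an assumed target
# electorate. All shares and support figures below are hypothetical.
sample_share = {"Dem": 0.45, "Rep": 0.35, "Ind": 0.20}   # hypothetical returned ballots
target_share = {"Dem": 0.40, "Rep": 0.40, "Ind": 0.20}   # hypothetical expected electorate
yes_within   = {"Dem": 0.55, "Rep": 0.48, "Ind": 0.50}   # hypothetical "yes" support per group

# Weight each group by how under- or over-represented it is in the sample.
weights = {p: target_share[p] / sample_share[p] for p in sample_share}

unweighted = sum(sample_share[p] * yes_within[p] for p in sample_share)
weighted = (
    sum(sample_share[p] * weights[p] * yes_within[p] for p in sample_share)
    / sum(sample_share[p] * weights[p] for p in sample_share)
)

print(f"unweighted 'yes' estimate: {unweighted:.3f}")   # ~0.52
print(f"party-weighted estimate:   {weighted:.3f}")     # ~0.51
```

If the real problem was that likely "no" voters simply didn't return the survey within every party and region, no amount of reweighting by party or region recovers them, which is the response bias Blumenthal describes.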
This year, the Dispatch poll's response rate fell off significantly: only 11% for the poll conducted in late September and 12% for the final survey. Turnout alone does not explain the difference, so Blumenthal suggests two possibilities. First, it was a mail-in survey about voting by mail: "Issue 2 was a proposal to make it easier to vote early or by mail in Ohio. So we have a survey that was, at least in part, about voting by mail. Wouldn't we expect a higher response rate among those who want to vote by mail on an election survey that attempts to replicate voting...by mail?" Second, uncertainty and confusion translate into non-response: voters who were confused or uncertain simply passed on completing the survey, while voters who were familiar with the issues and supported them were more likely to complete it.
5) The timing of the poll would have missed any shifts over the final weekend, and the final poll showed support trending down for all four initiatives. Meanwhile a post-election survey showed that nearly half the "no" voters made up their minds in the days after the Dispatch poll came out of the field.
After the election, 1,533 respondents who reported casting ballots in the special election were interviewed November 9-13. Among those who said they "generally voted no" on the reform issues, nearly half (44%) "decided to vote no in the closing days of the campaign" rather than having been against them all along [emphasis added]. The actual ballot language may have helped make the case for 'no,' Blumenthal says. One of the 'no' campaign's central messages was that the reform proposals would open "gaping loopholes for special interests." Imagine what conclusions a voter might reach on encountering all that fine print.
Finally, Blumenthal addresses the question of fraud. Bob Fitrakis and Harvey Wasserman posted an article on their website FreePress.org, concluding that either the "uncannily accurate" Dispatch poll was wrong, or "the election machines on which Ohio and much of the nation conduct their elections were hacked by someone wanting to change the vote count."
Blumenthal asks, "Were the results surprising given the survey's history? Yes. Were they 'staggeringly impossible?' Of course not. Fitrakis, Wasserman, and Brad Friedman seem to think polls (or at least those polls that produce results they like) are imbued with magical powers that make them impervious to error. They are not." Furthermore, 82 of Ohio's 88 counties cast their ballots last week on election equipment that left a paper trail.
Go read the entire post. It may take a couple of passes to digest, but it is very informative.