Friday, November 05, 2010

Rasmussen Polls Were Biased and Inaccurate;
Quinnipiac, SurveyUSA Performed Strongly

Every election cycle has its winners and losers: not just among the candidates, but also the pollsters.

On Tuesday, polls conducted by the firm Rasmussen Reports — which released more than 100 surveys in the final three weeks of the campaign, including some commissioned under a subsidiary on behalf of Fox News — badly missed the margin in many states, and also exhibited a considerable bias toward Republican candidates.

Other polling firms, like SurveyUSA and Quinnipiac University, produced more reliable results in Senate and gubernatorial races. A firm that conducts surveys by Internet, YouGov, also performed relatively well.

What follows is a preliminary analysis of polls released to the public in the final 21 days of the campaign. Our process here is quite simple: we’ve taken all such polls in our database, and assessed how accurate they were, on average, in predicting the margin separating the two leading candidates in each race. For instance, a poll that had the Democrat winning by 2 percentage points in a race where the Republican actually won by 4 would have an error of 6 points.

We’ve also assessed whether a company’s polls consistently missed in either a Democratic or Republican direction — that is, whether they were biased. The hypothetical poll I just described would have had a 6 point Democratic bias, for instance.
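(For readers who want to follow along, here is a minimal sketch in Python of how a single poll might be scored under that definition; the sign convention, with margins expressed as the Democrat's lead so that a Democratic lean shows up as a positive bias, and the numbers are purely illustrative rather than FiveThirtyEight's actual code.)

def score_poll(predicted_margin, actual_margin):
    """Return (error, bias) for a single poll.

    Margins are expressed as Democrat minus Republican, in percentage points.
    error: absolute miss on the margin between the two leading candidates.
    bias:  signed miss; positive values indicate a Democratic lean,
           negative values a Republican lean.
    """
    bias = predicted_margin - actual_margin
    error = abs(bias)
    return error, bias

# The hypothetical poll from the text: Democrat ahead by 2, Republican wins by 4.
error, bias = score_poll(predicted_margin=2, actual_margin=-4)
print(error, bias)  # 6 6 -> a 6-point error and a 6-point Democratic bias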

The analysis covers all polls issued by firms in the final three weeks of the campaign, even if a company surveyed a particular state multiple times. In our view, this provides for a more comprehensive analysis than focusing solely on a firm’s final poll in each state, since polling has a tendency to converge in the final days of the campaign, perhaps because some firms fear that their results are an outlier and adjust them accordingly.

(After a couple of weeks, when results in all races have been certified, we’ll update our official pollster ratings, which use a more advanced process that attempts to account, for instance, for the degree of difficulty in polling different types of races.)

The 105 polls released in Senate and gubernatorial races by Rasmussen Reports and its subsidiary, Pulse Opinion Research, missed the final margin between the candidates by an average of 5.8 points, a considerably higher figure than that achieved by most other pollsters. Some 13 of its polls missed by 10 or more points, including one in the Hawaii Senate race that missed the final margin between the candidates by 40 points, the largest error ever recorded in a general election in FiveThirtyEight’s database, which includes all polls conducted since 1998.

Moreover, Rasmussen’s polls were quite biased, overestimating the standing of the Republican candidate by almost 4 points on average. In just 12 cases, Rasmussen’s polls overestimated the margin for the Democrat by 3 or more points. But they did so for the Republican candidate in 55 cases — that is, in more than half of the polls that the firm issued.

If one focused solely on the final poll issued by Rasmussen Reports or Pulse Opinion Research in each state — rather than including all polls within the three-week interval — it would not have made much difference. Their average error would be 5.7 points rather than 5.8, and their average bias 3.8 points rather than 3.9.
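(As a rough sketch of that check, assuming a firm's polls are stored as simple records with hypothetical 'race', 'date', 'error' and 'bias' fields rather than FiveThirtyEight's actual database schema, the comparison might look like this in Python.)

from statistics import mean

def summarize(polls):
    """Average absolute error and average signed bias for a set of poll records."""
    return mean(p["error"] for p in polls), mean(p["bias"] for p in polls)

def final_polls_only(polls):
    """Keep only the most recent poll released in each race."""
    latest = {}
    for p in polls:
        if p["race"] not in latest or p["date"] > latest[p["race"]]["date"]:
            latest[p["race"]] = p
    return list(latest.values())

# With a firm's surveys from the final 21 days loaded into window_polls:
# summarize(window_polls)                    # average error and bias, all polls
# summarize(final_polls_only(window_polls))  # same, using only the final poll per race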

Nor did it make much difference whether the polls were branded as Rasmussen Reports surveys, or instead, were commissioned for Fox News by its subsidiary Pulse Opinion Research. (Both sets of surveys used an essentially identical methodology.) Polls branded as Rasmussen Reports missed by an average of 5.9 points and had a 3.9 point bias. The polls it commissioned on behalf of Fox News had a 5.1 point error, and a 3.6 point bias.

Rasmussen’s polls have come under heavy criticism throughout this election cycle, including from FiveThirtyEight. We have critiqued the firm for its cavalier attitude toward polling convention. Rasmussen, for instance, generally conducts all of its interviews during a single four-hour window; speaks with the first person it reaches on the phone rather than using a random selection process; does not call cellphones; does not call back respondents whom it misses initially; and uses a computer script rather than live interviewers to conduct its surveys. These are cost-saving measures that contribute to very low response rates and may lead to biased samples.

Rasmussen also weights its surveys based on preordained assumptions about the party identification of voters in each state, a relatively unusual practice that many polling firms consider dubious, since party identification (unlike characteristics like age and gender) is often quite fluid.
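(To make the idea concrete, here is an illustrative Python sketch of weighting a sample to preordained party-identification targets, a simple form of post-stratification; the numbers are invented, and this is not a reconstruction of Rasmussen's actual procedure.)

def party_id_weights(sample_counts, target_shares):
    """Return a weight for each party-ID group: target share / sample share."""
    total = sum(sample_counts.values())
    return {party: target_shares[party] / (count / total)
            for party, count in sample_counts.items()}

# Invented numbers: the raw sample is 30% D / 45% R / 25% I, but the pollster
# assumes the state's electorate is 38% D / 36% R / 26% I.
weights = party_id_weights(
    sample_counts={"D": 300, "R": 450, "I": 250},
    target_shares={"D": 0.38, "R": 0.36, "I": 0.26},
)
# Each respondent's answers are then counted with the weight for his or her
# group; Democrats here get a weight of about 1.27, Republicans about 0.80.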

Rasmussen’s polls — after a poor debut in 2000, in which the firm picked the wrong winner in seven key states in that year’s presidential race — nevertheless performed quite strongly in 2004 and 2006, and were about average in 2008. But they were poor this year.

The discrepancies between Rasmussen Reports polls and those issued by other companies were apparent from virtually the first day that Barack Obama took office. Rasmussen showed Mr. Obama’s disapproval rating at 36 percent, for instance, just a week after his inauguration, at a point when no other pollster had that figure higher than 20 percent.

Rasmussen Reports has rarely provided substantive responses to criticisms about its methodology. At one point, Scott Rasmussen, president of the company, suggested that the differences it showed were due to its use of a likely voter model. A FiveThirtyEight analysis, however, revealed that its bias was at least as strong in polls conducted among all adults, before any model of voting likelihood had been applied.

Some of the criticisms have focused on the fact that Mr. Rasmussen is himself a conservative — the same direction in which his polls have generally leaned — although he identifies as an independent rather than Republican. In our view, that is somewhat beside the point. What matters, rather, is that the methodological shortcuts that the firm takes may now be causing it to pay a price in terms of the reliability of its polling.

* * *

The table below presents results for the eight companies in FiveThirtyEight’s database that released at least 10 polls of gubernatorial and Senate contests into the public domain in the final three weeks of the campaign, and which were active in at least two states.


The most accurate surveys were those issued by Quinnipiac University, which missed the final margin between the candidates by 3.3 points, and which showed little overall bias.

The next-best result was from SurveyUSA, which is among the highest-rated firms in FiveThirtyEight’s pollster rankings: it missed the margin between the candidates by 3.5 points, on average.

SurveyUSA also issued polls in a number of U.S. House races, missing the margin between the candidates by an average of 5.2 points. That is a comparatively good score: individual U.S. House races are generally quite difficult to poll, and the typical poll issued by companies other than SurveyUSA missed the margin between the candidates by an average of 7.3 points.

In some of the House races that it polled, SurveyUSA’s results were more Republican-leaning than those of other pollsters. But it turned out to have the right impression in most of those races — anticipating, for instance, that the Democratic incumbent Jim Oberstar could lose his race, as he eventually did.

YouGov, which conducts its surveys through Internet panels, also performed fairly well, missing the eventual margin by 3.5 points on average — although it confined its polling to a handful of swing races, in which polling is generally easier because of high levels of voter engagement.

Other polling firms that joined Rasmussen toward the bottom of the chart were Marist College, whose polls also had a notable Republican bias, and CNN/Opinion Research, whose polls missed by almost 5 points on average. Their scores are less statistically meaningful than those for Rasmussen Reports, however, because they released surveys in only 14 and 17 races, respectively, as compared with Rasmussen’s 105 polls.

