Now that the dust has settled, we can take a look at how each of the pollsters did and assess their performances in the past election.
While it is true that polls are only a snapshot in time, and so limited in their ability to predict the future, polls are judged by how closely they align with election results; there is no other way to judge them. It is certainly possible that these polls accurately tracked voting intentions on their field dates, and so cannot be faulted for missing shifts in intention over the final days or hours of the campaign. But a means of assessing pollsters is required, and comparing their final poll results to the actual results of the election is the only measuring stick we have. Pollsters themselves use that measuring stick, so we can certainly hold them to their own standards.
All polling firms have been assessed by their final poll results. Nanos Research has been assessed according to their final two-day report that was featured by CTV and The Globe and Mail. Crop, Innovative Research, and Environics are being assessed by their final polls, though they were all taken one week or more before the day of the vote. That may seem unfair, but these are the final numbers we have from these firms and they need to be assessed by some measure. Consider it a penalty for not releasing data closer to the end of the campaign.
Pollsters are assessed by their average error per party. In other words, being off by a total of 20 points across the five national parties combined is equivalent to an average error of 4.0 points per party. The pollster that performed best is highlighted in white. The polling firms are listed in order of the date of their final poll. We'll start at the national level.
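The "average error per party" metric described above can be sketched as follows. The result figures are the actual 2011 national totals; the poll figures are hypothetical, for illustration only:

```python
# Average error per party: mean absolute difference between a poll's
# numbers and the actual election result, across the five national parties.
# "poll" is a hypothetical final poll; "result" is the actual 2011 outcome.

poll = {"CPC": 37.9, "NDP": 31.6, "LPC": 19.9, "BQ": 5.8, "GPC": 3.8}
result = {"CPC": 39.6, "NDP": 30.6, "LPC": 18.9, "BQ": 6.0, "GPC": 3.9}

total_error = sum(abs(poll[p] - result[p]) for p in result)
avg_error = total_error / len(result)

print(round(avg_error, 1))  # → 0.8 (a total error of 4.0 points over 5 parties)
```

A poll off by a total of 20 points would, by the same arithmetic, score an average error of 4.0 points per party.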
See the bottom of this post for a discussion of how the margin of error could be taken into account in this assessment.
Note that Compas, though very far off overall, was the only polling firm to over-estimate Conservative support. All others under-estimated it by almost two points or more. The New Democrats were relatively well tracked, but only Abacus and Ipsos-Reid had the Liberals lower than their actual result.
Disregarding the results from Environics and Innovative, the best polling method turned out to be the online panel, with an average error of 1.4 points per party. Traditional telephone polling scored 1.6 points, while IVR stood at 2.0 points' worth of error per party.
Also note that those pollsters who do not prompt for the Green Party (Ipsos-Reid) or any party (Nanos Research) generally predicted the Green Party's eventual tally better than those who prompted for the Greens.
Now let's move to the regional assessments, going west-to-east and so starting with British Columbia.
Here, Léger Marketing and Compas scored best with an average error of two points per party. Harris-Decima, at an error of 2.5 points, also did well in this province.
Nanos Research and Ipsos-Reid, who both put the Liberals in the mid-20s, did worst here.
As at the national level, only Compas over-estimated Conservative support. All of the others under-estimated their support, mostly to the benefit of the Liberals and New Democrats. But results varied widely, with Liberal support being pegged at between 10% and 26%, while the NDP was scored at between 25% and 40%. Small sample sizes are partly to blame. Green support, on the other hand, was well tracked.
In Alberta, Angus-Reid did best with an average error of only 1.3 points. The next best was Abacus Data, at 2.1 points. Harris-Decima did worst.
The pollsters had an easier time discerning Conservative support in this province, with two of them being exactly right. Some (Ipsos-Reid) inflated Tory support while others under-estimated it (Harris-Decima, EKOS, Forum). The NDP and Liberals were well tracked, though only Abacus had them in single digits. All in all, the pollsters did well in Alberta.
Compas and Nanos, however, grouped Alberta with Saskatchewan and Manitoba. In these three provinces, Nanos bested Compas by an average of 0.7 points. Compas appeared to give some of the Liberal support to the Tories, while Nanos gave some of the Tory support to the NDP.
In the more usual grouping of Saskatchewan and Manitoba, Angus-Reid did best with an average error of 1.5 points, closely followed by Ipsos-Reid. EKOS struggled here.
Generally, the pollsters had an easier time pinpointing Conservative support here, with three of them within about a point of the final result. The New Democrats were also well tracked, with the biggest error being only 3.1 points among those polling firms active in the final days of the campaign. The Liberals were also well tracked, while only Ipsos-Reid correctly had the Greens at 3% in these two provinces.
Ontario was the most important province to poll correctly. Angus-Reid did the best here, with an average error of only 1.7 points. They were closely followed by Harris-Decima, while Ipsos-Reid did the worst among those who polled in the final days of the campaign.
But the problem here, as elsewhere, was in recording Conservative support. Again, only Compas over-estimated the Tories, while all others had them at 41% or lower. That error had major consequences for determining whether they would win a majority or a minority government.
The pollsters, except Compas, over-estimated NDP support, but only by a little. Some (EKOS, Harris-Decima, and Angus-Reid) came very close to the NDP's actual support, but others had them well over their result. The Liberals were generally well polled, however, with two pollsters on the money and another three within a point.
Quebec provided the biggest surprise on election night, but amazingly the pollsters did very well in the province. They did better than they did in Ontario, which usually has larger sample sizes.
Ipsos-Reid takes the crown in Quebec, but was closely followed by Nanos, Forum, and Léger. Compas and Crop did the worst, though they were still relatively close.
Three pollsters were almost exactly right in predicting the NDP's result, while four others had the NDP over 40%. The pollsters had a bit more trouble with the Bloc, with only Nanos and Forum indicating that they would end up in the low 20s. The pollsters did an excellent job in recording Liberal support, but did a little worse with the Conservatives. All in all, though, the pollsters did an excellent job in Quebec.
That was not the case in Atlantic Canada, which was the worst-polled region in the country. Granted, it is usually polled in very small numbers, but roughly the same number of people are usually surveyed in the Prairies, and there the pollsters did much better.
Harris-Decima was closest with a very good average error of 1.3 points, while Léger was close as well. But EKOS, Angus-Reid, Compas, and Ipsos-Reid were all off by six points or more, with seven pollsters putting the NDP in first in the region.
Results varied wildly, with the Conservatives pegged at between 26% and 44%, the NDP between 28% and 46%, and the Liberals between 11% and 30%, among pollsters active at the end of the campaign. Only Harris-Decima had the NDP below 30%, while most under-estimated Liberal support. Atlantic Canada was a wash.
And that brings us to our final ranking. The pollsters have been ranked on two scores: their national average error (recorded in the first chart at the top of this post) and their average regional error. Combining the two rankings anoints Angus-Reid as the best pollster in 2011, but also awards Harris-Decima, Léger Marketing, and Nanos Research with honourable mentions.
Forum Research and Abacus Data, in their first federal campaigns, did very respectably.
EKOS struggled while Compas was the worst pollster active in the final days of the campaign. Environics and Innovative Research might have done better had they polled closer to election day.
However, national results can be close merely because all of the regional errors cancel each other out. That was the case with Ipsos-Reid, which drops to 9th place in the average regional error ranking.
Instead, Léger Marketing takes the top spot on its regional results with an average error of 2.4 points. Harris-Decima and Angus-Reid were close behind with an average error of 2.7 and 3.0 points, respectively, while Abacus Data placed 4th on the regional ranking.
Regionally, of those active in the final days of the campaign, the online pollsters were off by an average of 2.9 points. Traditional telephone surveys were off by 3.7 points, while IVR surveys were off by 3.9 points. Though it could have been blind luck, the much-maligned online surveys performed best in the 2011 federal election.
Margin of Error Update
Some pollsters and commenters have brought up the issue of the margin of error, and whether that should be taken into account to judge the performances of the polling firms.
From a statistical standpoint, polling firms should be able to accurately predict the result of each party within the poll's margin of error. Though it is hardly the focus of any press release or media report, the margin of error is (or should be) always included in any poll report, and thus if a poll gets the result correct within the margin of error it is a technically accurate poll. But this benefits polling firms with smaller samples, who have larger margins of error within which to work.
If we use this standard, at the national level we would have to eliminate all but Ipsos-Reid, Abacus Data (assuming a random sample for this online pollster), and Nanos Research. These are the only three firms whose national poll findings were within the margin of error of the sample.
If we take it to the next level, assessing each polling firm by the margin of error for each individual party (i.e., the margin of error for the Green Party is not the same as the margin of error for the Conservative Party because of their different levels of support), we have to eliminate all but Nanos Research from this assessment. Ipsos-Reid would fail to pin down NDP support taking this margin of error into consideration, while Abacus would be wrong for the Green Party.
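The per-party margin of error mentioned above follows from the standard formula for a proportion in a simple random sample, which is why a party at 4% has a much tighter margin than a party at 40%. A minimal sketch, with an assumed sample size of 1,000 for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error (in percentage points) for a proportion p
    observed in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

n = 1000  # assumed sample size, for illustration only
print(round(margin_of_error(0.40, n), 1))  # → 3.0 (a party near 40%)
print(round(margin_of_error(0.04, n), 1))  # → 1.2 (a party near 4%)
```

This is also why smaller samples are easier to grade leniently: halving the sample inflates the margin by a factor of roughly 1.4.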
But why stop at the national level? We do not elect presidents - regional data is as important as, if not more important than, the national horserace numbers. If we bring it down to the regional level, then even Nanos has to be eliminated, as their result of 23.6% for the Liberals in British Columbia (against an actual 13.4%) falls outside that sample's 7.7% margin of error. No pollster would survive this assessment, even taking the 95% confidence level into account.
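That regional test amounts to nothing more than comparing the poll-vs-result gap against the sample's reported margin of error. Using the Nanos British Columbia figures cited above:

```python
# Nanos in BC: 23.6% for the Liberals in the final poll, 13.4% actual,
# with a reported margin of error of 7.7 points for that regional sample.
poll, actual, moe = 23.6, 13.4, 7.7

gap = abs(poll - actual)
print(round(gap, 1), gap <= moe)  # → 10.2 False (outside the margin of error)
```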
Just as problematic methodology will not be corrected by over-sampling, it can also be masked by the large margin of error of smaller samples. In the end, Nanos should be commended for having its national poll results within the margin of error, with honourable mentions also going to Ipsos-Reid and Abacus Data. But from my perspective, polling firms that conduct surveys with large samples, thereby giving us more reliable regional results, should not be thrown under the bus. Polls are reported for the consumption of the general public, and the general public is interested in how accurately the polls predict actual outcomes, both nationally and regionally. While I agree that some consideration should be taken for the margin of error in assessing the performance of the pollsters, which I have now done, I believe that my original assessment stands.