In response to a very good, but also somewhat dismissive, article by Joan Bryden of the Canadian Press on polls, pollsters, and methods, I decided to take a look at how the pollsters had performed in the last two federal elections.
I've looked at this before - it is how I came to rank the pollsters according to how they have performed in recent federal and provincial elections. I use this ranking to determine one of the three factors I use to weight the polls included in my model. The ranking takes into account the accuracy of the pollsters in calling the Alberta, Quebec, and Canadian elections of 2008, the Nova Scotia and British Columbia elections of 2009, and the 2010 New Brunswick election.
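The basic weighting idea can be sketched in a few lines. To be clear, this is an illustration only: the firm names, vote shares, and weights below are invented, and the model's actual three factors are not reproduced here.

```python
# Hypothetical sketch: weighting polls by pollster track record.
# All names and numbers are made up for illustration; this is not
# the actual weighting scheme used by the model described above.

polls = [
    # (pollster, Conservative share, reliability weight from past accuracy)
    ("Firm A", 36.0, 1.5),  # strong recent track record, counted more
    ("Firm B", 34.0, 1.0),  # average track record
    ("Firm C", 37.0, 0.5),  # weaker track record, counted less
]

# Weighted average: each poll contributes in proportion to its weight.
weighted = sum(share * w for _, share, w in polls) / sum(w for _, _, w in polls)
print(round(weighted, 2))  # -> 35.5
```

A more accurate pollster pulls the aggregate toward its numbers; a less accurate one still contributes, but with less influence.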
But this article by Joan Bryden touched a bit of a nerve. After all, I write about polls on a daily basis. Are they all such unreliable twaddle?
If they are, it is remarkable how all of the different pollsters are able to generally provide the same inaccurate results using different polling methods. There's a reason we don't see the Tories at 50% in one poll and 10% in another, and it has something to do with accuracy.
The pollsters rarely disagree with one another in a very strong way, and though we have no election to check their accuracy against, we do have the poll results of their peers. All of the pollsters have the Conservatives between 34% and 37%, and the Liberals between 25% and 29%. That is pretty consistent, and since they all agree with one another we have a good reason to believe they are all accurate within their margins of error. We don't have one poll putting the NDP ahead in Alberta and the Conservatives ahead in Quebec - though the regional results vary, they usually tell the same general story.
The article does make a very good point about margins of error, however. We should all take more care in reporting on results in the context of the MOE. I try to do that as often as I can, but perhaps I should take more care in the future.
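For reference, the familiar "accurate to within 3.1 percentage points, 19 times in 20" disclaimer comes from the standard margin-of-error formula at 95% confidence, for a sample of roughly 1,000 respondents and a worst-case 50% share. A quick sketch (the function name is mine):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random
    sample of n respondents reporting a share p (worst case: p = 0.5)."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000), 1))  # -> 3.1
```

Note that the error shrinks only with the square root of the sample size: quadrupling the sample merely halves the margin, which is why most national polls settle for about a thousand respondents.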
But we shouldn't dismiss polls because they use imperfect methods. As the article points out, no method is without its flaws. But that is why a site like ThreeHundredEight is useful. By aggregating the polls and weighting them, we can cancel out some of the problems of each individual poll.
The results of the last two elections show why polls certainly aren't unreliable, and why aggregating the results can be more accurate. We'll start with the 2006 federal election. In the chart, E = Election Day, so E-1 means the poll was completed one day before the election. All of this data is from the Wikipedia pages for the 2006 and 2008 elections.

In 2006, the Conservatives, NDP, and Bloc performed slightly better in polls taken within five days of the vote. The Liberals fared worse. Nanos was the closest, with an average margin of error of 0.3 points - that is, its numbers differed from the final result by an average of 0.3 points per party. EKOS was next best, with an average MOE of 1.44 points, while Strategic Counsel and Ipsos-Reid were not far behind.
The average MOE of these four polls was 1.23 points, but when we average out the polling results we get an MOE of only 1.16, bettered only by Nanos. This is an indication of why aggregating poll results tends to give a more accurate result than most individual polls. Nate Silver on FiveThirtyEight has also written about this.
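The mechanics are simple enough to sketch. The poll and election numbers below are made up for illustration - they are not the actual 2006 figures - but they show how the unweighted average of several polls can land closer to the final result than most, or even all, of the individual polls:

```python
# Illustrative sketch: why averaging polls can beat individual polls.
# All figures below are invented for demonstration purposes; they are
# not the real 2006 election results or poll numbers.

election = {"CPC": 36.3, "LPC": 30.2, "NDP": 17.5, "BQ": 10.5}

polls = {
    "Poll A": {"CPC": 37.0, "LPC": 29.0, "NDP": 18.0, "BQ": 10.0},
    "Poll B": {"CPC": 34.0, "LPC": 32.0, "NDP": 17.0, "BQ": 11.0},
    "Poll C": {"CPC": 38.0, "LPC": 28.5, "NDP": 16.5, "BQ": 11.5},
}

def avg_error(poll, result):
    """Mean absolute gap, in points, between a poll and the final result."""
    return sum(abs(poll[p] - result[p]) for p in result) / len(result)

for name, poll in polls.items():
    print(name, round(avg_error(poll, election), 2))

# Simple aggregate: the unweighted per-party mean of the three polls.
aggregate = {p: sum(poll[p] for poll in polls.values()) / len(polls)
             for p in election}
print("Aggregate", round(avg_error(aggregate, election), 2))
```

Each poll's individual errors partly point in opposite directions, so they cancel in the average; that cancellation is what a site like ThreeHundredEight is counting on.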
In 2008, the pollsters were not as close. For whatever reason, the Conservative vote was hard to pin down. Either the polls were somewhat inaccurate, or voting intentions shifted in the last few days. The Dion interview, released at the very end of the campaign, may have had an effect.

Nevertheless, the pollsters were not very far off. The best result was Angus-Reid's, off by an average of only 0.88 points. EKOS, again, was off by 1.44 points, while Ipsos-Reid, Nanos, and Harris-Decima averaged an MOE of about 1.8 points. The worst performers were the Strategic Counsel and Segma, but both had an average MOE of less than the usual +/- 3.1.
The average MOE was 1.75 points, but the MOE on the average poll result was only 1.38 points, better than all pollsters except Angus-Reid.
Note that Angus-Reid uses an online polling methodology, while Nanos (the best result in 2006) uses the traditional telephone survey.
Unfortunately, ThreeHundredEight wasn't active during the 2008 federal election, so I don't know if I would have done better than a simple average. However, I was active during the 2008 Quebec election.

Averaging out the results of the last three polls conducted for that provincial election (by CROP, Léger, and Angus-Reid, the only pollsters active in the last week) gives an MOE of 1.9 points, still quite good. My projection, however, was off by an average of only 1.1 points.
UPDATE: A comment in response to this post has gotten me thinking about the value of pre-writ polls. Are we just gauging the horse race for our own score keeping? Political parties run their own internal polls in pre-writ periods to get a handle on what Canadians are thinking. If this is information they find valuable enough to pay for, and are partly basing their own decisions on the results of these polls, shouldn't we have access to the same kind of information? If the NDP decides to support the next budget, will it be because they truly believe it to be a positive budget, or will it be because they fear an election? Does that matter? Well, that's a question for individuals to answer for themselves.
Polling faces a lot of challenges today: how to reach cell phone users, how to handle online polls, how to deal with low response rates. But that doesn't mean polls are useless or unreliable. On the contrary, they are both useful and reliable - within context, of course. And the media does have a responsibility to provide that context. But let's not throw the baby out with the bathwater.