Last night was a very bad one for Adrian Dix and the New Democrats, who expected victory as much as the pollsters did. And with good reason: a stabilizing, maybe even growing, lead over the B.C. Liberals with hours to go before the polls opened. Instead, British Columbians collectively woke up, changed their minds, and swung about 13 points towards Christy Clark. Or, more likely, something disastrously wrong occurred in the polling industry.
I wrote about the implications for the four party leaders for The Huffington Post Canada, and took a look at why the polls went wrong for The Globe and Mail.
Why did they go wrong? I have no explanation this morning. In Alberta, there was the late swing. There was the novelty of the Wildrose Party. There was the relative lack of polling in the final days. There was the inexperience of the pollsters who were active. There was the far better-oiled organization of the Progressive Conservatives.
In British Columbia, there was no indication of a late swing. If anything, there was a sign that Clark's momentum had reversed itself. The New Democrats were not an unknown quantity. There was polling being done as late as Monday. There was the experience of two pollsters with long and successful histories in British Columbia. There was the much-vaunted GOTV organization of the NDP. And yet all the polls said the New Democrats would win, and all the polls were wrong.
(Note: the chart below includes the average standard deviation between the polls from each pollster, meant as an attempt to determine whose numbers were fluctuating the most. It seems like a moot point now.)
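The average standard deviation described in the note above could be computed along these lines. This is a minimal sketch: the pollster names and vote shares below are invented for illustration, not the actual B.C. polling data.

```python
from statistics import pstdev

# Hypothetical poll results: pollster -> list of (Lib, NDP, Green) shares.
# These numbers are illustrative only, not the actual B.C. polling data.
polls = {
    "Pollster A": [(28, 45, 12), (30, 43, 13), (27, 46, 11)],
    "Pollster B": [(33, 41, 10), (31, 42, 12)],
}

def average_std_dev(series):
    """Average the standard deviation of each party's support
    across one pollster's series of polls."""
    parties = list(zip(*series))  # one tuple of values per party
    return sum(pstdev(p) for p in parties) / len(parties)

for pollster, series in polls.items():
    print(f"{pollster}: {average_std_dev(series):.2f}")
```

A pollster whose numbers bounce around from poll to poll gets a higher score, which was the point of the chart: identifying whose numbers were fluctuating the most.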
My vote projections did second-best, mostly because I had a mechanism for diluting the support of the Greens and Conservatives. On the Liberals and NDP, I was as wrong as anyone else.
The forecasted ranges captured every vote and seat result with the exception of the NDP. Those ranges are designed to account for an Alberta-level event, but even so they were unable to predict that the New Democrats would underperform in the popular vote to such a great degree. The ranges, which imply that the polls should always be treated as potentially spectacularly wrong, were apparently a good idea. But if ranges of this size need to be included in every election, the usefulness of the forecasting model is virtually zero. In even a modestly close election, they will always span almost the entire spectrum, since most ridings come into play at that point.
I have not had the time to input the actual vote results into the seat projection model yet, as I need to calculate the regional vote totals. I will do so as soon as possible. I suspect that the projected results will end up being very close to the actual results, as they have been in almost all of the 10 elections I have worked on in the past. I will write a fuller post-mortem in the coming days.
There is no question that seat projection models like mine work. They are an effective way to translate poll results into seats. This is not voodoo magic; it is a rather simple endeavour. The challenge is to be wrong by the smallest amount possible, which is the best that forecasters can hope for. But the models are only as good as the available information.
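The basic translation of polls into seats can be sketched with proportional swing: scale each party's previous result in every riding by the ratio of its current polled share to its last-election share, then award each riding to the leading party. This is a generic sketch of the technique, not the actual model used on this site, and every number below is invented for illustration.

```python
# Generic proportional-swing seat projection (a sketch, not the actual model).
# All vote shares below are invented for illustration.

last_election = {"LIB": 45.8, "NDP": 42.1, "GRN": 8.2}   # province-wide, last time
current_polls = {"LIB": 38.0, "NDP": 45.0, "GRN": 9.0}   # current polling average

# Hypothetical riding-level results from the previous election.
ridings = {
    "Riding 1": {"LIB": 50.0, "NDP": 40.0, "GRN": 10.0},
    "Riding 2": {"LIB": 35.0, "NDP": 55.0, "GRN": 10.0},
}

# Each party's province-wide swing, expressed as a ratio.
swing = {p: current_polls[p] / last_election[p] for p in last_election}

seats = {p: 0 for p in last_election}
for riding, result in ridings.items():
    # Apply the province-wide swing to the riding's previous result.
    projected = {p: share * swing[p] for p, share in result.items()}
    winner = max(projected, key=projected.get)
    seats[winner] += 1

print(seats)
```

The simplicity is the point: the model itself has few moving parts, so when the inputs (the polls) are off by several points, the seat totals are off with them.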
I have to admit that my confidence in the quality of that information - polling - has been profoundly shaken. Alberta was an aberration, and there was some good reason why it occurred (which I now have doubts about). Quebec was only a minor flub, which can be attributed in part to superior Liberal organization (or can it?). But this is a complete disaster. There is no reason why this should have happened, which leads me to believe that it happened because the pollsters did a bad job.
It might not be their fault exactly. Perhaps it is no longer possible to consistently and repeatedly build a sample that is reflective of the population. Can online panels be reliably effective when they aren't national? Work will have to be done to determine why this is happening and how it can be avoided. I have no doubt that the pollsters will eventually tackle the new challenges that they face. The question is how long it will take and whether it can be done in a country like Canada.
It puts into question the validity of the work I do. I write about polls every day for this site, for The Globe and Mail, for The Huffington Post Canada, and for The Hill Times. I give radio and television interviews about them. It is my full-time job. I've always approached it as a professional and have tried to provide insightful analysis of polling, separately from my role as a forecaster. No one in Canada who doesn't work for a polling firm writes about polls as much as I do.
How can I credibly continue to do so when I myself doubt that the results are reliable? While I was shocked when I saw the results last night, a part of me was not surprised that the polls had gotten it wrong all over again. If I go into every election assuming that disaster is more likely than triumph, what is the point?
This site was meant to be a way to cut through the confusion in polling and give a good idea of what, as a whole, the polls are saying. The site can still do that, but if what the polls are saying is not reflective of reality, what use is it?
My projection was wrong because the polls were wrong. Again. I am sorry that it was so. I can blame the pollsters for providing me with unreliable information, but I am nevertheless responsible for what is posted here, for the defense of polling I have mounted for the last few years, and for whatever confidence I expressed when analyzing the numbers in an attempt to inform readers about the state of the race in British Columbia and elsewhere. I apologize for that. Where do we go from here?