I wrote about the issue of turnout, and in particular the profile of those who do vote, in my article for The Globe and Mail today. I suggest you read it, as I am only going to summarize here some of the points that are made in that piece.
Both the Ipsos-Reid and Insights West polls were able to replicate the final vote tally, suggesting that their polls are broadly reflective of actual B.C. voters.
Note: An earlier version of this post said that Ipsos-Reid's exit poll showed turnout by age. That was incorrect - they weighted their exit poll according to turnout in the 2009 B.C. election.
If we compare Ipsos-Reid's final poll of the campaign to their exit poll, we see that the Liberals stole votes from the New Democrats in every age category.
The 18-to-34-year-olds who voted had similar views to the 35-to-54-year-olds who were polled on the eve of the election, and the 35-to-54-year-olds who voted had similar views to the 55 and older respondents of the final poll. It would seem that people who vote are more like the broader, older population than those who do not. Anecdotally, that makes a lot of sense to me in a low turnout election.
So did the pollsters get the B.C. election wrong? Yes and no - they may have been in the ballpark when it came to the general population, but they failed to correct for the voting population. In the end, the polls were meant to determine likely outcomes of the campaign. To put it in the context of the market research that is the bread-and-butter of polling firms, the failure to identify voters and how they felt about the campaign was akin to a failure to identify a company's likely customer base and how it felt about an advertising campaign. A poll is of little use to a diaper company if it identifies the shopping habits of childless adults, especially adults who have no intention of having children.
A side note: Ipsos-Reid posts its weighted and unweighted sample sizes in all of its polls, which makes it possible to do this sort of analysis. Other pollsters absolutely must do the same. When I looked at Forum's last poll, the same amount of information was not available, but it was possible to reverse-engineer some of their weightings. It seems that Forum uses a weighting scheme similar to Ipsos-Reid's (as they should, if they are trying to match the census data). If they hadn't, however, and had merely reported the voting intentions of those who responded to their poll, they would have had the Liberals at 43% to 41% for the NDP. Forum seems to report its unweighted sample sizes, and from that we can determine that 66% of Forum's final sample was over the age of 55. That appears to be far too much. But perhaps, in some cases, the people who answer a telephone poll will more closely resemble the people who vote. This does make some intuitive sense, as these days it requires a degree of civic duty and spare time to submit to a random telephone poll (and to vote), whereas an online poll might attract a different kind of person.
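To illustrate what this kind of reverse-engineering looks like, here is a minimal sketch of how an age-skewed sample gets pulled back toward the census. The respondent counts, vote shares and census targets below are invented for the example - only the 66%-over-55 skew echoes Forum's reported sample sizes.

```python
# Hypothetical unweighted respondents per age group, with each group's
# support for one party. 660 of 1,000 respondents (66%) are 55 or older.
sample = {           # group: (respondents, party support within group)
    "18-34": (100, 0.35),
    "35-54": (240, 0.42),
    "55+":   (660, 0.46),
}

# Assumed census targets for the adult population (illustrative only)
census = {"18-34": 0.28, "35-54": 0.37, "55+": 0.35}

total = sum(n for n, _ in sample.values())

# Unweighted support: every respondent counts equally
unweighted = sum(n * share for n, share in sample.values()) / total

# Weighted support: each group counts in proportion to its census share
weighted = sum(census[g] * share for g, (_, share) in sample.items())

print(f"unweighted: {unweighted:.1%}, weighted: {weighted:.1%}")
```

Because the over-represented 55+ group is also the friendliest to the party in this example, weighting to census pulls the headline number down - which is exactly why publishing both figures lets readers see how much the weighting is doing.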
But turnout was not the only factor contributing to the miss. Alberta had a dramatic change of heart in the final days and hours of the campaign, and that was responsible for some of the error in polling there. In British Columbia, there might have been a more modest change of heart that amplified the errors made in identifying likely voters.
According to the Insights West post-election poll, 11% of voters made their final decision on election day, including 12% of B.C. Liberal voters. The poll also found that 17% of Liberal voters had considered voting NDP at some point before casting a ballot - enough to drop the Liberals to about 36% or 37% support if all of them had stuck with the NDP. That just happens to be the consensus level of support the Liberals had going into election day.
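The arithmetic behind that estimate is simple enough to sketch, assuming the B.C. Liberals finished with roughly 44% of the vote:

```python
# Back-of-the-envelope check of the switcher estimate above.
liberal_result = 0.44   # approximate final B.C. Liberal vote share
considered_ndp = 0.17   # Liberal voters who had considered the NDP

# If all of those voters had stuck with the NDP, the Liberals
# would have been left with:
without_switchers = liberal_result * (1 - considered_ndp)
print(f"{without_switchers:.1%}")  # about 36.5%
```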
The Ipsos-Reid exit poll echoed the Insights West poll in finding that 11% of voters had made up their minds on May 14, and the 9% result they had for the B.C. Liberals is quite close to the Insights West result as well.
The exit poll found that 11% of Liberal voters had intended to vote NDP during an earlier part of the campaign, a not dissimilar result to Insights West's post-election poll. But this is not a magic bullet, as 8% of NDP voters said they intended to vote Liberal at some point in the campaign. It sort of cancels things out, but the fact that 33% of Green voters said they had at one point considered voting NDP may put the balance back into a shifting electorate that swung a few points' worth in the final 36 hours of the campaign. It is not enough to explain the total error, but combined with the turnout issue it might explain much of it.
Why did the switch occur? Insights West seems to suggest that a lot of it had to do with the perception that the Liberals were better on the economy and had run a better campaign, as well as a lack of trust in Adrian Dix. These issues were identified as one of the contributing factors in their decision to move from the NDP to the Liberals by more than one-third of the switchers. Ipsos-Reid also found that the issues of debt, the economy, and government spending were major vote drivers for the B.C. Liberals.
Another factor that cannot be ignored, and one that is especially problematic for the polling industry as a whole, is the expectation that voters had going into their polling stations. The polls set the tone for the campaign, but misled voters with potentially significant consequences.
Even among Liberal voters, only 22% thought they were casting a ballot for the party that would form the next majority government, while 60% thought they were voting for a losing cause or, at best, a minority government. New Democrats went in with much more confidence, of course, with 75% expecting a majority and only 1% thinking the Liberals would pull off the win.
More significant, however, may be what the British Columbians who cast a ballot for the Greens and Conservatives thought would happen. A majority of Greens thought the NDP would win outright, while a plurality of Conservatives thought the same. A significant number of Greens and Conservatives thought the next government would be a minority one, giving a Green or Conservative MLA a lot of influence. But only 11% of Conservatives and 2% of Greens thought the Liberals would win a majority. Considering that, according to the poll, 72% of Green voters and 87% of Conservative voters thought that Christy Clark did not deserve to be re-elected, would they have voted differently if the polls were predicting a majority victory by the Liberals?
This is why the pollsters have an important responsibility to get their election calls right, which means a greater emphasis on ensuring they have proper models in place to estimate turnout. But according to most pollsters, that is a huge challenge.
A quick note on who had the best internal polls. Clearly, the New Democrats were not well-served by the polls since they expected victory. The B.C. Liberals may have been better served, but we cannot know for sure if their internal pollsters are highlighting where they went right and not mentioning where they went wrong. This is one of the reasons why public polls are needed, as otherwise all we'd know about the state of the race is the leaked (and spun) internal numbers from each of the campaigns.
But in terms of methodology, there does seem to be a clear difference. The NDP appears to have been relying on province-wide polling and, like those in the media, was deceived by the numbers. The Liberals, on the other hand, appear to have identified some 30 swing ridings and polled them furiously, ignoring those ridings considered safe or not in play. From these polls, they were able to identify more precisely how the campaign was going. This seems to have been the successful method used by the Progressive Conservatives in Alberta as well - tighter, deeper polling, the sort that media outlets cannot afford.
What the rest of us can do about it
What are we to do, then? We cannot hope to have the sort of in-depth polling that political campaigns have since the newspapers and television networks that commission polls cannot afford anything of that quality, while the polls that are given away for free also have to be done on the cheap.
If disclosure and transparency increased, those of us who are interested and have the time could parse through the data more closely and derive whatever information we are looking for. That is one thing that could easily be done, is already done elsewhere, and should be required in Canada. If any government MPs are reading this, a change to the Election Act would be appreciated!
But for myself, I have to take a different approach to the polls and the forecasts that are published here. For the next campaign - which looks likely to be in Nova Scotia, which should (hopefully) be a less problematic one as the Halifax-based Corporate Research Associates have a good track record - I am considering what new methods I can employ.
The seat projection model needs nothing more than minor tweaking, the sort that takes place after each election when more information is available (i.e., the performance of independents). The focus needs to be on ensuring the numbers plugged into the model are more accurate.
But is there anything I can do? Estimating these sorts of swings and error levels before they occur is virtually impossible, and I am just as likely to miss it one way and make the projections worse as I am to get it right. Instead, I will hope that the pollsters do improve their turnout models and I will report the aggregates without any adjustment - a forecast that is entirely based on what the polls are saying.
That will be the base, but I need to have some means to give readers an idea of how the polls could be wrong. In the B.C. election, I calculated those estimates with the polls themselves. The projection was based on the estimated margin of error of the samples included in the projection. The forecast was based on the volatility in the polls.
Leaning so heavily on the polls themselves seems to be a bad idea, considering the problems that have occurred in the last three provincial elections. Instead, I will try to estimate the likely error of the polls based on how the polls have been wrong before, showing what an average over- and under-estimation has been for parties in similar situations in other elections (i.e., incumbent governments). That should provide a good indication of how much error we can expect in the polls, and I will also include a best guess as to whether an over- or under-estimation is more likely.
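As a sketch of what that historical-error approach could look like, here is the calculation with a hypothetical record of signed polling errors for incumbent governments (all numbers invented):

```python
# Signed polling error for the incumbent in five hypothetical elections,
# measured as (final poll average - actual result) in percentage points.
# Negative values mean the polls underestimated the incumbent.
incumbent_errors = [-7.0, -6.5, -2.0, 1.0, -1.5]

n = len(incumbent_errors)
mean_error = sum(incumbent_errors) / n                      # average bias
mean_abs_error = sum(abs(e) for e in incumbent_errors) / n  # typical miss

# Since error = poll - actual, a bias-corrected best guess is the poll
# average minus the historical mean error, bracketed by the typical miss.
poll_average = 40.0  # hypothetical incumbent poll average
best_guess = poll_average - mean_error
low, high = best_guess - mean_abs_error, best_guess + mean_abs_error
print(best_guess, (low, high))
```

The band around the best guess is what the high/low forecast would show; the sign of the historical mean error is the "best guess as to whether an over- or under-estimation is more likely."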
I'd also like to develop a turnout model that can be included in addition to the high/low and poll forecasts. At this stage, I'm favouring something simple: dropping the 18-34s from every poll and doubling the weight of the 55+ group. I will look into this more deeply, but I suspect it will provide better results in the majority of cases. Anything more complicated is probably not necessary (the simpler a model can be, the better - usually).
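A rough sketch of that adjustment, with invented age-group numbers chosen only to show the mechanics: the 18-34 group gets a weight of zero and the 55+ group a weight of two.

```python
# Hypothetical poll: each age group's share of the sample and its
# support for two parties (all numbers invented for illustration).
poll = {  # group: (share_of_sample, {party: support within group})
    "18-34": (0.28, {"Lib": 0.35, "NDP": 0.50}),
    "35-54": (0.37, {"Lib": 0.43, "NDP": 0.42}),
    "55+":   (0.35, {"Lib": 0.48, "NDP": 0.38}),
}

# The simple turnout model: drop the youngest group, double the oldest
turnout_weight = {"18-34": 0.0, "35-54": 1.0, "55+": 2.0}

def raw_support(party):
    # Straight weighted average across the full sample
    return sum(share * shares[party] for _, (share, shares) in poll.items())

def adjusted_support(party):
    # Re-weight each group by its assumed turnout propensity
    num = sum(share * turnout_weight[g] * shares[party]
              for g, (share, shares) in poll.items())
    den = sum(share * turnout_weight[g] for g, (share, _) in poll.items())
    return num / den

print(f"raw: Lib {raw_support('Lib'):.1%}, NDP {raw_support('NDP'):.1%}")
print(f"adjusted: Lib {adjusted_support('Lib'):.1%}, "
      f"NDP {adjusted_support('NDP'):.1%}")
```

In this invented example, the adjustment flips a narrow NDP lead in the raw numbers into a clear Liberal lead - the kind of shift that would have mattered in B.C.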
Almost every pollster that has written a post-mortem and with whom I've talked or corresponded, whether or not they were active in British Columbia, has identified the hit to the polling industry's reputation that the last few elections have inflicted as a major problem. Some are upset that polls using different methodologies are causing people to paint the entire industry with the same brush.
I suspect that because of these concerns, those pollsters who will be active in future campaigns will invest extra time and money into ensuring their polls are right. They need to rehabilitate their industry's reputation, and those that missed the call in Alberta and B.C. need to rehabilitate their own. The incentive of proving one's own methodology to be accurate will be even stronger, particularly for those that were not active in B.C. For that reason, I am optimistic that the next set of elections will have better polling. Perhaps naively so.