Friday, May 31, 2013

Liberals lead in Ipsos-Reid federal poll, Harper hurt by Wright/Duffy affair

Ipsos-Reid released its latest numbers for the federal scene in a poll for CTV News, showing that the Liberals continue to hold an important lead over the Conservatives (one that is widening by their reckoning). The poll also gauged public opinion on the Wright/Duffy affair, and found that a lot of it is sticking to the Prime Minister - even a majority of Conservative voters aren't sure if they believe Stephen Harper's side of the story or not.
Ipsos-Reid was last in the field Apr. 26-30, two weeks before any of this recent affair was reported in the news. Since that poll, the Liberals picked up one point and led with 36% support, while the Conservatives dropped two points to 30%. The New Democrats were up two points to 27%, while support for the Bloc Québécois and other parties was unchanged at 4% apiece (Ipsos does not include the Greens in their national surveys).

None of these changes in support are significant, suggesting a relative status quo. But within the context of the recent news, a drop for the Tories is not entirely surprising and one has to lean towards it being real.

An obligatory note in the context of the election in British Columbia: these numbers are not a reflection of the next election's outcome, but a measure of support among the general population. Also, I consider it more unlikely that the kind of miss seen in B.C. and Alberta could occur at the national level, because the population is much less homogeneous. What motivates turnout in British Columbia might not be the same as in Atlantic Canada, and what causes a last-minute change of heart in Alberta might not occur in Quebec. I would be shocked if an error like we saw in B.C. happened in a federal election (cut to scene in the future where I am dismayed at the way the polls missed the 2015 federal election).

But let's take a moment to consider the question of turnout. Using my back-of-the-napkin turnout model (drop the 18-34s and double the 55+), the Liberals end up at about the same level of support with 37%, while the Conservatives are bumped up more significantly to 34%. The New Democrats fall to 22%. Those would be the kind of numbers that could easily work to the Tories' advantage in terms of vote distribution. However, the Liberals still being ahead is a good sign for them: they were in front among respondents 55 or older with 39% to 37% for the Conservatives.
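That back-of-the-napkin model is simple enough to sketch in a few lines of Python. The support-by-age figures below are illustrative placeholders rather than Ipsos-Reid's actual crosstabs (only the 55+ Liberal and Conservative numbers come from the poll, as noted above):

```python
# Back-of-the-napkin turnout model: drop the 18-34 cohort, double the 55+
# cohort, and recompute the national vote shares. Support-by-age numbers are
# illustrative placeholders (only the 55+ Liberal/Conservative figures come
# from the poll itself).

population_share = {"18-34": 0.28, "35-54": 0.39, "55+": 0.33}  # approx. census

support = {
    "18-34": {"LPC": 0.38, "CPC": 0.22, "NDP": 0.33},  # hypothetical
    "35-54": {"LPC": 0.35, "CPC": 0.29, "NDP": 0.28},  # hypothetical
    "55+":   {"LPC": 0.39, "CPC": 0.37, "NDP": 0.18},  # LPC/CPC from the poll
}

def turnout_adjusted(support, population_share):
    # Turnout factors: 18-34s dropped entirely, 55+ counted twice.
    turnout_factor = {"18-34": 0.0, "35-54": 1.0, "55+": 2.0}
    weights = {g: population_share[g] * turnout_factor[g] for g in support}
    total = sum(weights.values())
    parties = next(iter(support.values()))
    return {p: sum(weights[g] * support[g][p] for g in support) / total
            for p in parties}

adjusted = turnout_adjusted(support, population_share)
print({p: round(s * 100, 1) for p, s in adjusted.items()})
# → {'LPC': 37.5, 'CPC': 34.0, 'NDP': 21.7}
```

With these placeholder inputs the adjusted numbers land near the 37/34/22 split quoted above, which is the point of the exercise: the adjustment helps the Conservatives but still leaves the Liberals in front.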

About one-in-five respondents to the poll either said they would not vote or were undecided, virtually unchanged from Ipsos-Reid's last poll.

Ipsos-Reid's reports contain a lot of information, including both weighted and unweighted sample data. This is the level of disclosure all firms should have. As I did last week with the most recent poll from Forum Research, let's take a look at how Ipsos-Reid's sample breaks down (I will do this more often so that we can see what we're looking at when new polls are released).

As you can see, the base sample needed very little torquing to get it to resemble the general population. That is a good thing, but it is also part of the design of Ipsos-Reid's (and any online pollster's) methodology. The process of recruiting respondents ensures that one or another demographic group is not under- or over-sampled. These numbers, then, speak not to the ability to build a representative sample randomly, but that the poll itself will not require distorted weighting schemes to get it close to the mark.

One question that is still being debated by the industry, and one that is being studied by firms who use the methodology itself, is whether or not the people who respond to online polls are different from the people who don't. Not in terms of their age or income levels, but just in terms of their values and perspectives. Numerous elections suggest that most online pollsters have it down pretty well most of the time, but it is unlikely to be a debate that will be put to rest soon.

Back to the numbers themselves. They are rather good for the Liberals, with sizable leads in Atlantic Canada and Quebec, a less sizable one in British Columbia, a tie in Ontario, and a narrow gap between themselves and the Conservatives in the Prairies. Most of the regional results show little change from Ipsos-Reid's last poll, but with the exception of British Columbia all of the minor fluctuations are to the Liberals' advantage.

The results in Quebec are perhaps most interesting. They show the same wide lead that other surveys (including the most recent CROP) have shown, with the Liberals at 39% to 29% for the NDP. But notable in this poll is that the Bloc Québécois has fallen eight points to only 15%. That is a very low number for them, and a drop that is about equal to the (theoretical, in this case) margin of error. The Parti Québécois has dropped in popularity in Quebec as well, suggesting that the malaise of Pauline Marois might be biting into the Bloc's support.
Their low numbers have significant consequences, as they put the Bloc out of contention for any seats whatsoever.

Nationwide, the Liberals narrowly edge out the Conservatives with 129 to 127 seats, while the New Democrats win 81 and the Greens retain their one seat.

There are two issues that are keeping the Liberals from pulling away in the seat count. The most important is Ontario. If the Liberals are in a tie with the Conservatives, they are at a disadvantage because of their concentration of support in and around Toronto. Secondly, the Liberals are at a disadvantage in Quebec as well due to their concentration of support in and around Montreal. Traditionally, at least - the latest CROP poll suggests that the Liberals are more than competitive outside of the Montreal area. If a re-alignment among francophones does occur, seat models may under-estimate Liberal strength in the province.

Of course, the driving force for these improving Liberal numbers is Justin Trudeau (along with the recent spate of problems for the Tories). On the questions of trust, having what it takes to lead, leading an open, responsible, and transparent government, and promoting democracy, Trudeau beat out both Harper and Thomas Mulcair in Ipsos-Reid's poll. Mulcair placed third on all of these questions, except when it came to government transparency. But it should be noted that, in every case, the three leaders were relatively bunched up together. Only on democracy and trust can it be said that Trudeau was well ahead of his rivals.

However, the problems in these numbers for the Conservative government are quite obvious. The number of people who strongly disapprove (35%) of the government's performance was almost equal to the proportion who either strongly (7%) or somewhat approved (30%), and that's not including the 27% who somewhat disapprove. Only 31% agreed that the Conservatives deserve re-election, a disastrous number for them.

The Wright/Duffy affair is certainly dragging down the Conservatives. Only 13% of Canadians believe that Harper did not know about the arrangement between Nigel Wright and Mike Duffy, while 42% believe he did know (which implies he is lying) and another 44% are not sure (which is almost as bad). Even among Conservatives, 54% said they were not sure if Harper would have known or not, while 12% believe that he did. That is rather remarkable - two-thirds of Conservative supporters have doubts that the Prime Minister is telling the truth. If there is a silver lining, though, it is that 79% of Conservatives think the whole affair is of minor importance.

But those are toxic opinions, and the opposition has good reason, over and above the public interest, to keep the story in the news and to keep asking questions. The Conservatives will welcome the summer recess when it occurs later in June. Where will their numbers be when they return in the fall?

Thursday, May 30, 2013

Ontario Liberals competitive again, but majority seems out of reach

The Ontario Liberals have avoided defeat with the budget support of Andrea Horwath's New Democrats. The polls suggest that an election was not something that Kathleen Wynne had any particular reason to fear, as her party has been in a tie with the Progressive Conservatives in four of the last six provincial surveys and the New Democrats, though still polling above their 2011 election result, have settled into third place.

But while she has kept her party in a position where they can reasonably hope for re-election if a campaign goes well enough, there are some indications that Wynne has not been able to rebuild some of the bridges that were burnt in Dalton McGuinty's last campaign as Ontario leader.

Though the Liberals lost seats throughout the province in 2011 (three each in northern and central Ontario, two in eastern Ontario, and one seat in each of the Toronto, GTA, and Hamilton/Niagara regions), their majority government was primarily lost in southwestern Ontario. In that part of the province, the Liberals lost eight seats that had been theirs at dissolution. It was the region with the largest errors in the 2011 projection, suggesting that the OLP had lost disproportionately in the region.

If the Liberals are to have a plausible hope for something more than a squeaker majority, they need to re-gain ground in the southwest. The first real test of whether Wynne can win in the region will come when by-elections are held in London West and Windsor-Tecumseh, probably within the next two months.

The chart below shows how the parties have been polling in southwestern Ontario since the 2011 election. This morning's Forum Research poll has not been included since the report is not yet on their website.

Certainly, Wynne has managed to improve Liberal fortunes in the region as she has done throughout Ontario, at least compared to the trough the Liberals found themselves in around the time McGuinty announced his plans to resign as premier. But in no poll has the party improved upon the 33.2% of the vote they took in the region in 2011.

Meanwhile, the Progressive Conservatives have only been on an incremental decline (though some surveys have bettered their 39.8% election result) while the New Democrats have made important gains. This means that the Liberals are in no position to win back the seats they lost, and could be put under even more pressure in some of the ridings they currently hold where the New Democrats have a good base.

How might the Liberals have performed in the southwest if Windsor's Sandra Pupatello had won the party's leadership race? She might have given the OLP the boost needed in the region to claw back some of those lost seats. But at what cost?

Wynne seems to have been the best choice the Liberals could have made to solidify their Toronto base. The OLP was in serious danger of losing the favour of Torontonians to the benefit of the New Democrats, who had moved ahead in the city in some polls at the end of 2012. But after Wynne became leader, the Liberals soared in Toronto and are currently polling at or above their election result of 47.3%.

This gives the Liberals a very solid base of some 17 seats in the city, and with the New Democrats taking a hit they could even wrest away a couple more from the NDP. Some of that surge in support is spilling over into the GTA, giving the Liberals another dozen or so seats they can count upon. With roughly 30 safe-ish seats in and around Toronto, it puts the Liberals in a strong position to challenge for re-election.

But that is where their numbers in southwestern Ontario and the other corners of the province come in. The latest poll from Ipsos-Reid suggests that the Liberals trail in central Ontario by five points and in eastern Ontario by 10, in addition to their deficit in the southwest. The Liberals are still very competitive in northern Ontario, which puts a few more seats into their column, but they are a long way from solid majority territory. Recent moves to invest in public transportation for the Greater Toronto and Hamilton area may help bolster Wynne's support in that region, but will do little to boost Liberal support outside of the metropolitan area. There is even the potential for pushback if the perception is that the Liberal government is raising the taxes of Ontarians outside the GTA to pay for Toronto's public transportation (despite the current proposals only calling for tax increases in the GTHA itself).

This is why the by-elections in London West and Windsor-Tecumseh have inflated importance. As a test of the Liberals' ability to hold on to seats outside of Toronto under their Torontonian leader, the results will help determine whether Wynne has what it takes to win an election campaign provincewide. If the Liberals win both of the by-elections, the opposition parties may not be inclined to defeat the government in the autumn, and might wait to re-assess their chances in the spring of 2014. If the Liberals lose one or both of them - and in particular to the NDP - we may find ourselves in another campaign when the leaves start to fall.

Monday, May 27, 2013

Sampling, weighting, and transparency in public polls

If the election in British Columbia has convinced me of anything, it is that there needs to be a greater culture of transparency and disclosure in the polling industry and a more healthy relationship between pollsters and the media. Those in the media (including, peripherally, myself) need to ensure that the polls they are publishing are of the highest standards possible, and a policy of full disclosure on the part of polling firms will help those in the media better judge what potential issues might exist in any given poll.

There also has to be a greater understanding of what makes a good poll. Building a representative sample is an absolutely basic requirement. Understanding who is most likely to vote in polls conducted during an election campaign is something that needs to be improved and emphasized going forward.

Last week, I took a look at the role turnout plays in accurate campaign polling. Today, I'll look at the question of building a representative sample, and the role disclosure plays in knowing whether a sample is representative or not.

Late last week, The National Post published the results of the latest federal poll from Forum Research. It showed the Liberals at 44%, the Conservatives at 27%, and the New Democrats at 20%. Aside from a three-point drop for the Tories, more or less straddling what would be a statistically significant shift, those numbers were mostly unchanged from Forum's last federal poll released just after Justin Trudeau became Liberal leader. But the numbers are eye-popping, to say the least.

The full report for Forum's poll was shortly thereafter published on Forum's website. There are some things to praise in Forum's disclosure policy. The firm seems to release virtually every crosstab it has, from age to religion to ethnicity to marital status and more. Few polling firms do this, including some of the most well-known and successful in Canada. Nanos Research, for example, shows only the sample sizes of its regional numbers in its national polls but no other demographic breakdowns. Harris-Decima and Angus-Reid report plenty of demographic results, but no sample sizes beyond the number of people polled nationally.

From that perspective, Forum's disclosure policy is among the better ones in Canada. But from what I can determine, the sample sizes that Forum reports in its tables are unweighted. That it isn't easy to figure out what the sample sizes represent is unfortunate; that Forum doesn't also include its weighted samples is more so.

Here again, Forum is in good company, as several other polling firms have similar disclosure policies. But only Ipsos-Reid and Abacus Data routinely release both the weighted and unweighted results and sample sizes of their polls, which should be considered the bare minimum of disclosure acceptable to the media that is publishing polling results. In the United States, this is common practice.

Some polling firms are reluctant to release weighted results because they consider their weighting schemes to be proprietary. That is within their right - but when polls are for public consumption and have the ability to influence the national discussion and election campaigns, they have a responsibility to be upfront about their methodology.

Since Forum releases its unweighted sample data, we can take a look at its ability to build a representative sample with its interactive voice response methodology. We do not have this ability with many other firms, and that is a problem. That is one reason why Forum is the unlucky one to have its polling dissected in this analysis. Forum is also the newest national pollster to be regularly releasing polls and it is - by far - the most active polling firm in Canada.

The weighting is the hardest part

Who answers a poll conducted by automated telephone dialing? Forum's report gives us a clue, and the results are revealing. Respondents to Forum's most recent federal poll were older, richer, and whiter than the general population.

The chart above (and those below) shows the difference between the proportion of respondents in each age category in Forum's latest poll compared to the most recent data from the census. As Forum's report says, "where appropriate, the data has been statistically weighted to ensure that the sample reflects the actual population according to the latest Census data". From what I can tell from Forum's report, this is what was done.

And it would have had to have been. Fully 60% of respondents to Forum's poll were 55 or older, whereas this cohort only makes up 33% of the Canadian adult population. Only 9% of respondents to the poll were between the ages of 18 and 34, whereas they make up 28% of the adult population. Those are big differences.
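The scale of the weighting implied by those figures is easy to work out: each respondent in a group is counted census share ÷ sample share times. In the sketch below, the 35-54 shares are inferred as the remainders of the numbers just quoted; the rest are from the text above.

```python
# Implied weighting factors: weight = (census share) / (sample share).
# The 18-34 and 55+ shares are the figures quoted above; the 35-54 shares
# are inferred as the remainders.
sample_share = {"18-34": 0.09, "35-54": 0.31, "55+": 0.60}
census_share = {"18-34": 0.28, "35-54": 0.39, "55+": 0.33}

weights = {g: census_share[g] / sample_share[g] for g in sample_share}
for group, w in weights.items():
    print(group, round(w, 2))
# → 18-34 3.11, 35-54 1.26, 55+ 0.55
```

Each 18-34 respondent ends up counting for roughly three people, while each 55+ respondent counts for about half of one. The further a weight strays from 1.0, the more any quirk in the raw sample is magnified in the published topline.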

Does that matter? The weighting takes care of all that. And 18-34-year-olds don't even vote.

Yes, it does matter. To have a good poll you need to be able to build a representative sample. It is very difficult to get a perfect sample every time, so weighting is used to round those hard edges. Weighting is not supposed to be used to completely transform a poll - and the fact that Forum had difficulty reaching younger people suggests there are issues with the methodology itself. If it was a truly random sample, there wouldn't be such a discrepancy. The question of reaching the voting population is a separate issue - polls need to be able to reach everyone, and then likely voters can be pulled from the larger sample.

And what we are seeing is not Forum's turnout model in action (or is it? Again, we don't know because it isn't explained). Forum's methodological statement itself said the sample is weighted to match the census, and if this was its turnout model it would not resemble any sort of turnout that has occurred in recent elections.

Apart from building a random sample, the size of the samples that are put together is something to consider as well. The total number of respondents aged 34 or younger was 158 in this poll of 1,709 decided or leaning voters. Assuming a random sampling of the population (a stretch for any methodology that does not call both mobile phones and landlines and call back repeatedly if the first attempts fail), the margin of error for a sample of 158 people is about +/- 7.8%.

But once weighting is applied, the sample will be inflated to represent about 479 people with a margin of error of +/- 4.5%. However, the sample remains only 158 people with the larger margin of error. Any errors that might have crept into the small sample will be amplified as the sample itself is re-weighted to about three times as many people. This is why Forum tends to have larger samples than other polls - it needs to in order to have usable demographic samples. Still, even a standard poll of 1,000 people should reach about 280 younger Canadians, not 158.

On the flipside, Forum polled 1,025 people aged 55 or older, a sample that would have a margin of error of +/- 3.1%. But that sample will have to be re-weighted to instead represent some 564 people (which would normally have a margin of error of +/- 4.1%). So Forum's poll has a much better idea of how older people will vote, but unnecessarily so. If the actual support of younger Canadians falls on the far edges of the margin of error of that small sample, the potential for a very inaccurate result increases as the weightings are applied.
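The margin-of-error figures in the last few paragraphs all follow from the standard formula for a proportion of 50% at 95% confidence (assuming, generously, simple random sampling):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p in a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Unweighted 18-34 sample vs. the size it is weighted up to represent:
print(round(moe(158) * 100, 1))   # 7.8
print(round(moe(479) * 100, 1))   # 4.5

# Unweighted 55+ sample vs. the size it is weighted down to represent:
print(round(moe(1025) * 100, 1))  # 3.1
print(round(moe(564) * 100, 1))   # 4.1
```

The precision implied by the weighted sample can be better or worse than what the raw interviews actually support, but either way the real uncertainty is governed by the number of people actually interviewed: the 479 weighted 18-34s rest on only 158 phone calls.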

The chart below shows the unweighted and weighted samples as well as the relevant census numbers for Abacus Data's latest federal poll, which was released in April. Because of Abacus's disclosure policy, we have all the information we need to run these calculations and determine how Abacus is weighting its data. We can also see that it is possible to build a broadly representative sample, in this case via an online panel.

The discrepancies between the weighted and unweighted samples here are relatively small (though here again, reaching younger Canadians is difficult - a hurdle that every polling firm is trying to get over). The amount of weighting that is being applied, and thus the amount of potential distortion of the results of the poll, is accordingly minor. This is what any media outlet publishing a poll should be able to figure out for themselves before publicizing the numbers.

(Note: the small differences between the weighted and census numbers for the 30-44 and 60+ age groups are due to the influence of other weights, i.e. gender, region, etc.)

It might not be Forum's fault that younger people won't answer automated polls. Perhaps that is one of the reasons it has been able to have some success in recent elections: its methodology's limitations mimic turnout in some fashion. But that would then be a happy accident, not a feature, and could become problematic if turnout changes in some dramatic way (i.e., how younger Americans came out to vote in 2008).

Whose landline is it anyway?

But it is not just about reaching people of certain ages. The sample sizes reported in Forum's latest poll show divergences from the population on other measures as well, giving us one indication of the kind of people who answer IVR polls.

Wealthier people are very happy to answer Forum's polls. While households making under $100,000 were adequately sampled (though most were somewhat under-sampled), households with an income of $100,000 or more were far more likely to answer the phone.

Married and widowed people also appear more willing to respond to a poll. Single people were far less likely to respond to Forum's automated poll, one assumes because they were out on the town. But were these numbers re-weighted to correct for the discrepancy? In the case of the unattached, the same issue of amplifying potential problems of small samples is present.
On religion, the Forum sample was good (though non-Catholic Christians seem to be slightly more willing to pick up the phone). On ethnicity, however, the discrepancies were quite large.

But this is one measure that is hard to figure out. The most applicable census data would be the single ethnic response, since that is what Forum is asking respondents to do. In the last census where this data was available, 32% said they were ethnically Canadian of some kind, whereas 64% of Forum's respondents, or double, did the same. Those who were 'other European' were under-sampled by half - one assumes that includes French Canadians - while those who were not Canadian, British, or European in ethnicity were under-sampled by two-thirds. That is perhaps the most important number, since it is important that polls reach ethnic minority communities. What weightings were applied for these numbers? We don't know, because the weighted sample sizes are not included.

And what kind of effect might the various re-weightings have when they are applied one on top of the other?

Then there are the memories of past voting experience. This is a problem for pollsters, as people might not remember how they voted or their memories might be playing a trick on them (confusing, say, a by-election or a provincial election for the last federal election). Choosing whether or not to account for this in an election campaign is tricky, and this was one issue grappled with by pollsters in British Columbia.

As you can see, the Forum sample had more Liberals than it should have and fewer New Democrats (it was about right for the Tories). Does Forum apply a weighting for past voting behaviour? We don't know, so we are left to wonder if it is taking this skewed sample into account or not.

But Forum is almost penalized for its disclosure policy. Because it releases unweighted sample sizes we can ask these questions - and that is a good thing. For all we know, those pollsters who are not releasing unweighted data could be building even more unrepresentative samples. This is why disclosure and transparency are so important. Before publishing, the media should be asking for this information if it is not given up front.

Let the record show

But with shrinking newsrooms and increasing responsibilities, journalists don't have the time to wade through this data, so they often publish what they get. With all the resource constraints in the industry, it is hard to blame them for it. And, in any case, how can you argue with a track record? Forum has done as well or better than most other firms over the last two years.

Because I am focusing on Forum to such a great extent it would only be fair to take a look at how its polls have done since the firm first emerged in 2011. A free promotional opportunity:
All in all, Forum's track record in its first three elections was quite good, while its results in the last three elections were good compared to its competitors. A few notes about these numbers, though:

Forum was the closest pollster active in the final days of the Alberta campaign. I have it ranked second out of eight pollsters here due to an Ipsos-Reid poll that was out of the field just before the campaign began, but within the 30-day window I use to rank pollsters. Nevertheless, Forum still had Wildrose ahead, and based on the information that has come out of Alberta since the election, Forum and the other firms active over the final weekend should have been able to capture at least some of the swing earlier.

In Quebec, Forum has emphasized that it was the only firm to place the Liberals as the Official Opposition. That is true, and to Forum's credit it was able to gauge Liberal support better than the Quebec-based pollsters. But Forum also showed that the PQ would easily win a majority government. CROP and Léger were showing that the PQ was not in a strong majority position, and that the role of runner-up was generally up for grabs.

And in British Columbia, Forum did get close to the final result, but the numbers were outside the margin of error of its final poll and, unlike in Alberta and Quebec, the firm was out of the field with six days to go. Forum has said that since its seat projection hinted at a Liberal majority government, it was the only firm to project a Liberal victory. As projecting seats has nothing to do with polling capabilities, this is irrelevant in comparing Forum's numbers to other pollsters'. And Forum has offered no explanation of how it does its seat projections, despite repeated requests, and its report gave no details about how its numbers came about.

In fairness, all pollsters try to portray their track record in the most positive light. And it is unfair for polling firms who were not active in Alberta or British Columbia to claim, in the context of those misses, that they have a better track record than the firms that participated in those campaigns. We do not know if they would have done any better, so the comparison is apples to oranges.

Disclosure, transparency, responsibility

A policy of full disclosure does few favours to a polling firm. It opens them up to criticism and gives competitors the chance to 'reverse-engineer' their weighting schemes. Better, then, to give as little as possible since the numbers will be published anyway.

That is the crux of the problem. The media is not in a position to demand more transparency, as many outlets don't pay for polls (especially outside of a campaign), but they absolutely must. Free or cheap polls get good traffic, generate a lot of interest, and are 'exclusives', so they are hard to turn down. It is also difficult to turn a poll down on principle because of non-disclosure when another newspaper will publish the numbers without question. They are in an understandable bind.

This is what should change. We need to demand more disclosure and more transparency before publishing. Pollsters showing their work in greater detail will have all the more reason to ensure that they are doing it well. The media has a responsibility to report accurate information and to ensure that the information it is getting is reliable. Journalists take great pains to do so on other matters, so it should be natural to treat polling the same way.

Pollsters also have an incentive to publish everything. When they get it right, no one can claim that they were lucky. When they get it wrong, the numbers they published will probably have some hint as to why they got it wrong - it would be better to point to previously published numbers and say that they should have paid more attention to them, than to put out unpublished numbers after the fact to explain away errors. And a polling firm that is confidently publishing its data is a polling firm easier to trust.

Democracy benefits as well. Accurate public polling is a necessity in a free democracy as otherwise campaigns would be dominated by the leaked and spun numbers of internal polls, and voters would cast their ballots with less information than they should have available to them.

A culture of transparency will help foster a climate of quality, responsible polling - and make it easier to call out those who are doing a bad job. We saw this in the United States, where some polls were under-sampling and under-weighting visible minorities, over-estimating Republican support as a result.

I have thought about my role in this debate, and seriously considered not writing about polls that do not release full data. But this site can do a better service by drawing attention to these issues instead of ignoring a large chunk of polling done in Canada. Hopefully, the election in British Columbia will act as the catalyst for positive change in the industry and for those who write about it.

Wednesday, May 22, 2013

What happened in B.C., and what to do about it

The scandals in Toronto and Ottawa have pushed aside the existential questions facing pollsters after the B.C. election, but the post-mortems on what happened in British Columbia are still emerging. An exit poll by Ipsos-Reid and a post-election survey by Insights West hint at some important clues.

I wrote about the issue of turnout, and in particular the profile of those who do vote, in my article for The Globe and Mail today. I suggest you read it, as I am only going to summarize here some of the points that are made in that piece.

Both the Ipsos-Reid and Insights West polls were able to replicate the final vote tally, suggesting that their polls are broadly reflective of actual B.C. voters.

Note: An earlier version of this post said that Ipsos-Reid's exit poll showed turnout by age. This is false - they weighted their exit poll from turnout in the 2009 B.C. election.
If we look at Ipsos-Reid's final poll of the campaign, we see that the three age groups they (and most other pollsters) use were more or less portioned out evenly. But when we see who actually voted in 2009 (and, conceivably, did so again in 2013), the problem with that sort of weighting is clear. Most significantly, voters 55 or older made up half of all voters, rather than one third.
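A small sketch of why that weighting choice matters: hold the responses fixed and change only the age weights, once to match the general population (even thirds) and once to match actual turnout (voters 55 or older making up half the electorate). The support-by-age figures below are hypothetical; only the age weights echo the numbers discussed above, and the under-55 turnout split is an assumption.

```python
# Same responses, two weighting schemes: general-population (even thirds) vs.
# actual-turnout (55+ making up half of voters). Support-by-age is hypothetical.
support_by_age = {
    "18-34": {"Liberals": 0.35, "NDP": 0.50},
    "35-54": {"Liberals": 0.42, "NDP": 0.45},
    "55+":   {"Liberals": 0.48, "NDP": 0.40},
}

def topline(support, age_weights):
    # Weighted average of each party's support across the age groups.
    total = sum(age_weights.values())
    parties = next(iter(support.values()))
    return {p: sum(age_weights[g] * support[g][p] for g in support) / total
            for p in parties}

population_weights = {"18-34": 1/3, "35-54": 1/3, "55+": 1/3}   # "portioned out evenly"
turnout_weights = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}   # under-55 split assumed

print(topline(support_by_age, population_weights))  # NDP ahead
print(topline(support_by_age, turnout_weights))     # Liberals ahead
```

Both toplines come from identical interviews; only the assumption about who actually votes changes which party comes out on top. That is the B.C. problem in miniature.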

If we compare Ipsos-Reid's final poll of the campaign to their exit poll, we see that the Liberals stole votes from the New Democrats in every age category.
But what struck me is how the results of the final poll in each age group were close to the exit poll's results in the next older age group.

The 18-to-34-year-olds who voted had similar views to the 35-to-54-year-olds who were polled on the eve of the election, and the 35-to-54-year-olds who voted had similar views to the 55 and older respondents of the final poll. It would seem that people who vote are more like the broader, older population than those who do not. Anecdotally, that makes a lot of sense to me in a low turnout election.

So did the pollsters get the B.C. election wrong? Yes and no - they may have been in the ballpark when it came to the general population, but they failed to correct for the voting population. In the end, the polls were meant to determine likely outcomes of the campaign. To put it in the context of the market research that is the bread-and-butter of polling firms, the failure to identify voters and how they felt about the campaign is equivalent to a failure to identify a company's likely customer base and how it feels about an advertising campaign. A poll is of little use to a diaper company if it identifies the shopping habits of childless adults, especially adults who have no intention of having children.

A side note: Ipsos-Reid posts its weighted and unweighted sample sizes in all of its polls, which makes this sort of analysis possible. Other pollsters absolutely must do the same. Forum's last poll does not provide the same amount of information, but it is possible to reverse-engineer some of its weightings. Forum seems to use a weighting scheme similar to Ipsos-Reid's (as it should, if it is trying to match the census data). If it hadn't, and had merely reported the voting intentions of those who responded to its poll, it would have had the Liberals at 43% to 41% for the NDP. Forum does report its unweighted sample sizes, and from those we can determine that 66% of its final sample was over the age of 55. That appears to be far too much. But perhaps, in some cases, the people who answer a telephone poll more closely resemble the people who vote. This makes some intuitive sense: these days it takes a measure of civic duty and free time to submit to a random telephone poll (or to vote), whereas an online poll might attract a different kind of person.
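This kind of reverse-engineering can be sketched in a few lines. The figures below are invented for illustration - they are not Forum's or Ipsos-Reid's actual numbers - and the method only assumes a pollster publishes both its unweighted and weighted sample sizes by age group:

```python
# Hypothetical illustration: recovering a pollster's implied age weights
# from published weighted and unweighted sample sizes, then comparing the
# raw (unweighted) topline to the census-weighted one. All figures invented.

unweighted = {"18-34": 150, "35-54": 250, "55+": 600}   # respondents reached
weighted   = {"18-34": 280, "35-54": 360, "55+": 360}   # after census weighting

# Implied weight applied to each respondent in a group
weights = {g: weighted[g] / unweighted[g] for g in unweighted}

# Hypothetical vote intention by age group (share supporting Party A)
party_a = {"18-34": 0.30, "35-54": 0.38, "55+": 0.45}

n_unw = sum(unweighted.values())
n_w = sum(weighted.values())

raw_support = sum(unweighted[g] * party_a[g] for g in unweighted) / n_unw
weighted_support = sum(weighted[g] * party_a[g] for g in weighted) / n_w

rounded = {g: round(w, 2) for g, w in weights.items()}
print("implied weights:", rounded)
print(f"Party A, unweighted sample: {raw_support:.1%}")
print(f"Party A, census-weighted:   {weighted_support:.1%}")
```

Note how a raw sample that skews old produces a higher topline for the party favoured by older voters - exactly the effect an unweighted, 55+-heavy telephone sample would have.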

But turnout was not the only factor contributing to the miss. Alberta had a dramatic change of heart in the final days and hours of the campaign, and that was responsible for some of the error in polling there. In British Columbia, there might have been a more modest change of heart that amplified the errors made in identifying likely voters.

According to the Insights West post-election poll, 11% of voters made their final decision on election day, including 12% of B.C. Liberal voters. The poll found that 17% of Liberal voters had even considered voting NDP prior to casting a ballot, enough to drop the Liberals to about 36% or 37% support if all of them had stuck with the NDP. That just happens to be the consensus level of support the Liberals had going into election day.
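The switcher arithmetic can be checked directly. This is a rough sketch using the final-count vote shares (44.1% Liberal, 39.7% NDP) and treating the 17% of Liberal voters who had considered the NDP as if they had all stayed with the NDP:

```python
# Back-of-the-envelope check of the switcher arithmetic. Vote shares are
# taken from the final count; the 17% figure is Insights West's.

liberal_share = 0.441   # B.C. Liberal popular vote
ndp_share = 0.397       # B.C. NDP popular vote
switchers = 0.17        # share of Liberal voters who had considered the NDP

moved = liberal_share * switchers
liberal_adj = liberal_share - moved
ndp_adj = ndp_share + moved

print(f"Liberals without switchers: {liberal_adj:.1%}")  # ~36.6%
print(f"NDP with switchers:         {ndp_adj:.1%}")      # ~47.2%
```

The adjusted Liberal share lands at roughly 36.6% - right at the consensus polling level the party had going into election day.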

The Ipsos-Reid exit poll echoed the Insights West poll in finding that 11% of voters had made up their minds on May 14, and the 9% result they had for the B.C. Liberals is quite close to the Insights West result as well.

The exit poll found that 11% of Liberal voters had intended to vote NDP during an earlier part of the campaign, a result not dissimilar to Insights West's post-election poll. But this is not a magic bullet, as 8% of NDP voters said they had intended to vote Liberal at some point in the campaign. That largely cancels things out, but the fact that 33% of Green voters said they had at one point considered voting NDP may tip the balance back toward a shifting electorate that swung a few points in the final 36 hours of the campaign. It is not enough to explain the total error, but combined with the turnout issue it might explain much of it.

Why did the switch occur? Insights West suggests that a lot of it had to do with the perception that the Liberals were better on the economy and had run a better campaign, as well as a lack of trust in Adrian Dix. Each of these issues was identified as a contributing factor by more than one-third of those who moved from the NDP to the Liberals. Ipsos-Reid also found that the issues of debt, the economy, and government spending were major vote drivers for the B.C. Liberals.

Another factor that cannot be ignored, and one that is especially problematic for the polling industry as a whole, is the expectation that voters had going into their polling stations. The polls set the tone for the campaign, but misled voters with potentially significant consequences.

In the Ipsos-Reid exit poll (recall that it was taken during election day before the results were known) fully 48% of voters expected the New Democrats to win a majority government. Another 28% expected a minority government of one shade or another, while a bare 11% correctly thought the B.C. Liberals would win a majority.

Even among Liberal voters, only 22% thought they were casting a ballot for the party that would form the next majority government, while 60% thought they were voting for a losing cause or, at best, a minority government. New Democrats went in with much more confidence, of course, with 75% expecting a majority and only 1% thinking the Liberals would pull off the win.

More significant, however, may be what the British Columbians who cast a ballot for the Greens and Conservatives thought would happen. A majority of Greens thought the NDP would win outright, while a plurality of Conservatives thought the same. A significant number of Greens and Conservatives thought the next government would be a minority one, giving a Green or Conservative MLA a lot of influence. But only 11% of Conservatives and 2% of Greens thought the Liberals would win a majority. Considering that, according to the poll, 72% of Green voters and 87% of Conservative voters thought that Christy Clark did not deserve to be re-elected, would they have voted differently if the polls were predicting a majority victory by the Liberals?

This is why the pollsters have an important responsibility to get their election calls right, which means a greater emphasis on ensuring they have proper models in place to estimate turnout. But according to most pollsters, that is a huge challenge.

A quick note on who had the best internal polls. Clearly, the New Democrats were not well-served by the polls since they expected victory. The B.C. Liberals may have been better served, but we cannot know for sure if their internal pollsters are highlighting where they went right and not mentioning where they went wrong. This is one of the reasons why public polls are needed, as otherwise all we'd know about the state of the race is the leaked (and spun) internal numbers from each of the campaigns.

But in terms of methodology, there does seem to be a clear difference. The NDP appears to have been relying on province-wide polling, as the media was, and was deceived by the numbers in the same way. The Liberals, on the other hand, appear to have identified some 30 swing ridings and polled them furiously, ignoring those ridings considered safe or not in play. From these polls, they were able to identify more precisely how the campaign was going. This also seems to have been the successful method used by the Progressive Conservatives in Alberta - tighter, deeper polling, the sort that media outlets cannot afford.

What the rest of us can do about it

What are we to do, then? We cannot hope to have the sort of in-depth polling that political campaigns have since the newspapers and television networks that commission polls cannot afford anything of that quality, while the polls that are given away for free also have to be done on the cheap.

If disclosure and transparency increased, those of us who are interested and have the time could parse the data more closely and derive whatever information we are looking for. That is one thing that can easily be done, is done elsewhere, and should be required in Canada. If any government MPs are reading this, a change to the Elections Act would be appreciated!

But for myself, I have to take a different approach to the polls and the forecasts that are published here. For the next campaign - which looks likely to be in Nova Scotia, which should (hopefully) be a less problematic one as the Halifax-based Corporate Research Associates have a good track record - I am considering what new methods I can employ.

The seat projection model needs nothing more than minor tweaking, the sort that takes place after each election when more information is available (i.e., the performance of independents). The focus needs to be on ensuring the numbers plugged into the model are more accurate.

But is there anything I can do? Estimating these sorts of swings and error levels before they occur is virtually impossible, and I am just as likely to miss it one way and make the projections worse as I am to get it right. Instead, I will hope that the pollsters do improve their turnout models and I will report the aggregates without any adjustment - a forecast that is entirely based on what the polls are saying.

That will be the base, but I need to have some means to give readers an idea of how the polls could be wrong. In the B.C. election, I calculated those estimates with the polls themselves. The projection was based on the estimated margin of error of the samples included in the projection. The forecast was based on the volatility in the polls.

Leaning so heavily on the polls themselves seems to be a bad idea, considering the problems that have occurred in the last three provincial elections. Instead, I will try to estimate the likely error of the polls based on how the polls have been wrong before, showing what an average over- and under-estimation has been for parties in similar situations in other elections (i.e., incumbent governments). That should provide a good indication of how much error we can expect in the polls, and I will also include a best guess as to whether an over- or under-estimation is more likely.

I'd also like to develop a turnout model that can be included in addition to the high/low and poll forecasts. At this stage, I'm favouring something simple: dropping the 18-34s from every poll and doubling the 55+. I will look into this more deeply, but I suspect it will provide better results in the majority of cases. Anything more complicated is probably not necessary (the simpler a model can be, the better - usually).
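As a rough sketch, that simple turnout model might look like the following. The support figures by age group are invented, and the sketch assumes the three age groups are roughly equal in size in the raw, census-weighted sample (as they typically are in published polls):

```python
# A minimal sketch of the back-of-the-napkin turnout model: drop the
# 18-34s, double the 55+, and renormalize. Support figures are invented.

def turnout_adjust(support_by_age, group_weights):
    """Reweight each party's support using per-age-group turnout weights."""
    total_weight = sum(group_weights.values())
    parties = set().union(*(s.keys() for s in support_by_age.values()))
    return {
        p: sum(group_weights[g] * support_by_age[g].get(p, 0.0)
               for g in support_by_age) / total_weight
        for p in parties
    }

support = {
    "18-34": {"LPC": 0.38, "CPC": 0.24, "NDP": 0.32},
    "35-54": {"LPC": 0.35, "CPC": 0.30, "NDP": 0.28},
    "55+":   {"LPC": 0.36, "CPC": 0.34, "NDP": 0.23},
}
weights = {"18-34": 0, "35-54": 1, "55+": 2}  # drop young, double old

adjusted = turnout_adjust(support, weights)
for party, share in sorted(adjusted.items(), key=lambda kv: -kv[1]):
    print(f"{party}: {share:.1%}")
```

With weights of 0, 1, and 2 this is exactly "drop the 18-34s and double the 55+"; more refined turnout weights could be slotted in without changing the structure.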

Almost every pollster that has written a post-mortem and with whom I've talked or corresponded, whether or not they were active in British Columbia, has identified the hit to the polling industry's reputation that the last few elections have inflicted as a major problem. Some are upset that polls using different methodologies are causing people to paint the entire industry with the same brush.

I suspect that because of these concerns, those pollsters who will be active in future campaigns will invest extra time and money into ensuring their polls are right. They need to rehabilitate their industry's reputation, and those that missed the call in Alberta and B.C. need to rehabilitate their own. The incentive of proving one's own methodology to be accurate will be even stronger, particularly for those that were not active in B.C. For that reason, I am optimistic that the next set of elections will have better polling. Perhaps naively so.

Monday, May 20, 2013

2013 IIHF World Championship round-up: Sweden wins gold, Canada fifth

The Swedes broke the decades-long hosting curse, winning the gold medal on home ice with a 5-1 win over the Swiss (who played a lot better than the score suggests). The Americans also won their first medal since 2004 with a 3-2 shootout victory over the Finns to take the bronze.

At the start of the tournament, I had posted a forecast for the results. The forecast performed about as well as it did for the 2010, 2011, and 2012 tournaments that were used for calibration, averaging a ranking error of 3.4 spots per nation, with three nations pegged correctly and a fourth pegged to within one rank. It did do worse than the 65% rate of placing each team within three spots that the model achieved in the three previous tournaments, but it was a good exercise (and a fun excuse to write about the tournament).

The forecast also provides a good measure of whether teams under-achieved to any great degree. The biggest under-performer in that sense was Norway, but as mentioned in the original post that had more to do with an anomaly in the model. Russia, however, which had a team that could have won gold, was a big disappointment.

I'll insert a break here. I realize this is a political site, but hockey is just as much of a Canadian pastime! Hopefully those of you who are not interested will forgive this post. But it is still the weekend.

Thursday, May 16, 2013

B.C. post-mortem, polling methodologies, and where to go from here

Note that this post was written before the final count that was concluded on May 29, 2013. The final count changed the results slightly, with the Liberals winning 49 seats and 44.1% of the vote and the New Democrats taking 34 seats and 39.7% of the vote. The riding which flipped over to the New Democrats was Coquitlam-Maillardville, which the projection had originally forecast for the NDP. That increased the accuracy to 82.4%, or 70 out of 85 ridings. The Electoral Track Record has been updated to reflect the final counts, but the post below has not been.

Now that the dust has settled a little and those in the polling industry (along with myself) have had some time to reflect on Tuesday's results in British Columbia, it is time to take a look at how the projection model performed. But I'd also like to discuss the methodological debate in Canadian polling, how this site has approached it, and the future of this site within the context of a plummeting faith in polling.

The model did about as well as it could considering how different the election's results were to the final polls of the campaign. The model is not capable of second-guessing the polls to the extent that it could have predicted an eight-point NDP lead turning into a five-point Liberal win.

The forecast ranges were included to try to estimate how badly the polls could do if another Alberta-like scenario played out, and aside from the NDP falling two points below the forecasted low they were able to capture all of the vote and seat results at the provincial level. They were not, however, able to capture the performance of the Liberals and New Democrats in metropolitan Vancouver and in the Interior/North, demonstrating just how unbelievably well the Liberals did in these two parts of the province. Their vote came out in huge numbers here (and/or the NDP's stayed home), and the Liberals won the election.

Of course, the forecast ranges are absurdly wide. But that is more a reflection of how unpredictable elections in Canada have become: the ranges are absurd, and yet still needed.

The parties did about as well as expected on Vancouver Island, however. If turnout was one of the factors explaining why the polls missed the call, the Liberal ground game did its work in the rest of the province, while Vancouver Island was left to the NDP.

In all, the seat projection made the right call in 69 of 85 ridings for an accuracy rating of 81.2%, while the potential winner was correctly identified (by way of the projection ranges) in 73 of 85 ridings, for a rating of 85.9%. This shows how the election was really won in just 12 ridings, as the projection ranges (which did not consider a Liberal victory likely) missed only those 12.

Metropolitan Vancouver was where the election was primarily won. The projection gave the NDP between 45% and 51% of the vote and the Liberals between 36% and 41%. Instead, the Liberals took 46.1% of the vote in the region (as this site defines it) to only 40.4% for the NDP. The Liberals won 24 of the 40 ridings, instead of the 14 to 16 they were expected to win.

The Interior/North was also a major factor in the Liberals' victory. They were expected to win the region with between 38% and 45% of the vote, narrowly beating out the NDP at between 37% and 45%. This gave the Liberals between 12 and 22 seats and the NDP between 9 and 16. Instead, the Liberals won 24 seats with 48.2% of the vote, while the NDP won only 7 seats with 35.4% of the vote.

On Vancouver Island, the NDP won 11 seats, the Liberals two, and the Greens one. The projection did not give the Greens any seats, but expected 11 to 14 for the NDP and 0 to 3 for the Liberals. The NDP was expected to take between 44% and 53% of the vote, the Liberals between 27% and 35%, and the Greens between 10% and 17%. The NDP actually took 43.9% to 34.2% for the Liberals and 17.2% for the Greens. It would seem that some of the Conservative vote (they took 4%) went to the Liberals and some of the NDP vote went to the Greens, but overall the island played out mostly as forecast.

As usual, the seat projection model was not at fault. If the polls had been accurate, the model would have projected 49 seats for the B.C. Liberals and 36 for the B.C. New Democrats, mirroring the result closely. The ranges would have been 37 to 57 seats for the Liberals and 27 to 46 for the NDP, while up to one Green and two independents would have been projected.

The right call would have been made in 76 of 85 ridings, for an accuracy rating of 89.4%, while the potential winner would have been correctly identified in 81 of 85 ridings, for a rating of 95.3%. The challenge remains getting the vote totals closer to the mark. Frustratingly, that is the one thing I have the least control over.

How the projection model would still have been wrong in a few individual ridings is interesting, and reflects just how important local campaigning can be. Three of the nine ridings that would have been called incorrectly (using the actual regional vote results) were Delta South, Vancouver-Point Grey, and Oak Bay-Gordon Head. The model would never have been able to boost Andrew Weaver's support enough to give him the win without some improper fiddling on my end. In Delta South, Vicki Huntington's support was stronger than would have been expected. And most significantly, Christy Clark's rejection in her own riding is shown all the more starkly. She did not lose it because the overall race was close - the overall swing should have kept the riding in her hands.

Polling methodology and what went wrong

All eyes have turned to how the pollsters are doing their work. Some of the pollsters are looking at their methods and trying to figure out what went wrong and what can be done to avoid these issues in the future. Others are crowing that this or that poll they did a week before the election turned out to be prescient, and it appears that some lessons will not be learned.

A hypothesis does seem to be forming as to what happened. I'd identify a few factors:

Turnout - Turnout was only about 52% in this election, and that can throw off a pollster's numbers to a large degree. However, turnout was also very low in the 2009 election and the polls did a decent job that time. Turnout is not a silver bullet, then, but the effect turnout had in 2009 may not have been the same as in 2013.

Motivation - According to Ipsos-Reid's exit poll (which I will return to in the future), very few British Columbians thought the Liberals would win a majority government (only about one-in-ten), while one-half thought the New Democrats would win. This might have depressed turnout even more, with some New Democrats not bothering to vote since they felt they would win, and some Liberals turning out in greater numbers to ensure their local MLA would get re-elected, even if the party itself would be booted out of government. Conceivably, though, Liberals not bothering to vote for a lost cause should have cancelled things out. And in most cases, people tend to vote in greater numbers for a perceived winner.

Election Day Shift - Yes, it is unbelievable that the polls were right all along and a dramatic change of heart occurred in the final hours. But Ipsos-Reid's poll showed that 9% of Liberal voters made up their minds in the voting booth. If all of those voters had instead voted for a different party, the Liberals would have been reduced to about 40%. That would have been closer to most polls, but still much higher than even the margin of error would have considered possible. And, of course, some of those 9% might have just been wavering Liberals who did not make up their minds until the last minute, but had told pollsters they were still intending to vote Liberal. While certainly part of the equation, it cannot be all of it.

Bad polling - This is probably the main reason why the polls missed out on the call. The other three factors may have been worth a few points each, but there does seem to have been a problem in building a representative sample. Pollsters will need to figure out why that is.

One of the problems identified most often (especially by pollsters who use other methods) is that most of the polls used online panels. These have had success in the past, including the 2009 B.C. election, but perhaps online panels are less able to consistently give good results - particularly in provincial campaigns, where the panel may be smaller. But this cannot be the only reason, as Angus-Reid's online polling in Manitoba - a province with a quarter of the population of British Columbia - was stellar in its 2011 provincial election.

Nevertheless, the track record of online firms has taken a hit. Telephone polls using live-callers still seem to have the most success. Reaching people by telephone - including mobile phones - probably remains the best way to do quality polling. It is also a good way to do expensive polling.

Is the extra accuracy worth the extra cost? That might not be the case when it comes to market research. Whether it is 36% or 44% of people who say they have trust in your company's brand is not vital information, as long as it is in the ballpark. Even at their worst, online polls have been in the ballpark (the Liberals and NDP were not polled to win the election in Alberta, and nor were the Greens or Conservatives ever pegged to have more than marginal support). But in an election, the quality of a poll, and not the cost, should be the deciding factor in whether or not to report it.

The chart below reveals some information that I have up to now kept to myself. Pollsters are rated in my weighting system by their track record. That track record extends back over 10 years, with more recent elections weighted more heavily. The difference between one firm and the next is usually not very large, and some of the difference is due to the elections in which these firms have decided to take part. Those that stayed out of Alberta and B.C. will inevitably have better ratings than those that didn't. I have considered overhauling the rating system to take these sorts of considerations into account, but I have not yet done so. Because I haven't, I am reluctant to rank the polling firms publicly by name.

But I am willing to rank them by methodology. The 10 firms below are the ones I consider to be Canada's major polling firms - those that release national, regional, or provincial polls on a regular basis - along with the method each used in its most recent election campaign. The chart shows each firm's average error per party in any election in which it was active, going back ten years.

As you can immediately see, the polling firms that conduct their surveys using live callers occupy the top three ranks. The online and IVR polling firms have had less success. The difference is not huge, however: the gap between the third-best and seventh-best firm is less than half a percentage point of average error per party.

However, it is clear that polls conducted over the telephone with live-callers have had a better track record. That does not mean that they will always have a better result: in the 2011 federal election, Angus-Reid's online panel had the lowest per-party error. But it does suggest that the online panels still have some work to do.
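For clarity, here is a minimal sketch of how an "average error per party" figure like the ones behind the chart can be computed. The poll numbers below are a rough consensus of the final B.C. polls rather than any one firm's results, and the Green and Conservative result shares are approximate:

```python
# A sketch of the "average error per party" metric used to rate pollsters:
# the mean absolute difference, in percentage points, between a final poll
# and the actual result. Poll figures are illustrative, not any firm's.

def avg_error_per_party(poll, result):
    """Mean absolute error across parties, in percentage points."""
    parties = set(poll) | set(result)
    return sum(abs(poll.get(p, 0.0) - result.get(p, 0.0))
               for p in parties) / len(parties)

final_poll = {"Lib": 37.0, "NDP": 45.0, "Grn": 9.0, "Con": 6.0}
actual     = {"Lib": 44.1, "NDP": 39.7, "Grn": 8.1, "Con": 4.8}

print(f"{avg_error_per_party(final_poll, actual):.2f} points per party")
```

An error of roughly 3.6 points per party is what a B.C.-sized miss looks like on this metric - several times the sub-half-point gaps that separate the mid-ranked firms.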

Where to go from here

There were moments yesterday when I contemplated the end of ThreeHundredEight. Why run a site about polling when polling in Canada is so horrid?

But the polling is not always horrid, and even when it seems to be on the bad side there are some indications of something else at play. Alberta is an obvious example, but maybe British Columbia's errors have some mitigating factors as well.

Even if that is not the case - and I am not convinced that it is - polls are not going away, and I still believe they are a useful tool. The electorate deserves to know what the collective wisdom of the people is on various issues, including the question of who should govern them. But the electorate deserves good, reliable information; bad information is much worse than none at all.

Though I could never claim to be impartial on the question of whether polls should be paid any attention at all (if they are ignored, I would need to find a new line of work), I can continue to be an impartial observer, analyst, and (when need be) critic of the industry. In its own tiny little way, ThreeHundredEight can be part of the solution.

That means more of a focus on methodological and transparency issues, sweeping trends, uncertainties in the polling data, and wider questions about what the numbers mean, if anything at all. It means less focus on the horserace, more caution in reporting numbers, a forecasting model that emphasizes what we don't know, and more reserve in giving attention to questionable polls. And when a poll is questionable, drawing attention to the reasons why.

It might mean a drop in traffic and it will certainly require more work and effort on my end. And like all junkies, I might relapse. But I think it will be a worthwhile endeavour. I welcome your thoughts in the comments section.

Wednesday, May 15, 2013

Polling industry dealt major blow in B.C. election

Last night was a very bad one for Adrian Dix and the New Democrats, who expected victory as much as the pollsters did. And with good reason: a stabilizing, maybe even growing, lead over the B.C. Liberals with hours to go before the polls opened. Instead, British Columbians collectively woke up and changed their minds and swung about 13 points towards Christy Clark. Or, more likely, something disastrously wrong occurred in the polling industry.

I wrote about the implications for the four party leaders for The Huffington Post Canada, and took a look at why the polls went wrong for The Globe and Mail.

Why did they go wrong? I have no explanation this morning. In Alberta, there was the late swing. There was the novelty of the Wildrose Party. There was the relative lack of polling in the final days. There was the inexperience of the pollsters who were active. There was the immensely more well-oiled organization of the Progressive Conservatives.

In British Columbia, there was no indication of a late swing. If anything, there was a sign that Clark's momentum had reversed itself. The New Democrats were not an unknown quantity. There was polling being done as late as Monday. There was the experience of two pollsters with long and successful histories in British Columbia. There was the much-vaunted GOTV organization of the NDP. And yet all the polls said the New Democrats would win, and all the polls were wrong.

(Note: the chart below includes the average standard deviation between the polls from each pollster, meant as an attempt to determine whose numbers were fluctuating the most. It seems like a moot point now.)

Forum Research ended up doing the best, but they should not gloat. As in Alberta, they were the best of a bunch of bad polls. They were in the field six days before the election, when the Liberals appeared to be at their peak in polls by other firms, and in all likelihood were just lucky not to release any new numbers. And it was odd that they didn't, as Forum has released 11th hour polling in Quebec, Alberta, and even Labrador.

My vote projections did second-best, mostly because I had a mechanism for diluting the support of the Greens and Conservatives. On the Liberals and NDP, I was as wrong as anyone else.

The forecasted ranges captured every vote and seat result with the exception of the NDP. Those ranges are designed to account for an Alberta-level event, but even so they were unable to predict that the New Democrats would under-perform in the popular vote to such a great degree. The ranges, implying that the polls should always be considered potentially spectacularly wrong, were apparently a good idea, but if ranges of this size need to be included in every election the usefulness of the forecasting model is virtually zero. In even a modestly close election, they will always span almost the entire spectrum since most ridings come into play at that point.

I have not had the time to input the actual vote results into the seat projection model yet, as I need to calculate the regional vote totals. I will do so as soon as possible. I suspect that the projected results will end up being very close to the actual results, as they have been in almost all of the ten elections I have worked on in the past. I will write a fuller post-mortem in the coming days.

There is no question that seat projection models like mine work. They are an effective way to translate poll results into seats. This is not voodoo magic; it is a rather simple endeavour. The challenge is to be wrong by the smallest possible amount, which is the best that forecasters can hope for. But the models are only as good as the available information.

I have to admit that my confidence in the quality of that information - polling - has been profoundly shaken. Alberta was an aberration, and there was some good reason as to why it occurred (which I now have doubts about). Quebec was only a minor flub, which can be attributed in part to superior Liberal organization (or can it?). But this is a complete disaster. There is no reason why this should have happened, which leads me to believe that the reason it happened is because the pollsters did a bad job.

It might not be their fault exactly. Perhaps it is no longer possible to consistently and repeatedly build a sample that is reflective of the population. Can online panels be reliably effective when they aren't national? Work will have to be done to determine why this is happening and how it can be avoided. I have no doubt that the pollsters will eventually tackle the new challenges that they face. The question is how long it will take and whether it can be done in a country like Canada.

It puts into question the validity of the work I do. I write about polls every day for this site, for The Globe and Mail, for The Huffington Post Canada, and for The Hill Times. I give radio and television interviews about them. It is my full-time job. I've always approached it as a professional and have tried to provide insightful analysis of polling, separately from my role as a forecaster. No one in Canada who doesn't work for a polling firm writes about polls as much as I do.

How can I credibly continue to do so when I myself doubt that the results are reliable? While I was shocked when I saw the results last night, a part of me was not surprised that the polls had gotten it wrong all over again. If I go into every election assuming that disaster is more likely than triumph, what is the point?

This site was meant to be a way to cut through the confusion in polling and give a good idea of what, as a whole, the polls are saying. The site can still do that, but if what the polls are saying is not reflective of reality, what use is it?

My projection was wrong because the polls were wrong. Again. I am sorry that it was so. I can blame the pollsters for providing me with unreliable information, but I am nevertheless responsible for what is posted here, for the defense of polling I have mounted for the last few years, and for whatever confidence I expressed when analyzing the numbers in an attempt to inform readers about the state of the race in British Columbia and elsewhere. I apologize for that. Where do we go from here?

Tuesday, May 14, 2013

Final projection: Dix's B.C. NDP heavily favoured to win

Election night update: The results will fall within the forecasted high and low outcomes, but an outcome like this was considered unlikely. It seems, instead, that with the polling we have in Canada we can expect these sorts of surprises more often. It is very disappointing.

The B.C. New Democrats under Adrian Dix should win tonight's election in British Columbia, pushing Christy Clark's B.C. Liberal Party to the opposition benches and ending their 12-year tenure in government.

ThreeHundredEight.com's final projection for the B.C. election has the NDP winning a majority government with the Liberals forming the Official Opposition. They should also be joined on the opposite side of the legislature by at least one independent MLA.

As recently as yesterday, based on the polls that had been in the field up to May 10, there was some doubt as to what the likely outcome of the election would be. The B.C. Liberals appeared to be closing the gap, and there was enough volatility to believe that the last weekend of the campaign could prove decisive. But the polls released yesterday, two of which were actually in the field yesterday, show the parties' support to have stabilized, giving Dix's NDP a comfortable lead.

The likely outcome

The projection gives the New Democrats between 44.1% and 47.9% of the vote, with 46% considered the most likely outcome. They should win between 44 and 55 seats, while 49 is considered most likely. That puts the NDP safely in majority territory. Unless the polls are glaringly inaccurate, there is every indication that Dix will be the next premier of British Columbia.

The Liberals are slated to take between 35.8% and 39.6% of the vote, or 37.7% more precisely. That should give them between 26 and 41 seats, while 35 is considered the most likely outcome. Clark's Liberals should then be able to form a robust opposition, and give the party some foundation upon which to rebuild.

The B.C. Greens under Jane Sterk are projected to finish third with 7.8% of the vote, or between 6.8% and 8.8%. They are not expected to win a seat, though they should put up some very strong numbers in the Greater Victoria region. There is an outside chance for an upset - in particular the riding of Oak Bay-Gordon Head should be watched. The projection model is probably unable to fully reflect the potential strength of Andrew Weaver's campaign, due to the low level of support the party received in the riding in the 2009 election.

John Cummins's B.C. Conservatives are projected to finish fourth with 5.2% of the vote, or between 4.3% and 6.1%. They are also not considered to be in the running to win a seat.

Another 3.2% of British Columbians (or between 1.6% and 4.8%) are expected to vote for independent candidates and smaller parties. As many as four independents could be elected, but the projection model considers the re-election of one independent to be the most likely outcome.

With only a matter of hours between the time the final polls of the campaign were in the field and the beginning of voting in British Columbia, the Liberals are estimated to have only a 1.7% chance of making up the difference or proving the polls wrong. The New Democrats have a 98.3% chance of ending up with more votes tonight than the Liberals.

They also have an 83.3% chance of winning more seats, giving the Liberals a 16.7% chance of proving the polls and the projection ranges wrong enough to emerge as the victors. Those chances take into account the possibility that the Liberals could win more seats with fewer votes than the NDP, but the odds are not very high. Nevertheless, recent elections have rewarded an abundance of caution.
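A hedged sketch of how win probabilities like these can be estimated: simulate many plausible election nights by drawing each party's vote share from a distribution around its projected mean, and count how often each party comes out ahead. The normal-error assumption and the 3-point standard deviation below are placeholders of my choosing; the post does not spell out the model's actual error distribution.

```python
import random

# Estimate the chance that party A finishes ahead of party B by
# simulating many election nights. The normal-error assumption and
# the 3-point standard deviation are illustrative placeholders.
def win_probability(mean_a, mean_b, sd, trials=100_000, seed=42):
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(trials):
        if rng.gauss(mean_a, sd) > rng.gauss(mean_b, sd):
            a_wins += 1
    return a_wins / trials

# Projected means from the final projection: NDP 46.0%, Liberals 37.7%.
p_ndp = win_probability(46.0, 37.7, 3.0)
```

The same machinery, run seat by seat instead of on the province-wide vote, is what produces separate probabilities for winning the popular vote and winning the most seats.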

Being prepared for the unexpected

The forecasting model is designed to consider the possibility of an Alberta-level event, both in terms of the potential inaccuracies in the polls and a late swing in voting intentions. But there is little indication that something like Alberta is in the works, whereas my final projection for that election was soaked in uncertainty. I am far more confident in this final projection than I was with Alberta's, but the forecasting model does consider it possible for the Liberals to eke out a victory. With the volatility we have seen in the regional-level polling, the Liberals could win as many as 60 seats or be reduced to as few as five. The NDP could win as many as 78 or as few as 23. These are extremely unlikely outcomes. The polls would need to be disastrously wrong to cause such a surprise.

The forecast ranges for the Greens and Conservatives are perhaps a little more realistic. Polling for these small parties can be more difficult, particularly when it comes to trying to capture individually strong local campaigns. The Greens have been polling surprisingly well in the Interior and North despite an incomplete slate, and the forecasting model thus gives them the potential to pull off an upset there. More likely, however, is that the polls and the seat projection model could be unable to accurately record what is going on in some of the ridings in the Victoria region. For these reasons, the forecasting model considers as many as eight seats a possibility for the Greens, though anything above two should be considered very implausible.

For the Conservatives, pulling off a surprise somewhere in the Interior should not be ruled out. However, the numbers have not been heading in the right direction for them.

But for all these forecasts, we're merely looking at the plausible rather than the probable. The tighter projection ranges are the most likely outcomes, especially considering the stability of the final polls of the campaign as well as the track records of the firms who were in the field yesterday (both Angus-Reid and Ipsos-Reid have long and successful histories in British Columbia).

Regional breakdowns

The New Democrats are very well positioned in the southwestern corner of the province, with a strong lead in metropolitan Vancouver and a very wide one on Vancouver Island.

In and around Vancouver, the New Democrats are projected to take between 45% and 50.6% of the vote, giving them 24 or 25 seats. The Liberals should take between 35.8% and 41.2% support and between 14 and 16 seats. The Greens are expected to capture between 5.1% and 7.9% support, while the Conservatives are considered very likely to finish fourth with between 3% and 5.2% of the vote. One independent is expected to be elected as well.

On Vancouver Island, the New Democrats should take between 43.9% and 52.9% of the vote and win between 11 and 14 seats. The Liberals are projected at between 27% and 35.2% of the vote and as many as three seats. The Greens should finish third with between 10.4% and 16.6% of the vote, while the Conservatives should take between 3.7% and 7.9%. Though the Greens are not projected to win any seats, the forecast puts them in the running for as many as three.

In the B.C. Interior and North, the race is far more competitive. Either the NDP or Liberals could win the most support, with the slight edge being given to the Liberals. They should take between 37.8% and 45% of the vote, while the NDP stands at between 37.3% and 44.5% support. That gives the Liberals between 12 and 22 seats and the NDP between nine and 16. The Conservatives or Greens will finish third, with the Conservatives favoured at between 5.2% and 9% support. The Greens should capture between 4.5% and 8.1% of the vote. As many as three independents could be elected in the region, with between 2.2% and 6.5% support.

What the polls have shown

There is no doubt that the Liberals were able to close the gap due to an energetic campaign and a decent debate performance by Clark, aided by a very safe NDP campaign that allowed the Liberals to dominate the agenda. But after taking an initial hit, the NDP vote stabilized and the Liberals were unable to make up enough ground to turn things around in the final weeks.

The projection adjusts the polls slightly in order to take into account the over-estimation of support that polls have traditionally been guilty of in the case of Green parties throughout Canada, and more generally of small parties without a seat in the legislature (in this case, the Conservatives). The model also estimates the support of independents and other parties independently of the polls.

Without these adjustments, the numbers change slightly. The weighted, unadjusted poll average would then give the NDP 44.2% of the vote to 36.2% for the Liberals, 9.8% for the Greens, 6.9% for the Conservatives, and 2.9% for the others. Conservative support in the Interior and North would sit at 9.2%, while the Greens would be at 16.6% on Vancouver Island. Without these adjustments, the Greens and Conservatives might be considered more likely to win a seat. But there would be no consequential difference in the overall projected winner.
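The averaging and adjustment steps described above can be sketched as follows. The poll numbers, the weights, and the 15% haircut applied to the small parties are illustrative placeholders; the site's actual weighting scheme and adjustment factors are not published in this post.

```python
# A minimal sketch of a weighted poll average followed by a
# small-party adjustment. Weights, shares, and the adjustment
# factor are placeholders, not the site's actual values.

def weighted_average(polls):
    """polls: list of (weight, {party: support}) pairs."""
    total_weight = sum(w for w, _ in polls)
    parties = polls[0][1].keys()
    return {p: sum(w * shares[p] for w, shares in polls) / total_weight
            for p in parties}

def adjust_small_parties(avg, small_parties, factor=0.85):
    """Scale down parties that polls have historically over-estimated,
    and hand the freed-up share to the remaining parties pro rata."""
    adjusted = dict(avg)
    freed = 0.0
    for p in small_parties:
        freed += adjusted[p] * (1 - factor)
        adjusted[p] *= factor
    majors = [p for p in adjusted if p not in small_parties]
    major_total = sum(adjusted[p] for p in majors)
    for p in majors:
        adjusted[p] += freed * adjusted[p] / major_total
    return adjusted

polls = [
    (1.0, {"NDP": 45, "LIB": 35, "GRN": 11, "CON": 7, "OTH": 2}),
    (1.5, {"NDP": 44, "LIB": 37, "GRN": 9,  "CON": 7, "OTH": 3}),
]
avg = weighted_average(polls)
final = adjust_small_parties(avg, {"GRN", "CON"})
```

The key property is that the adjustment only shuffles support between parties: the total stays fixed, the Greens and Conservatives come down, and the larger parties absorb the difference.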

How the leaders have fared

Campaigns clearly matter. Prior to the campaign, Adrian Dix had a double-digit lead over Christy Clark on the question of who would make the best premier. The latest polls suggest that the gap has closed to only three points. The last three surveys have averaged 30.7% for Dix on this question, with Clark at 27.7%. The campaign had a significant effect on Clark's ratings on this score, and for a brief moment she was even polling better than the NDP leader.

Clark's approval rating also improved during the course of the race, but still averaged only 34.7% at the campaign's end. Dix's approval rating was 41.7%, putting him in a tie with Jane Sterk. Her approval soared during the campaign, but she had already surpassed Clark by the end of March. Cummins's approval rating held more or less steady, and finished at 21%.

In terms of their chances of election, only Adrian Dix is favoured to be in the legislature after the dust settles. Too much should not be made of the individual riding projections (the overall numbers are more important), but the probability calculations attached to them make the forecasts for the leaders' own ridings more interesting.

Riding projections

Dix is a lock to win Vancouver-Kingsway, with the model estimating his re-election odds to be 94%. Clark's re-election chances are not nearly as good, however. The NDP is actually favoured to win her riding, with an estimated 62% chance of winning, but that is not much better than a coin flip. Sterk is expected to put up some strong numbers in Victoria-Beacon Hill, but the NDP's Carole James is given an 87% chance of being re-elected. And the Liberals are given a 62% chance of holding on to Langley, where Cummins is running. The NDP is considered more likely to win it than Cummins, though the model is almost certainly under-estimating the party leader's drawing power.

The importance of a campaign

Considering just how long the New Democrats under Adrian Dix have been leading in the polls in British Columbia, and the dozen years the Liberals have been in power, an NDP victory should not come as too much of a surprise. At the campaign's outset, the New Democrats were leading by 18 points. That Clark's Liberals were able to reduce that lead to only eight points and put themselves in a position where the foregone conclusion became a potentially historic comeback is a testament to the importance of an election campaign. 

In the end, however, the result is what counts. The New Democrats have been favoured to win this election for many months. That they will probably win it with a smaller margin than they had enjoyed for the 12 months or so prior to the campaign's start does not invalidate those expectations. Enough British Columbians changed their minds during the last four weeks to change the tone of the race, but the numbers do not lie. It was always going to take a pitch-perfect campaign, and a lot of luck, for Clark's Liberals to overcome the huge hill that had formed in front of them. They put up a strong fight, but the hill may simply have been too steep.

The polls were getting a little uncertain of themselves in the last week of the campaign, but they are now clear and consistent. Reasons for doubt existed in Alberta, and plausible excuses were made for that debacle. There will be no such excuse this time. Unless the polling industry is on the verge of an even more humiliating and unlikely collapse, Adrian Dix will become the next Premier of British Columbia.