Wednesday, October 9, 2013

Nova Scotia: how did the polls and the projection do?

After the trauma of the elections in Alberta and British Columbia, it was almost more surprising to see last night's results in Nova Scotia generally align with what the polls were suggesting they would be.

Some pollsters did better than others. My detailed analysis of how the polls did can be found on The Globe and Mail's website. The article is available to Globe Unlimited subscribers only. If you like this kind of analysis and want to see more of it on The Globe and Mail as well as here on ThreeHundredEight.com, it needs your support. If you don't already have an online subscription to The Globe and Mail, please consider signing up, as you may find my subscriber-only articles appearing more often. You can find details on how to sign up via the link above.

The projection did moderately well, particularly in terms of the Liberals. Their vote haul of 45.5% fell just below the likely ranges, which bottomed out at 45.8%. But there was a 21% chance that their vote would fall between the minimum and low projected ranges. Their total of 33 seats was projected exactly right.

At 26.9%, the New Democrats were just 0.8 points off from their projected vote total. Falling between the average and high ranges, as they did, was given a 68% probability. That they won only seven seats instead of the projected range of 12 to 15 was a surprise, but this was largely due to the performance of the Progressive Conservatives. Those seven seats did fall between the minimum and low range, though that outcome was considered only 5% likely.

The Tories ended up just above the projected maximum level of support, with 26.4% instead of 26.3%, and took 11 seats instead of the maximum of nine. That is unfortunate, to say the least. But this is partly due to the large share of the projection taken up by the final poll from Forum Research. More on why that was a problem later.

The Greens took 0.9% of the vote, just below the projected low range. Falling between the minimum and low level of support was considered a 35% chance, so nothing untoward there.

But one of the main reasons why the projection did miss on the seats for the Progressive Conservatives was that the polls did not do a particularly good job of gauging regional support, especially in Cape Breton. This is how it broke down last night:
The polling in Halifax and in the rest of the mainland was generally good, though no one got it bang-on. Cape Breton was a little more difficult. The full sample from Abacus, spanning the entire final week, was relatively close, but the firm's last set of numbers had the PCs doing much better, while the last sets from Forum and CRA had them doing much worse. The samples in Cape Breton were generally too small, though, to get a good bead on the race.

The model did have some trouble translating the regional numbers into good seat totals. With the actual regional numbers plugged into the model, it would have projected a likely outcome of 22-33 seats for the Liberals (28 to be exact), 13-17 seats for the NDP (13), and 7-16 seats for the Progressive Conservatives (10). The Liberals ended up at the high end of that range and the PCs right in the middle of it, but again the New Democrats would have been over-estimated, falling in the minimum-to-low range. This is actually the first time in ThreeHundredEight's history of projecting a dozen elections that the model performed worse with the actual results plugged into it than it did with the polls.

This demonstrates two things. The first is the limitations of a seat projection model in smaller provinces - on average, only about 8,100 votes were cast per riding. That means local factors can be especially important, and that is shown by the direction in which the model erred: every which way. Normally, the model's errors run in the same direction as a party's over- or under-estimation. In Nova Scotia, however, that was not the case. For example, in the eight ridings the Liberals won where they were not projected to win, the NDP was projected to take six of them and the Tories two. In the seven ridings the PCs won where they were not projected to win, the Liberals were projected to win five of them and the NDP two.

Secondly, it highlights just how poorly the New Democrats did. They should have won more seats with the share of the vote they captured, particularly in Halifax. They lost too many races that should have been winnable, with the Liberals benefiting.

In an election where just over half of those ridings that were not entirely new changed hands, the projection model was not particularly strong at the riding level. It made 18 errors, calling 33 of 51 ridings correctly for an accuracy rating of 65%. Taking into account the projected likely ranges, the likely winner was identified in 37 of 51 ridings, for an accuracy rating of 73%. That is not very good, but it shows the hazards of small elections. The idea that the overall numbers are more important, and that riding-level errors balance each other out, is especially true in smaller elections.

The probability ratings for the riding projections did a decent job, however, with an average confidence of 72% in the ridings incorrectly called compared to 81% in the ridings correctly called. Half of the ridings incorrectly called had a confidence level of 70% or less.

One of the problems with this campaign was the emergence of the Forum Research poll on the eve of the election. Because the model weighs a poll by its median field date, it rewards a firm like Forum that does its polling in a single day. That is not the best practice, since polling over a few days smooths out the data and ensures it isn't too dependent on the factors that can skew a single-day poll (are the people available to take a call on a Monday night different from those who can take one on a Sunday night?). Coupled with Forum's large samples (a product of the cost-effectiveness of IVR polling), the final Forum poll took up almost three-quarters of the projection.
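To see why a single-day, large-sample poll can swamp a projection, here is a rough sketch of recency-and-size weighting. The exponential decay, the half-life, and the field dates and sample sizes below are all hypothetical illustrations, not the model's actual formula or the pollsters' actual figures:

```python
from datetime import date

def poll_weight(poll_date, election_date, sample_size, half_life_days=3.0):
    """Illustrative weight: newer and larger polls count for more.
    The decay form and half-life are assumptions, not the model's
    real parameters."""
    age_in_days = (election_date - poll_date).days
    recency = 0.5 ** (age_in_days / half_life_days)
    return recency * sample_size

election = date(2013, 10, 8)

# Hypothetical median field dates and sample sizes, for illustration only.
polls = {
    "Forum (single day, Oct. 7)": (date(2013, 10, 7), 1000),
    "CRA (median Oct. 2)": (date(2013, 10, 2), 800),
    "Abacus (median Oct. 5)": (date(2013, 10, 5), 600),
}

weights = {name: poll_weight(d, election, n) for name, (d, n) in polls.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name}: {w / total:.0%} of the projection")
```

Even with these made-up numbers, the eve-of-election poll ends up with a majority of the total weight, which is the dynamic described above.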

I don't think it is appropriate to reward this kind of poll, so going forward I will be weighting polls by their final date in the field, particularly during an election campaign. Doing so would have dated CRA's poll to Oct. 3 instead of Oct. 2 and Abacus's to Oct. 6 instead of Oct. 5. At the very least, this would have brought the Progressive Conservative result into the maximum range and undoubtedly produced a better projection.
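The change amounts to picking a different "effective date" for each poll. A minimal sketch, assuming field periods consistent with the median and final dates mentioned for CRA and Abacus (the exact start dates are my guesses):

```python
from datetime import date

def effective_date(first_day, last_day, by="last"):
    """Date used to age-weight a poll: its median field date, or,
    under the revised rule, its final day in the field."""
    if by == "median":
        return first_day + (last_day - first_day) // 2
    return last_day

# Assumed field periods (first day, last day).
cra = (date(2013, 10, 1), date(2013, 10, 3))
abacus = (date(2013, 10, 4), date(2013, 10, 6))

print(effective_date(*cra, by="median"), "->", effective_date(*cra))        # Oct. 2 -> Oct. 3
print(effective_date(*abacus, by="median"), "->", effective_date(*abacus))  # Oct. 5 -> Oct. 6
```

A single-day poll like Forum's gains nothing from the change (its median and final dates are the same day), while multi-day polls lose a day or two of the age penalty, narrowing the gap.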

It is gratifying, however, that the polls did a good job in Nova Scotia. That makes the coverage this site provided throughout the campaign useful, since it was an accurate reflection of the ups and downs of the last four weeks. The lessons learned from this campaign will be digested and the new data added to the model's calibrations, and on we go to the next vote, slightly more confident than we were yesterday that we can trust what the polls are telling us.