Friday, June 13, 2014

Ontario election post-mortem: likely voter models fall flat, eligible tallies good

In the end, the race was not nearly as close as expected: Kathleen Wynne's Liberals won a majority government of 58 seats, with the Tories taking 28 and the New Democrats 21. The results have cost Tim Hudak his job, as his party took just 31.2% of the vote, while the Liberals improved on 2011 with 38.7% support and the NDP marginally upped their tally to 23.7%. The Greens had a good night, relatively speaking, with 4.8% of the vote.

Note: This post has been updated to reflect the reversed result in Thornhill, originally awarded to the Liberals but reverted to the PCs by Elections Ontario. The projection originally gave Thornhill to the PCs.

How about the polls and the projection model? The polls were not the miss that some commentators have suggested. As I write in The Globe and Mail today, the traditional numbers reported by pollsters actually did quite well, under-estimating the Liberals enough to put a majority in doubt but generally tracking the race accurately. However, the likely voter models employed during the campaign, and favoured by the projection model here, did not do the job at all. Every pollster that used a likely voter model did worse with it than they did with their estimates of eligible voter support.
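How do those likely voter models work? Approaches vary and most are proprietary, but the core idea is to re-weight (or screen out) respondents according to their stated likelihood of voting. A minimal sketch, with an entirely hypothetical sample and weighting scheme:

```python
# Hypothetical sketch of a likely voter adjustment: respondents are
# re-weighted by self-reported likelihood of voting (a 0-10 scale is common).
# Real pollster models vary and are mostly proprietary.

respondents = [
    # (party preference, stated likelihood of voting on a 0-10 scale)
    ("OLP", 9), ("PC", 10), ("NDP", 6), ("OLP", 4), ("PC", 8),
    ("NDP", 10), ("OLP", 7), ("GPO", 5), ("PC", 9), ("OLP", 10),
]

def tally(sample, weight_fn):
    """Return each party's percentage share of the weighted sample."""
    totals = {}
    for party, likelihood in sample:
        totals[party] = totals.get(party, 0.0) + weight_fn(likelihood)
    grand = sum(totals.values())
    return {party: round(100.0 * w / grand, 1) for party, w in totals.items()}

eligible = tally(respondents, lambda l: 1.0)      # everyone counts equally
likely = tally(respondents, lambda l: l / 10.0)   # weight by stated likelihood

print("Eligible voters:", eligible)
print("Likely voters:  ", likely)
```

In principle, the re-weighted tally should land closer to the actual result when turnout is low; in this election, it moved the estimates further away.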

As a result of this miss by the likely voter models, the projection model was off a fair bit. The projected support levels for the Liberals, New Democrats, and Greens fell within the likely ranges, which is a success. The PC number, however, was below even the 95% confidence interval.

In terms of seats, the Liberals ended up between the high and maximum expected ranges and the PCs between the minimum and low ranges. That is no coincidence, as the vast majority of the projection model's misses were seats projected to go PC that actually went Liberal. The NDP result was only one seat off the projection. So, a more mixed record there.

But the riding model itself did extraordinarily well. It only missed the call in 10 ridings, for an accuracy rating of 91%. That is the best performance of the model since the 2011 provincial election in Manitoba, when 56 of 57 ridings were accurately called. Taking into account the likely ranges (which should be the focus of anyone looking at these riding estimates), in only six ridings was the potential winner not identified. That ups the accuracy rating to 94%, or 101 out of 107.
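The arithmetic behind those accuracy ratings, for anyone checking:

```python
# The riding-call accuracy ratings reported above, verified.
ridings = 107
missed_calls = 10       # wrong point calls
missed_with_ranges = 6  # ridings where even the likely range excluded the winner

print(f"{ridings - missed_calls}/{ridings} = "
      f"{100 * (ridings - missed_calls) / ridings:.0f}%")        # 97/107 = 91%
print(f"{ridings - missed_with_ranges}/{ridings} = "
      f"{100 * (ridings - missed_with_ranges) / ridings:.0f}%")  # 101/107 = 94%
```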

Of those 10 misses, seven were seats projected to go PC that instead went Liberal. One was expected to go PC but went NDP, while two were expected to go NDP but went Liberal. Half of the misses were in the 905 area code, where the PCs did unexpectedly poorly.

Nine of the 10 misses were called with 67% confidence or less, with five of them being called with less than 60% confidence. Only Durham, erroneously called for the Tories at 80% confidence, was a serious outlier.

The model would have done slightly better had I ignored the likely voter models. The vote projection would have been 37% for the Liberals, 33% for the PCs, and 24% for the NDP, with 50 seats going to the Liberals (or between 46 and 56 at the most likely range), 35 to the Tories (31-39) and 22 to the NDP (19-24). So, the Liberals and PCs would have still fallen outside the likely range, though less dramatically.
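For context, the vote projection is essentially a weighted average of the polls. The exact weighting scheme isn't reproduced in this post, but a rough sketch of the idea, with hypothetical polls and a hypothetical weighting that favours fresh polls with larger samples, looks like this:

```python
import math

# A rough sketch of a weighted poll average. The weighting below
# (favouring recent polls and large samples) is hypothetical, not
# the model's actual scheme.
polls = [
    # (days before the vote, sample size, {party: support %})
    (2, 1500, {"OLP": 38.0, "PC": 31.0, "NDP": 24.0}),
    (3, 1000, {"OLP": 36.0, "PC": 33.0, "NDP": 23.0}),
    (9,  800, {"OLP": 35.0, "PC": 34.0, "NDP": 24.0}),
]

def aggregate(polls):
    shares, total_weight = {}, 0.0
    for age, n, support in polls:
        weight = math.sqrt(n) / (1 + age)  # hypothetical recency/size weighting
        total_weight += weight
        for party, pct in support.items():
            shares[party] = shares.get(party, 0.0) + weight * pct
    return {party: round(s / total_weight, 1) for party, s in shares.items()}

print(aggregate(polls))
```

Swapping eligible voter numbers in for the likely voter ones in an average like this is all it takes to produce the alternative projection above.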

The model would have called 98 of 107 ridings correctly, for an accuracy of 92%, while the accuracy rating when incorporating the likely ranges would have been 102 out of 107, or 95%.

But what if the polls had been dead-on? The seat projection model needs to be able to turn actual popular vote results into accurate seat projections; otherwise it would be hopeless at turning poll numbers into seats. On this score, the model did quite well.
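The mechanics of the riding model aren't spelled out in this post, but one common way to turn a regional vote projection into riding-level calls is proportional swing: scale each party's last-election share in each riding by its regional change in support, then renormalize. A simplified sketch with hypothetical numbers:

```python
# Proportional swing in one hypothetical riding: scale each party's 2011
# share by its regional swing (projected regional share / 2011 regional
# share), then renormalize so the shares sum to 100.

riding_2011 = {"OLP": 42.0, "PC": 38.0, "NDP": 16.0, "GPO": 4.0}
regional_swing = {"OLP": 1.05, "PC": 0.92, "NDP": 1.01, "GPO": 1.10}

raw = {p: riding_2011[p] * regional_swing[p] for p in riding_2011}
total = sum(raw.values())
projected = {p: round(100.0 * v / total, 1) for p, v in raw.items()}

print(projected, "->", max(projected, key=projected.get))
```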

The actual results for all three parties would have fallen within the likely ranges, with the Liberals on the higher end and the PCs on the lower end. This is one indication of how the Tories really had a poor night.

The overall call, instead of 'likely Liberal, possible PC victory', would have been 'Liberal victory, possible majority'.

The projection model would have called 98 of 107 ridings correctly with actual results, but when including the likely ranges at the riding level the model would have identified the potential winner accurately in 104 of 107 ridings, for an accuracy rating of 97%. The three ridings that bucked the trends? Cambridge, Durham, and Sudbury. The misses would have been called with an average confidence of only 55.4%.

All in all, the seat projection model performed as it should. The vote projection model did more poorly, but it can only be as good as the polls that go into it. A look at the regional projections, though, shows where the polls missed the call - or, perhaps, where the parties over- and under-achieved.

The Liberals out-performed the polls across the board, though nowhere dramatically. Their actual result fell within the projected likely ranges in every region except eastern Ontario, where their 38.9% result was slightly above the expected high of 37.2%. Overall, they were up one to three points in each region.

The Tories under-performed in every region of the province. They were below expectations by three points in the north/central and eastern regions and four to five points in Toronto, the 905, and the southwest. Their result fell below the expected ranges in every region. They had a horrible night where turnout is concerned.

The New Democrats aligned quite closely with the aggregate projection, with differences of no more than 0.5 points in every region except the southwest, where they took slightly more of the vote than the high end of the likely range.

The Greens, shockingly, outperformed the polls in every region. That is a very rare occurrence.

But what if the projection had ignored those likely voter models? In most cases, the projection would have been closer. But still, the Liberals would have outperformed the polls in every region and the PCs would have under-achieved across the board. The NDP, more traditionally, would have under-achieved in most regions as well, as would have the Greens.

So, let's get to grading the pollsters. This should be done with a great deal of caution. Recall that most of these polls have a margin of error (theoretical or otherwise) of about two to four points for each party. And these sweepstakes, decided by a few decimal points, are not nearly as meaningful as many make them out to be.
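For reference, that two-to-four-point figure is just the standard 95% margin of error for a proportion, 1.96 × sqrt(p(1 − p)/n), applied to samples of the sizes these firms typically field (and only theoretical for non-random online panels):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p in a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A party at 35% support in a 1,000-person sample:
print(f"+/- {100 * margin_of_error(0.35, 1000):.1f} points")  # about +/- 3.0
```

But with that caveat, let's put the results on the record.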

In the chart below, I've ranked the pollsters by cumulative error for the four main parties, including both likely and eligible voter tallies. I've also highlighted in yellow every estimate that was within two percentage points of the result. In this regard, kudos needs to go to EKOS Research, the only firm to call two of the three main parties within two percentage points. In terms of the eventual outcome, their poll was probably the most informative, though they had the NDP too low.
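"Cumulative error" above is simply the sum of the absolute differences between a pollster's estimates and the actual results across the four main parties. A quick sketch, checked against the Nanos numbers discussed further down:

```python
# Cumulative error: the sum of absolute differences between a pollster's
# estimates and the actual result, across the four main parties.
actual = {"OLP": 38.7, "PC": 31.2, "NDP": 23.7, "GPO": 4.8}

def cumulative_error(poll):
    return sum(abs(poll[party] - actual[party]) for party in actual)

# The late-May Nanos numbers cited below:
nanos = {"OLP": 37.7, "PC": 31.2, "NDP": 23.7, "GPO": 5.3}
print(round(cumulative_error(nanos), 1))  # 1.5 points
```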

Angus Reid Global's June 8-10 poll of all eligible voters turned out to have the least total error, at just six points. Their numbers suggested a Liberal minority, however, as did Abacus Data's June 9-11 poll of eligible voters.

ThreeHundredEight.com would have ranked third among pollsters (or fourth among number sets), though if the likely voter models had been ignored the total error would have been just 4.4 points, putting the projection at the top of the list.

Oracle placed narrowly ahead of EKOS's eligible voter numbers, but the portrait of the race they painted (PC lead of one point) was not reflective of the outcome. After that, the errors become more serious, though both Forum Research and Ipsos Reid (eligible only) did have the Liberals in front.

You might be wondering why Nanos Research's poll is not included in the list. With a total error of just 1.5 points, the poll would have been - by far - the most accurate (Nanos had it as 37.7% OLP, 31.2% PC, 23.7% NDP, 5.3% GPO). But the Nanos poll was out of the field on May 26, 17 days before the election. While it is possible that voting intentions remained static during those 17 days, that is not something we can assume. And if we're allowing a 17-day-old poll to be used as a measuring stick, then the Abacus Data poll of May 28-31 (37% OLP, 30% PC, 24% NDP) was almost as good.

So where do we go from here? Clearly, the likely voter models are still in an experimental phase. When employed in Nova Scotia and Quebec, the first recent provincial elections in which we saw them used, they improved the estimates only marginally, when they did not worsen them. We may conclude, then, that for the time being Canadian pollsters are not yet able to estimate support among likely voters with more consistent accuracy than they estimate support among the entire eligible population.

This is counter-intuitive, however: likely voter models should improve things, particularly when turnout was only slightly above 50% yesterday. Going forward, ThreeHundredEight.com should perhaps rely solely on the eligible voter numbers until the likely voter models consistently prove their worth, while running a lesser, simultaneous model that takes the likely voter estimates into account. That may provide the best of both worlds, giving readers food for thought and all the available information, even if it does not make things any clearer. The challenges of polling elections in the modern age continue.