Tuesday, July 8, 2014

Nanos poll puts Tory ahead

Polling from Forum Research has dominated the Toronto mayoral race, with the last non-Forum poll (by Ipsos Reid) dating from November 2013. In those Forum polls, Olivia Chow has consistently led by a comfortable margin, with John Tory mostly placing second since the latest Rob Ford fiasco. But a new poll by Nanos Research turns that on its head: Ford is still out of the running, but Tory is given the lead, with Chow in second place.

What's going on? At first glance, it is difficult to say that the Nanos poll is an outlier, as we have only heard from Forum. That all of Forum's polls have shown something similar does not, on its own, discount Nanos's findings. We'd need multiple pollsters agreeing with Forum before we could say that Nanos is out of step. For all we know, Nanos could poll every month, just like Forum, and show the same consistency in its results. We need a third opinion before reaching any conclusions.

This Nanos poll was commissioned by the Ontario Convenience Stores Association, though published directly by Nanos. According to the Nanos report, "the vote and issue module was asked first in the survey followed by some proprietary questions related to convenience stores." This should ensure that the sample was unbiased by the sponsor.

The poll found that 39% of decided and leaning voters supported Tory, followed by Chow at 33% and Ford at just 22%. Karen Stintz had 4% support, with Sarah Thomson at 2% and David Soknacki at just 1%. About 11% of respondents were undecided and not leaning towards any candidate.

This is quite different from the portrait Forum has been painting. In recent polls, Chow has ranged between 35% and 40%, with Tory between 25% and 29%. The Ford numbers are a little lower than most of Forum's recent polls, but not out of the ordinary. Soknacki's 1% is also on the low side.

If we take into account the margin of error from Nanos's survey of decided and leaning voters (about +/- 4%), Tory could be as low as about 35% and Chow as high as 37%. That puts Chow in the ballpark of the Forum polls but Tory is still quite high. Forum was last in the field on July 2 and had Chow at 38% among decided voters, with Tory at 28%. Could there have been dramatic movement on July 3-5 or is there a methodological issue causing the difference?
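For readers who want to check the quoted margin of error themselves, it follows from the standard formula for a simple random sample. A minimal sketch (assuming a sample of roughly 600 decided and leaning voters; the conventional +/- 4% figure takes the worst case, p = 0.5):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Conventional worst-case (p = 0.5) margin for a sample of about 600:
print(round(moe(600) * 100, 1))           # -> 4.0 percentage points

# The margin is slightly tighter at Tory's reported 39%:
print(round(moe(600, p=0.39) * 100, 1))   # -> 3.9
```

This is the arithmetic behind the 35%/37% overlap noted above: 39 - 4 and 33 + 4 leave the two leaders' confidence intervals just touching.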

We can probably assume that methodology is the main factor here. Forum conducts its polls via interactive voice response (robo-dialing) over a very short period (usually a few hours), which rules out call-backs: repeated attempts to reach a randomly dialed number when the respondent doesn't pick up at first. Call-backs lessen the error that can creep in from sampling only the subset of the population that happens to be at home on a particular day of the week.

Nanos, on the other hand, conducted this poll with live interviewers and dialed both cell phones and landlines (Forum does not make clear whether it does so) over a few days, which allowed it to carry out call-backs (Nanos points out that it conducted a maximum of five call-backs). These differences in sampling method may thus be behind the variation between the polls.

It isn't the first time Nanos and Forum have diverged. In their gauging of the Ontario provincial scene, the two firms rarely agreed. In January, Nanos gave the Liberals an eight-point edge over the Tories; in the same month, Forum gave the PCs a three-point lead. They disagreed on who led in Ontario again in February, again in April, and on numerous occasions in 2013. So to see Nanos and Forum at odds here is nothing new. Nanos did not poll late enough in the Ontario provincial campaign for us to measure its performance against Forum's, however.

What about in the 2010 mayoral election? Both Forum and Nanos were active, but well before Election Day. The final result of that contest was 47% for Ford, 36% for George Smitherman, and 12% for Joe Pantalone. Forum did a poll 11 days before the vote, and gave 44% to Ford, 38% to Smitherman, and 16% to Pantalone. Not a bad performance. Nanos, polling nine days before the vote, gave 44% to Ford, 41% to Smitherman, and 15% to Pantalone. A little worse, but we're talking margin-of-error differences.

(Note, there has been some chatter on Twitter about the reliability of this poll, since it was commissioned by a lobby group that has some past links to Tory. The people casting aspersions on the poll have every political incentive to do so, considering they are supporters of the Chow campaign. What they are implying is that the poll has been manipulated in Tory's favour to please the OCSA, but the logical extension of that smear is that Nanos Research manipulated its own poll. No one is saying that directly because they have no proof and to suggest such a thing would be libelous. It is also implausible. Nanos's success in business banks on being reliable and objective, and it is ridiculous to believe that the company would purposefully manipulate a poll and then release it publicly, data tables and all. The poll was not even published by the OCSA, which would raise some flags, but directly by Nanos, something that the polling firm would have absolutely no reason to do if the numbers were manipulated to please a client. It might be too much to expect from some political operatives, but they should exercise a little caution before they accuse individuals of being lying shills.)

The poll did have some other interesting tidbits. As in all its polling, Nanos asked respondents to list their first two choices. Combining them gives us an indication of each candidate's potential ceiling. Tory's is the highest (his approval rating has been the highest in Forum's polling as well) at 57%, with Chow not far behind at 46%. After that, no one has a real chance of pulling out a victory. Ford's ceiling tops out at just 27%, Stintz's at 17%, Soknacki's at 10%, and Thomson's at 4%.

Nanos also asked respondents to name their top issue - unprompted. This means survey takers had to come up with an issue themselves, rather than choose from a list. By far, transit was the top issue at 35%, followed by property taxes at 17%, jobs and the economy at 16%, and traffic at 14%. The embarrassment of Ford was named by only 4%, but that is not too surprising. Ford's antics might be a reason someone won't vote for him, but that doesn't stop the election from being about more important issues relevant to voters' daily lives.

As for whether it is Tory or Chow who leads, we're better off waiting to see what other polls say about this race before determining who is off the mark (both could be, of course!). It will also be interesting to see whether Nanos continues polling, and if so, what its results will be. For now, we can only say that the race remains primarily between Tory and Chow.

31 comments:

  1. Outside of wondering about the relatively small size of the poll it is encouraging in showing Ford making no progress really. I'll leave it to the voters to sort out the Chow-Tory question.

  2. While Nanos didn't poll late enough to get a great assessment of their numbers compared to the election results, we can compare Nanos to other polls in the field at the same time.

    Nanos' poll for May 26 was 37.7 OLP 31.2 PC 23.7 NDP (pretty much bang on the actual results).

    Abacus' general voters polls were 34/32/25 for May 24 and 37/30/24 for May 31. Forum's May 26 poll was 36/36/20.

    If you look at the rest of the pollsters at that time I think it remains pretty clear that Nanos was showing more or less the same picture as the Abacus-Angus Reid group, and not the same picture as the Forum-EKOS group or the Ipsos-Oracle group.

    Or put another way, Nanos was a better pollster than Forum in the Ontario 2014 election. I think that fits pretty well with the overall track record of each firm. Forum's quick-and-dirty polls really give them a mixed track record of brilliant calls and spectacular misses IMHO, making them very hard to rely on.

    I'd note too btw that Nanos was the most accurate pollster in the 2011 federal election (though Angus Reid was more precise).

  3. Besides the small poll size apparently it included a question about the sponsor who turn out to be a major Tory supporter ?? Does that invalidate the results ??

    Replies
    1. I covered this above.

    2. I know you did but I retain the right to be unconvinced. Politics is such a dirty business and we all know virtually anything goes.

  4. Don't like what the poll says, claim the pollster is biased against or for one candidate. Not sure I am a fan of that tendency...

  5. The poll size is ridiculously small. 600 in a city the size of Toronto. Those results therefore mean very little. That's the first thing Éric should have mentioned.

    Replies
    1. A sample of 600 is not ridiculously small. A sample that size has a margin of error of +/- 4%. A standard poll of 1,000 people has a margin of error of +/- 3%.

      (Size of the city is not important, a sample of 600 people has a similar margin of error whether the total population is 35 million or 5 million).

  6. Eric in this case margins of error and other statistical things are irrelevant when the sample size is so small. I agree with Jean. Given a city of over 1 million people 600 is ridiculous !

    Replies
    1. No, that isn't how it works. A sample of 600 people in a city of 100 million or 1 million has a margin of error of about +/- 4%.

  7. Eric
    Let's look at this in a clearer way. Assume a city of 600,000 population

    1% = 6000 people

    1/10 of 1% = 600 people

    Toronto has well over 600,000 people.

    So you are saying that numbers generated by less than 1/10 of 1% of the population represent 100% accuracy. Sorry but it sure don't !!

    Replies
    1. You're right, it doesn't represent 100% accuracy. And no pollster would ever claim it would. But a sample of 600 from a city of 100 million or 1 million people would have a margin of error of +/- 4%, 19 times out of 20.

    2. Well guess what. You just found the 1 time out of 20 !!

      Because aside from the built in bias of that poll the numbers simply don't work !!

    3. So you don't like the results, therefore the poll is wrong, Peter?

      That's not the way the world works.

    4. No Ryan this isn't about right or wrong re results. This is about the specious use of a very small sample to predict a result.

    5. Given the MOE, Tory could be at 43% or 35%, Chow at 37% or 29%, and Ford at 26% or 18%. All three major candidates still have everything to play for.

    6. Peter, the sample is not abnormally small, and the poll is not a prediction of the election in October, but an estimate of current support levels.

    7. Sorry Eric but less than 1/10 of 1% is ridiculously small.

      Accept it !!

    8. We routinely analyze polls of 1,000 Canadians, or roughly 0.003% of the entire population. Are they ridiculously small?

      I'm afraid you are simply mistaken, Peter. This is math, not some matter of opinion.

    9. Peter, as someone who teaches courses that include introductory statistics, Eric is completely correct on this. When you write "margins of error ... are irrelevant when the sample size is so small" you reveal a complete lack of understanding of the subject, because the sample size is precisely the basis for calculating the margin of error. And Eric is also correct that you consider the absolute size of the sample, not relative to the population. Now if you want to bring up issues of bias (in the statistical sense), as others have on this thread, that's debatable, because we don't have a clear and unambiguous way to measure bias. (If we did, we'd just remove it.) But the random fluctuation that comes from sampling, that is exactly where the sample size makes a difference, and we do have a precise way to quantify it: the margin of error, in conjunction with the confidence level (almost always 95%).
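To make the point in that comment concrete, here is a quick numerical sketch (standard formulas only, nothing specific to Nanos's actual weighting): the margin of error is driven by the sample size, and the finite population correction shows just how little the population size matters.

```python
import math

def moe(n, population=None, p=0.5, z=1.96):
    # 95% margin of error for a proportion p from n respondents;
    # applies the finite population correction (FPC) when a
    # population size is supplied.
    m = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        m *= math.sqrt((population - n) / (population - 1))
    return m

# The same 600-person sample against wildly different populations:
for pop in (1_000_000, 2_600_000, 100_000_000):
    print(pop, round(moe(600, population=pop) * 100, 2))  # each -> 4.0
```

The FPC nudges the +/- 4% figure only in the third decimal place for any population in the millions, which is why pollsters quote the same margin for Toronto as for all of Canada.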

    10. Peter - by that standard we should ignore any federal poll with a sample size of less than 35,000. That's clearly ridiculous.

      FYI mathematically the margin of error assumes an arbitrarily large population such that your sample does not constitute a significant portion of the population. That's the whole point of random sampling.

      Nanos has a better track record than Forum so this hysteria isn't warranted. Don't dismiss this poll just because you like Chow more than Tory. Forum could still be right and Nanos wrong but there's no objective way for us to know that. Check your bias.

      Also, FYI, Nanos is one of the most Liberal-leaning pollsters federally. Are you going to have the same skepticism when they show good numbers for Team Red?

    11. Ryan the results of the poll are irrelevant when your sample is so tiny and when the poll sponsor injects a question about themselves. As to winner as long as it ain't Ford I'm happy. Tory and Chow can each do a good job.

    12. Any questions related to the sponsor were asked after the ballot and issue questions, and so should have no influence on the results.

    13. Peter, you keep repeating that a sample size of 600 makes a poll irrelevant. Please go review the mathematics involved on Wiki or some other resource. Your claim is demonstrably false. Cheers.

    14. Give it a rest Ryan

      If your candidate was in second or third place you would be squeaking to !!

      Now given the Toronto population is 1 million + a sample size of 600 is simply inadequate and the math doesn't matter !!

    15. Peter this is getting embarrassing, please stop.

    16. Toronto's population is roughly 2.6 million or about equal to the four Atlantic provinces. Routinely pollsters submit numbers for the Atlantic with a sample size below 100.

  8. No wonder people don't trust polling... people come up with their own interpretations of what sample size and margin of error mean. Statistical analysis? That's for chumps, my own reasoning is better, clearly... If you want to question a poll, leave the size (600, 800, 1000, 1200) alone and go after the good stuff... because those small margins of error only hold if the sample is truly a random one. Getting a random sample that is representative of the population is the biggest problem, not the sample size.

    Replies
    1. You're right that the stated margins of error hold if and only if they reflect true sampling error from a true random sample. But of course the magnitude of the error is inversely proportional to the square root of the sample size (generally).

  9. It's hard to know these days how much to trust these kinds of opinion polls. Obtaining an adequate sample from the target population has never been straightforward, but I fear the non-response and sampling frame biases add an almost intolerable degree of unmeasurable error. I realize that pollsters like to crow about their results (read, their point estimates) being more "accurate" than others, but in practice any that produced a confidence interval in which the final result fell is equally accurate to any other, regardless of the specific point estimate.

    Another issue is the frequent reporting of strata results without the context of much higher error. For example, some polls in the recent Ontario election attempted to delineate "likely" voters from (presumably) less "likely" ones. But that didn't result in a quoted increased margin of error (which it would most certainly imply), and instead these polls end up shaping the media horse race, and probably not in an entirely benign or neutral fashion.

    As for this specific poll, its smaller sample size is a major problem where it seems that the three major candidates have differing levels of support in different demographics. But are these strata equally likely to respond (or not respond) to the poll? The non-response bias is almost definitely NOT random.

    Finally, no publicly released poll is adequately transparent about methodology. We really need to be able to see and compare non-response rates, potentially differing non-response biases, along with the questions used.

    (As a further aside, I don't understand Nanos's preference to report his point estimates to the precision of a decimal place. It may give the poll results some flavour of accuracy, but there is absolutely no validity to reporting that level of precision.)

