Sunday, February 20, 2011

Use of decimals in polls: a thought exercise

Yesterday on CBC Radio's The House, Allan Gregg of Harris-Decima expanded on what he has said recently on the At Issue Panel and in last weekend's Canadian Press piece on polling by Joan Bryden. Among other things, he spoke about reporting polls to the first decimal point.

He called pollsters who report in this way (EKOS and Nanos being the only ones who do so) "tremendously naive, or ridiculously deceitful". Things are getting personal. But competitors criticizing each other? Stop the presses!

But does he have a "point"? (snicker)

As Frank Graves of EKOS has mentioned to me, why conduct polls with larger sample sizes if you don't take advantage of their greater precision? Harris-Decima's last poll had a sample of over 3,000 people, but they rounded off their results. Yet Harris-Decima still reported a margin of error of +/- 1.8 points, a number that becomes rather useless once the results themselves are rounded.
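
For anyone curious where a figure like 1.8 points comes from, here is a minimal sketch of the usual margin-of-error formula for a simple random sample at 95% confidence, assuming the worst case of 50% support; the function name and the round sample sizes are mine, for illustration only.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random sample."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(3000), 1))  # ~1.8 points, a Harris-Decima-sized sample
print(round(margin_of_error(1000), 1))  # ~3.1 points, a typical ~1,000-person poll
```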

Let's conduct a thought exercise using Harris-Decima's most recent poll, focusing on the Conservative result. Harris-Decima reported the Conservatives at 37% support. Because that figure is rounded, however, the underlying result could have been anywhere from 36.5% to 37.4%, and with the 1.8-point margin of error that means the Conservatives could actually be anywhere between 34.7% and 39.2%. In effect, rounding increases the margin of error to roughly 2.3 points, rather than 1.8 points.
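
Here is that arithmetic as a short sketch, assuming (as above) that a reported 37% covers underlying values from 36.5% to 37.4%; the helper name is illustrative.

```python
def effective_band(reported, moe):
    """Plausible range around a whole-number result, assuming the reported
    figure was rounded from an underlying value between reported - 0.5
    and reported + 0.4 (e.g. 36.5 to 37.4 for a reported 37)."""
    low = (reported - 0.5) - moe
    high = (reported + 0.4) + moe
    return round(low, 1), round(high, 1)

low, high = effective_band(37, 1.8)
print(low, high)                            # 34.7 39.2
print(round(max(37 - low, high - 37), 1))   # ~2.3 points of effective error
```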

So, rounding off poll results gives us a far less precise and even cloudier picture of the situation. The standard poll has a margin of error of 3.1 points, which gives a rounded-off result of 37% a range of between 33.4% and 40.5% (or 33% to 41%, if we're rounding). If that result were reported as 37.0% instead, the range would be 33.9% to 40.1% (or 34% to 40%, with rounding). And that is with the exact same poll result.
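
The same comparison at the 3.1-point margin, sketched under the same rounding assumption:

```python
moe = 3.1

# Reported as a rounded 37: the underlying value could be anywhere from 36.5 to 37.4.
rounded_band = (round(36.5 - moe, 1), round(37.4 + moe, 1))   # (33.4, 40.5)

# Reported as 37.0: the band is just the stated margin of error.
decimal_band = (round(37.0 - moe, 1), round(37.0 + moe, 1))   # (33.9, 40.1)

print(rounded_band, decimal_band)
```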

Sure, an argument can be made that reporting to the first decimal point might be providing more precision than is necessary. But if polling results are going to be rounded off, perhaps reported margins of error should be as well. It certainly sounds better to say that a poll is accurate within 1.8 points rather than two points, but it would appear to be providing the same "unnecessary" precision.

To paraphrase Nate Silver, more precision is better than less. It is up to us to use that extra precision correctly, and within context.

13 comments:

  1. Allan's disdainful dismissal of decimal points is revealing. Amongst statistical experts this is a controversial issue, but Mickey Kaus wouldn't call Nate Silver an idiot for his decision to prefer decimal points. The more I read about significance and MOE, the more I believe that we need more, not less, precision. Allan may well be unfamiliar with the changes going on in views on statistical significance and the differences between repeated-measures time series and one-shot polling. In some reporting periods we have as many as 10,000 cases across the two survey periods. The MOE, which is not a constant but shrinks as one gets away from 50/50, will often be below a single point (i.e. less than 1 percent). When we are comparing across two samples, highly significant effects could be masked if we were to round. What is the point of collecting large samples if you are going to throw out the extra precision that you have purchased? And underlying all of this lies an uncomfortable sense I get from what Allan and others have said: that all surveys are so flawed that, between the low interest levels and the vagaries of crappy design, there isn't any point in this. From a science perspective that's simply untrue. There is remarkable consensus amongst experts as to what constitutes sound methodology and what doesn't. And there are track records to consider as well.
    Allan Gregg is a pillar of Canadian polling, and he has said some important things about the limits and abuses of modern polling. He has also said some things which paint an exaggerated and unfair picture of the limitations of contemporary polling. We should all welcome this debate, but we should insist on evidence and logic in determining the true limits of modern polling. At the end of the day, the issue of reporting a single decimal point is a matter of discretion. On the broader question of whether polling is a scalpel or a meat cleaver, I tend to see it more in the former terms, and think that at the very least it's a better recipe for scientific results.


    Frank Graves

  2. Fair enough, but I liked Gregg's comments, because whatever you might think about the individual bits and pieces, his larger point was that pre-election horse race polls imply an illusion of certainty that doesn't translate into reality; using decimals might somehow increase the "accuracy" of what's being reported, but it might also be meant to enhance the illusion of it.

  3. Brian,

    But it misses the point entirely; even if the pre-writ polls are not to be taken at face value, what is the point of making them more inaccurate by rounding the results? It's making the polls even more redundant than Gregg already says they are, which shows real disdain for his chosen trade.

  4. Frank Graves, if I'm not mistaken, after the 2008 election you changed your methodology to prompt "other" as an option during your surveys.

    In your latest survey you put them at 3% even though in the last election they only won 1.1% of the vote, 1% in '06, and 1.2% in '04.


    What is the point of reporting to a single decimal if your surveys seem to overestimate the historical results of "others" by 2%?

    Do you have any plans to address this obvious problem?

  5. Thanks for creating a very useful and informative blog. I have learned a lot from it. It's quite interesting, with the pie charts and articles. Thank you once again.

  6. I agree wholeheartedly with Nate's position on this. It is not the job of the pollster to protect the general public from misusing the data.

  7. On the issue of the addition of the prompted "other" category after 2008: I think this is an improvement over the way we did it before, but we are trying to deal with some tricky problems. At the top of the list in this instance is the problem of the gap between the eligible voting population and the actual voting population. Between elections our job is to model the entire eligible population. We know that only roughly 60% will actually vote in the next election. If we knew which 60% or so would vote, then we could produce results based only on the voting population (although it is still interesting and important to know how the entire eligible population feels). The problem is that the size and composition of the actual vs. eligible voter population varies from election to election.
    There are always significant gaps between what the population of eligible voters say they would do and what the population of actual voters truly do.
    The job of the pollster between elections is to model as accurately as possible the entire eligible population. S/he should comment on possible gaps between eligible and actual voters, but not try to mix up the goal of predicting the next election with the goal of modelling the current population of voters.
    So, the main challenges emanate from this problem. If I explicitly prompt for the Green Party, then I get a higher number than I would if I didn't prompt. This alone explains the main differences between what my colleague Nik Nanos finds (without prompting) and what we find. Lots of GP supporters are younger voters who don't come out as much, and I also believe that eligible GP voters are less likely to actually vote because they know that their vote is much less likely to be rewarded with an MP than other voters (even similarly sized Bloc voters).
    So we might find that 10% of eligible voters support the Greens, but only about 60 to 70% will show up and vote. If you don't prompt, you get something lower than the actual vote, let alone the eligible voter share.
    So why the "other"? What we were finding is that some GP support was not weakly motivated GP support but a "protest/none of the above" vote. So we were artificially inflating the GP vote with disaffected voters who didn't like the mainstream choices. We suspect it's higher in Alberta because of Wild Rose supporters who would like a federal option.
    In short, we included the prompted "other" because we believe it provides a more accurate model of what eligible (not the final actual) voters really want.
    It really doesn't have much to do with the decimal point issue, but I do see the point that Shadow was making. The more general point we want to stress is that, recognising that there are inevitable errors in survey research, we should do everything we can to remove the sorts of errors we can, rather than throwing up our hands and saying it's all like measuring a hot dog with a micrometer anyway. We should be as precise, explicit and scientific as we possibly can.
    Frank Graves

  8. Nanos
    39.7 / 26.6 / 18.9 / 9.9 / 4.9
    or, alternatively,
    40 / 27 / 19 / 10 / 5

  9. Final comment:
    The issue of whether or not one uses decimal points is one of the least important issues confronting polling today. It does, however, become a proxy for more profound disagreements about the relative precision achievable through polling. Regardless of one's position on this fairly benign issue, I think it's unfortunate to engage in hyperbole such as "tremendously naive, or ridiculously deceitful". I am pretty confident that neither Nik Nanos nor I deserve that depiction. I also suggest that perhaps the pursuit of greater precision does produce tangible benefits, evident in our track records (Nik should probably speak for himself on these issues).
    At the very least, perhaps as we work through the "Great Canadian Poll-Off" we could strive for a little more politesse?
    Frank Graves

  10. Replying back to Volkov, who wrote:

    "But it misses the point entirely; even if the pre-writ polls are not to be taken at face value, what is the point of making them more inaccurate by rounding the results?"

    Well, that's the paradox. Are they "more inaccurate" if the results are rounded? Given the existence of a margin of error, the number is *already* marginally inaccurate, so what difference does the decimal make to its purported accuracy?

    I'll concede that from the standpoint of disclosure, it makes sense: if Joe Pollster got a 33.2781% result, I suppose it's more forthcoming to report 33.2781% (note my "more accurate" use of four decimal places) and not 33%.

    But from the standpoint of the audience reading the results in a news story, they shouldn't care about minor shifts or details that fall within the MOE. And talk of microlevels of accuracy convinces them otherwise.

    Gregg's right - bluntly: well-trained political people using polls wouldn't care for the decimal or believe it's accurate enough to matter - I know, I was one of those people. So if I wouldn't care, why should you be encouraged to care as if the 0.7% matters?

    "It's making the polls even more redundant than Gregg already says they are, which shows real disdain for his chosen trade."

    I don't think so. I respect pollsters immensely. They've got a tough job and do it well. Alchemy, really. But I also get that a poll is a survey of constantly shifting opinions, not a statement of predictive fact. Pointing out that horse race, pre-election polls are built on soft ground is hardly disdain. On the contrary. Undermining the idea that pollsters always have pinpoint accuracy doesn't diminish a pollster's value - because at a minimum of five weeks out from any possible election scenario, pinpoint predictive accuracy isn't what you hire a pollster for in the first place. You're paying them to help you understand what buttons to push, where, and whether you're hitting them hard enough, not to sit back with a crystal ball and say "if nothing changes, this will happen tomorrow even though the election won't actually happen tomorrow."

    Any good political pollster knows this, and will say so if prompted. Gregg's amusing anger aside, I think his point was that we should be prompting that kind of candor much more, and fretting about the 0.7% margins in a 1.8% MOE much less.

  11. Again, let us decide whether we want to fret over 0.7%.

  12. Exactly what Eric said. Sure, the 0.7% may not make a huge difference, but it makes enough of a difference to say that, hey, if such and such is off by this amount, it could be the difference between a strong lead and a relatively close race. The average person who follows these polls may not necessarily care; but for people like us, who keep track of these polls and the trends, and for someone like Eric, who is doing projections off of these polls, it makes all the difference in the world. Especially in the latter case, which is where Eric is coming from: 0.7% is the difference between winning Vancouver South and losing the Kitcheners, for instance.

  13. Yesterday, I saw CTV report Nanos's 9.9% support for the BQ as 'the BQ have slipped into the single digits'.

    I think there's a significance to being 'in the single digits', one that probably means something to CTV viewers. If nothing else, the decimal-point issue matters here if you consider whether or not CTV were right to describe BQ support as 'single-digit'.

