Thursday, August 15, 2013

Interview with Darrell Bricker of Ipsos-Reid

In the last of my series of articles for The Globe and Mail on political polling methodology, I look at online polling. 

For this article I interviewed, among others, Darrell Bricker, the CEO of Ipsos Public Affairs. The transcript of the interview can be found below. It is a very interesting one, particularly on the topics of the business of polling and the role of the media.

In the past few weeks, I've posted the interviews with Don Mills of Corporate Research Associates and Frank Graves of EKOS. Over the next week, I'll also post the interviews I had with David Coletto of Abacus Data and Christian Bourque of Léger Marketing. 

308: Recently, Ipsos-Reid moved from traditional telephone polling to use of an online panel for its political polls. Why was that decision made?

DB: We’ve been considering the move to on-line for some time. That’s because the market research industry, especially in North America, now uses almost exclusively on-line data collection methods for quantitative studies. Phone is becoming a smaller part of the mix and is usually focused on either specific audiences (B2B), or calling lists. So, the investment in research platforms is going into on-line methods, and the “research on research” that’s being done is also focusing on on-line. The clincher for us was the 2012 US Presidential election – we had an opportunity to work extensively in the on-line space for Reuters and saw how strong it was in terms of sample stability and representativeness.

308: In the past, you have criticized the amount of weighting that has to be applied to online polls. What has Ipsos-Reid done to mitigate this problem?

DB: What I've been critical of is not the amount of weighting (although less is always better), it’s been the lack of disclosure about how much weighting is being done and according to which variables. But, this doesn’t just apply to on-line, it applies to all forms of data collection. As for our experience with on-line, we don’t actually do much weighting (usually just some light demographics), and we always disclose both our weighted and unweighted data.
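
To illustrate the kind of light demographic weighting Bricker describes, here is a minimal cell-weighting sketch: each respondent's weight is their group's population share divided by that group's share of the sample. The age groups and shares below are invented for illustration and are not Ipsos's actual targets or procedure.

```python
# Cell-weighting sketch: weight = population share / sample share per group.
# The age groups and shares below are invented, not Ipsos's actual targets.

population_share = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}
sample_share     = {"18-34": 0.18, "35-54": 0.34, "55+": 0.48}  # online samples often skew older

weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# 18-34 respondents are weighted up (about 1.56) and 55+ weighted down
# (about 0.77), so the weighted sample matches the population's age mix.
```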

308: What are the advantages of conducting your polls online instead of over the telephone?

DB: The biggest advantage is coverage. Over 80% of the Canadian population is now on-line. Another advantage is that we can control our sample “input” by heavying-up on hard-to-reach categories – especially with the river sampling portion of our sample frame. Also, we like the fact that we can ask longer questionnaires on-line. As you know, questionnaire length isn’t as big a driver of costs for on-line surveys as it is for telephone. Dual-frame telephone (that’s combo landline and cell) is cost prohibitive, and there’s no advantage in terms of sample accuracy, especially when non-response is taken into account.

308: What are the disadvantages?

DB: The biggest disadvantage is that on-line research in politics is still relatively new. We’re still learning every day about what potential issues might exist. BC is a good example of this – although the miss in BC was more of an issue with predicting differential party turnout than it was about a specific methodology or under-representing a specific group in the sampling. The way to solve these problems, in my view, is to follow good scientific practice – be your own worst critic and expose your errors (painful as this can sometimes be) to review by your peers and other interested parties.

308: Generally speaking, how does online polling compare to other methodologies in terms of costs and effort?

DB: To do on-line well doesn’t save a lot of money. And, the amount of effort is basically the same as any other quantitative survey method.

308: Though online polls have performed well in some recent elections, for example in the 2012 US presidential vote, the methodology struggled in this year's B.C. election. Was there anything particular to this methodology that contributed to the error?

DB: The evidence shows that this is a bit of a red herring. The issue in BC was predicting which groups of the public would vote. This was a problem for ALL methodologies. The exit poll that we did on election day (which came very close to the actual result) shows that if we had all done a better job of selecting actual voters to interview, we all (regardless of methodology) would have come closer. As for on-line excluding parts of the population that don’t have access to the Internet, the truth is that these groups (usually less affluent, more transient, etc.) are also among the least likely members of society to vote. For certain types of social and commercial research, getting to these more marginal groups is important and using on-line to get them won’t work. But, for political research this isn’t a major issue.
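
Bricker's turnout point is easy to see with a little arithmetic. In the hypothetical sketch below (the party shares and turnout rates are invented, not taken from the B.C. election), a party trailing among all eligible adults wins among those who actually cast a ballot:

```python
# Hypothetical illustration of differential turnout, not B.C. data:
# Party B leads among all eligible adults, but its supporters vote less often.

support = {"Party A": 0.48, "Party B": 0.52}   # vote intention, all eligible adults
turnout = {"Party A": 0.70, "Party B": 0.50}   # share of each party's supporters who vote

# Among actual voters, each party's share is support * turnout, renormalized.
voters = {p: support[p] * turnout[p] for p in support}
total = sum(voters.values())

for party, share in voters.items():
    print(f"{party}: {share / total:.1%} of actual voters")
# Party A wins roughly 56% to 44% among those who cast ballots, even though
# it trails 48% to 52% among all adults - the kind of gap a poll of eligible
# adults will miss without a turnout model.
```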

308: What challenges do you face in building a representative sample of the population, considering that not everyone has access to the Internet and the potential for opt-in panels to attract a different sort of respondent?

DB: We don’t just use opt-in panels for our samples – we also use a proprietary form of river sampling that intercepts participants on the Internet regardless of whether they are part of an opt-in panel. All opt-in panels have holes, and those holes are impossible to prevent (for all of the obvious reasons). That’s why the world leaders in this space use a combination of their own and other opt-in panels, and some form of river sampling. There’s a whitepaper on our website on blended sampling methods that describes what I’m talking about in detail.

308: There are debates in the industry about the problems surrounding online polling not being probabilistic, despite some good performances. Why is this, or isn't it, a problem?

DB: There are almost no probabilistic samples in any area of social science research these days. Even the ones claiming they are “probabilistic” significantly depart from the classic model and rules. In our case, we take a different approach to understanding both probability and sampling error. And, that approach borrows from the Bayesian side of statistical theory. That’s why we report a “credibility interval” instead of a margin of error with our on-line polls.  There’s another whitepaper on our website that explains how to calculate a credibility interval in detail. 
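
For readers curious about the arithmetic, the sketch below shows one common textbook construction of a Bayesian credibility interval for a polled proportion: a Beta posterior under a uniform prior. This is an illustration of the general approach, not necessarily the exact formula in the Ipsos whitepaper.

```python
# One textbook Bayesian credibility interval for a polled proportion:
# with a uniform Beta(1, 1) prior, k "yes" answers out of n respondents
# give a Beta(k + 1, n - k + 1) posterior over the true proportion.
from scipy.stats import beta

n = 1000   # sample size (hypothetical)
k = 380    # respondents choosing a given party (hypothetical)

posterior = beta(k + 1, n - k + 1)
lo, hi = posterior.ppf([0.025, 0.975])  # equal-tailed 95% interval

print(f"Point estimate: {k / n:.1%}")
print(f"95% credibility interval: {lo:.1%} to {hi:.1%}")
# Prints roughly 35.0% to 41.0% - close in width to the classical
# "margin of error" of +/- 3.1 points usually quoted at n = 1000.
```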

308: Ipsos has a long history of polling in Canada and worldwide. How has political polling changed over the years in this country?

DB: Susan Delacourt’s new book on political marketing in Canada does a great job of describing the history of political polling in our country. I’d start with that. But, the biggest change I’ve seen is the willingness of the media to publish polling without doing even the most rudimentary investigation of the pollster or their methods. Blame the lack of resources or the pressures of the 24-hour news cycle, but it’s led to an embarrassing environment in Canada that hurts polling, the media and our democracy. Want to fix it? The media needs to start demanding disclosure from pollsters and refusing to publish those who don’t comply.

308: Are there any differences between polling in Canada and elsewhere, both in terms of how polls are conducted and the challenges of polling in Canada? 

DB: The biggest difference I see in polling around the world compared to Canada is the degree to which media in other countries both value polling and are stingy about giving it coverage. For example, major media outlets in the US like Reuters, CNN, the Associated Press, and the New York Times all have polling experts on staff who strictly enforce their organization’s quality standards. They are also active players in polling – each has its own proprietary poll that it pays for and releases. This used to be the case in Canada. Now, only a couple of media outlets (including our partner, CTV) do this. As a result, some so-called “pollsters” in Canada simply shop their free results around to various media outlets until they get a bite. If the free poll is “juicy” enough (never mind being accurate or conducted according to reasonable standards), it gets published. If the poll is wrong, who is to blame? The media have the convenience of throwing the pollster under the bus. But, by then the media cycle has moved on and the “pollster” is already working on their next free release. It’s shameful, and Canadians deserve better.

By the way, while I’ve used the US as the point of comparison, I could have easily used France, the UK, Italy, Spain, Mexico, Australia or New Zealand. Ironically, we did polling in Nigeria a couple of years ago, and even there the amount of disclosure and review we went through with our media client would put most newsrooms in Canada to shame.

308: Do you have an explanation as to why Canadian media treats polling differently from other countries? Newspapers everywhere are going through the same financial issues.

DB: It is a mystery to me. It just seems that Canadian media don't really take polling seriously anymore. I know that's not entirely true, but it does seem that way. A good example is the CBC making a virtue out of not covering polls for a while. Instead of doing what the standard-setters do in other countries – which is to create a quality poll of record and challenge others to match it – they decided to abandon the field altogether. The BBC, AP, Reuters, etc. all went in the other direction.

308: How has the business of polling in general changed?

DB: The research business is in major transition. It’s funny that we get caught up in conversations about data collection methods like on-line vs. off-line – it’s almost a bit quaint. The truth is that the marketplace has already decided much of this – and on-line is winning in all markets where it’s feasible. The people who used to set the standards for what is acceptable in research, mainly governments and academics, are being supplanted by global corporations like P&G, Unilever and Coca-Cola. Outside of the US government and the EU, they are the biggest buyers of research in the world, and they are the ones setting the standards. And, the new standard is methodologically ecumenical. It’s increasingly about creating global averages, speed and direction. Whatever gets you a quick, usable answer – be it on-line surveys, social listening, qualitative research, ethnography, passive data collection or Census data – that’s what will be used.

Apart from how global packaged goods companies are redefining research, the other major trend is the domination of the research industry by a few global firms. Given the capital requirements necessary to service global clients, the big players in the market (which are mostly European) are now dominating the business, and their dominance can only grow. Global clients increasingly want global research partners who can deliver a similar level of quality in all markets. To do this, the major global players are acquiring companies in all the markets that matter. Yes, there will always be important boutiques in all markets, but their competitors will increasingly be the global players. And, the global players are smart, well financed and tough.

308: What changes, if any, need to be made to ensure that online polling produces good results in the future?

DB: On-line already produces terrific polling, so we’re not talking about a fundamentally flawed methodology. But, where things are moving is away from on-line surveys conducted via single opt-in panels. Increasingly, we’ll be seeing more blended samples that select people from wherever they can be found on the Internet. That’s where the big players are all headed. But, in all seriousness, if the competition to on-line is IVR (robocalls), I already know how this battle ends. To directly answer your question though, making on-line surveys better is no different from making any other survey better – we need to satisfy the primary rules of validity and reliability.