Wednesday, August 28, 2013

Liberals halt decline in Forum poll

Forum Research released its latest federal polling numbers via The National Post on Monday, showing the Liberals stopping the steady decrease that had been occurring in Forum polling since May. However, the poll has the same sampling issues I have identified before, as I will explain in detail below. But is it a sampling issue or a weighting scheme?
The poll itself, despite the Post's excited headlines about a Liberal surge, shows no significant change from Forum's last poll taken a month ago (when the Post was similarly effusive about how the summer was going well for the one-point-gaining Conservatives). The poll does show the Liberals rebounding from a steady decline in support, from 44% to 38% to 35% between May and July, however.

The Liberals were up three points to 38% and the Conservatives were down one point to 29%, while the New Democrats were unchanged at 22%. The Bloc Québécois was down one point to 6% and the Greens were steady at 4%. Another 1% said they would vote for another party, and about 4% of the entire sample seems to have been undecided (unchanged). None of these shifts appear statistically significant (assuming a good random sample, see below).
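For readers who want to check that claim, here is a minimal sketch of the arithmetic, assuming simple random sampling and a hypothetical sample of 1,500 respondents per poll (Forum's actual sample sizes vary):

```python
import math

def moe_of_difference(p1, p2, n1, n2, z=1.96):
    """Approximate 95% margin of error on the difference between two
    independent poll proportions, assuming simple random samples."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return z * se

# Liberals: 35% a month ago vs. 38% now, with a hypothetical n = 1,500 each
change = 0.38 - 0.35
moe = moe_of_difference(0.35, 0.38, 1500, 1500)
print(f"change = {change:+.3f}, MOE on the difference = {moe:.3f}")
# A roughly 3.4-point margin exceeds the 3-point Liberal shift.
```

The same check applies to the smaller movements for the other parties.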

The Liberals had a four-point edge among men and an 11-point advantage among women, while they were ahead among voters between the ages of 35 and 64. Oddly enough, Forum put the Tories in front among both the oldest and youngest Canadians. They've bridged the generational gap!

Regionally, the only significant shifts occurred in British Columbia and Ontario. In B.C., the NDP was up 11 points to 37% (an unusual, but not recently unheard-of number) while the Liberals trailed at 31%. The Conservatives were down 11 points to just 22% in the province, the lowest they've been here in any poll since an April survey by Harris-Decima. The Greens were up to 10%.

In Ontario, the Liberals jumped seven points to 43%, while the Tories slipped to 34%. The New Democrats fell six points to 17% support. Of note, Forum is the only firm to have put the NDP under 19% in the province since an Innovative poll from March 2012.

Elsewhere, the Liberals led in Quebec with 38% and Atlantic Canada with 46%, while they placed second in the Prairies with 35% and third in Alberta with 18%. The Conservatives were in front in Alberta (55%) and the Prairies (42%). The Tories were at 14% in Quebec (the third consecutive poll to put them at that number) and 28% in Atlantic Canada. The NDP was at only 24% in Quebec, 23% in Atlantic Canada, 21% in the Prairies, and 20% in Alberta.
That imbalance between B.C. and the rest of the country for the New Democrats means that the party would win a plurality of its seats in that province, shifting the NDP's focus from Quebec to the west coast. The Liberals would win around 144 seats with Forum's numbers, while the Conservatives would win 118, the NDP 62, the Bloc 12, and the Greens two (only just).

If we use CROP's regional distribution for Quebec, the seat tally in that province changes to 48 for the Liberals, 12 for the Bloc Québécois, 10 for the NDP, and eight for the Conservatives. That shifts the national totals to 155 for the Liberals, 117 for the Tories, and 52 for the NDP (generally speaking, using CROP's regional distribution swings about 10 seats from the NDP to the Liberals).

Both Justin Trudeau and Thomas Mulcair showed improvement in their personal numbers, with Trudeau's approval rating up five points to 49% and Mulcair's up four points to 37%. Mulcair's disapproval rating fell four points to 33%, while Trudeau's 'don't know' score dropped three points to 18%. Stephen Harper's approval rating was down to 30%, his disapproval rating up to 62%.

Harper is popular among his own supporters, with 85% approval, as is Trudeau (88% among Liberals). Mulcair has only a 58% approval rating among NDP voters, however. This carries over to the "Best Prime Minister" question, as only 46% of New Democrats actually chose Mulcair as the best option for PM, compared to 71% of Liberals who selected Trudeau and 82% of Conservatives who chose Harper. Overall, Trudeau was well ahead with 32% to 24% for Harper, 16% for Mulcair, and 9% for Elizabeth May. If we take out the "none of the aboves" and "not sures", the three leaders generally line up with decided party support. May, however, appears to be about twice as popular as her party.

The sample

Now let's take a look at Forum's IVR sample, focusing on the distribution of respondents by age.
As you can see, the same problem that has been identified before in Forum's polling continues. The sample is far too old: 61% of respondents are over the age of 55, almost double what it should be. To get the proportion of younger voters right, their sample needs to be more than tripled - magnifying any errors that are bound to creep in when the sample is so small. The sample of older voters needs to be cut in half, wasting a lot of the extra precision that is gained from over-sampling that group.
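To illustrate what this kind of rebalancing costs in precision, here is a minimal sketch using the Kish design-effect approximation. The 61% over-55 figure comes from the poll above; the rest of the age breakdown and the census shares are assumptions for illustration:

```python
# A minimal sketch of how weighting an age-skewed sample shrinks its
# effective size (Kish approximation). Only the 61% over-55 share comes
# from the poll; the other shares are illustrative stand-ins.

sample_share = {"18-34": 0.09, "35-54": 0.30, "55+": 0.61}
population_share = {"18-34": 0.28, "35-54": 0.37, "55+": 0.35}

weights = {g: population_share[g] / sample_share[g] for g in sample_share}
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")  # young voters weighted up ~3x

# Design effect from weighting: mean(w^2) / mean(w)^2 across respondents,
# computed here from the group shares.
mean_w = sum(sample_share[g] * weights[g] for g in weights)      # = 1.0
mean_w2 = sum(sample_share[g] * weights[g] ** 2 for g in weights)
deff = mean_w2 / mean_w ** 2

n = 1500  # hypothetical raw sample size
print(f"design effect = {deff:.2f}, effective n = {n / deff:.0f}")
# ~1,500 interviews carry the precision of roughly 1,000 respondents.
```

Put differently, even if the weighting hits the census targets exactly, a skewed sample pays for it with a wider effective margin of error.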

For comparison, let's take a look at the sample of Nanos's recent poll. Nanos conducted its polling with live callers, and included detailed breakdowns with its Bell Canada/Telus poll (we are not usually treated to such detail with its regular national polls, unfortunately).
Nanos has some of the same problems in getting younger people on the phone, as seems to be endemic to the industry, but not nearly to the same degree as Forum. Instead of having to triple the sample of young voters, Nanos has to less-than-double it. Instead of cutting the sample of older voters in half, Nanos only has to trim it by about 25%. Nanos's sample is much more representative.

To be fair, Forum has had some decent performances in recent elections. But that may be chalked up to the firm having a good turnout model. While that might be useful for our purposes, what happens when the assumptions that model is based upon turn out to be wrong? In the United States, this is what had Mitt Romney certain of victory even when the national polls showed he would lose.

But are we looking at Forum's model in these sample sizes? The proportions are remarkably similar to its national poll from May - the 35-44s decreased to 11% from 12% of the sample, and the oldest cohort increased from 33% to 34%. Is Forum showing how it weights the sample by age, or is it just an unusual coincidence? The sample sizes for household income are not the same as in the May poll, though, so it is possible that we are not looking at a weighting scheme. Unless, for instance, Forum changed how they weight by income after analyzing the results of the B.C. election.

In any case, this is something to keep in mind when looking at a poll. It should not invalidate its results, though, as Forum's sampling issues are actually quite consistent. That makes it possible to look at the trends from poll to poll. And, in that sense at least, things look to be settling in for the Liberals.

Monday, August 26, 2013

Liberals still ahead in new Nanos poll

The poll went under the radar, but a sharp-eyed reader pointed out that the latest poll from Nanos Research for Bell Canada and Telus, on the subject of telecommunications, contained voting intentions information. From the looks of the poll report, the voting questions were asked right at the beginning, suggesting there is no reason to consider the sample biased despite the results' provenance from a privately commissioned poll. Add to that the fact that Nanos's numbers are remarkable in their unremarkableness, and we have ourselves the first national political poll worth looking at in a month.
Nanos was last in the field in mid-June, though that poll was conducted online. It is hard to know exactly how to compare these two polls, since the methodology of the last one included phoning people up to invite them to complete the online survey, blurring the line between an online and a telephone poll. This poll, however, was done entirely over the telephone.

If we do compare the two polls at face value, we see that there have been no statistically significant shifts of support since June for any of the parties in any of the regions, or nationally. It is the status quo, though there are a few small trends to keep an eye on.

Nationally, the Liberals were up 1.1 points to 35.3%, where they seem to have settled after the heady days right after Justin Trudeau's leadership victory. The Conservatives were up 2.5 points to 31.9%, their best result in any poll since an Ipsos-Reid survey from April. The New Democrats were down 2.5 points to 22.8%, while the Greens were down 0.5 points to 5.9%. The Bloc Québécois was down 1.2 points to only 2.5%, and 1.6% of respondents said they would vote for another party.

Undecideds were about 22% of the entire sample, up three points from June.

This is a more detailed poll from Nanos than we usually get, and it has some interesting breakdowns. For instance, the Liberals were only up on the Conservatives by one point among men (34% to 33%, with the NDP at 22%) but were ahead by five points among women (36% to 31%, with the NDP at 24%). The Liberals led among voters over the age of 30, and were even four points up on the Conservatives among voters 60 and older.

Regionally, the results fall well into line with what other surveys were showing in July.

In British Columbia, the Liberals were narrowly ahead with 31.7% to 30.8% for the Conservatives and 25.4% for the NDP, with the Greens at 11.8%. Of note, though, is that the Conservatives have been consistently slipping: they were at 37.5% in Nanos's previous poll from April.

The Conservatives led with 56% in Alberta to 23% for the Liberals and 9.9% for the NDP, and were also ahead in Saskatchewan and Manitoba with 40.5% and 44.3%, respectively. In Saskatchewan, the NDP was second with 30.4% to the Liberals' 24.6%, while in Manitoba the Liberals were second with 38.6% to 14.2% for the NDP. These Saskatchewan and Manitoba numbers align broadly with the other surveys we have seen for these two individual provinces.

In Ontario, the Liberals were at 37.2% while the Conservatives were up to 33.9%. The NDP was well behind with 21.2%. They were closer in Quebec, however, with 29.8% support to 38.5% for the Liberals. But the NDP has slipped in three consecutive Nanos polls in Quebec, while the Liberals have gained in three consecutive polls. The Conservatives were at 14% (identical to the recent CROP poll) while the Bloc was down to 12.2%. While that is a very low number, Nanos has had the BQ lower than other polling firms for some time. In fact, they had them at 9.6% in the months after the 2011 election, the only time the party has ever been in single digits.

The Liberals led in Atlantic Canada with 43.5%, followed by the Conservatives at 29.3% and the NDP at 26.7%. The two parties swapped places and about eight points, but that was within the margin of error.
These numbers would give the Liberals a narrow plurality of 132 seats, compared to 129 for the Conservatives, 75 for the New Democrats, and two for the Greens (the Bloc would be wiped out). The Liberal victory is won primarily in Ontario, where a number of seats go to the party by a razor-thin margin.

But as I discussed on Friday, it is possible that these national polls will understate the Liberal seat potential in Quebec. As you can see here, even with an 8.7-point lead over the NDP, the Liberals win fewer seats in Quebec than do the New Democrats. If we apply CROP's distribution of regional support to Nanos's province-wide numbers for Quebec, though, we get a very different result:
The Liberals win 42 seats in Quebec instead of 33, and the NDP drops from 37 to 28. It doesn't change the number of non-Conservative seats in the province, but it does turn the Liberals' narrow plurality into a much more comfortable one, making the close wins in Ontario less important.

Either way, the Liberals appear to be in control of the situation. The numbers have not budged yet over the summer, with the Liberals still in the mid-30s and the Conservatives still under 1-in-3 support. It will be interesting to see if anything will shake these numbers loose over the next few months.

Friday, August 23, 2013

Why forecasting Quebec's Liberal seats in 2015 won't be easy

Sure enough, after La Presse put out the numbers for CROP's latest provincial poll on Wednesday, the inevitable federal Quebec numbers followed yesterday. They showed the Liberals continuing to be well ahead in the province, with the New Democrats taking a step backwards to a new low in CROP polling. The Conservatives were up significantly, but are still below where they stood on election night in 2011.
I wrote about this poll in my latest piece for The Huffington Post Canada, so rather than re-hash the numbers I suggest you head over there.

To briefly cover the shifts since CROP's last poll from mid-June, the NDP was down five points and the Conservatives were up six, while the Liberals were down an insignificant single point and the Bloc Québécois was up two.

Regionally, the shifts worth noting were a spike for the Liberals in the Montreal suburbs, NDP drops among non-francophones and on the island of Montreal, and Conservative gains among francophones, in Montreal, and in Quebec City. On who would make the best Prime Minister, Justin Trudeau was up two points to 31%, Thomas Mulcair was down eight points to 23%, and Stephen Harper was up two points to 12%.

What I'd like to discuss in detail, however, is the challenge that 2015 will pose in forecasting the seats the Liberals can win in Quebec.

Since Trudeau boosted the Liberals into first place in the province, this site has only rarely given them the number of seats one would expect with a double-digit lead. I have attributed this to the lack of a Liberal base outside of Montreal, which limits the projection model's ability to move seats over to the Liberals. When the party was under 10% in a riding - which was very common - tripling its vote still only increases its support in that riding to 30% or less. That is not enough to win.

When a party goes from a very low level of support to a very high one, projection models can give wonky results. But not necessarily - despite the New Democrats surging from 12% to 43% in Quebec between the 2008 and 2011 elections, the projection model was able to handle that without issue. With the right province-wide numbers plugged into it from 2011, the model would have given the NDP 60 seats (they actually won 59).

Why was the model able to do this? Because the NDP went from uniformly low support in every region of the province to uniformly high support. By my rough estimate, the NDP had between 11% and 14% support in 2008 in each of the four regions defined by CROP in the poll above.

That uniformity continued into 2011. The New Democrats took about 38% of the vote on the island of Montreal, 47% in the surrounding suburbs, 40% in Quebec City, and 44% in the rest of the province. Across the board, support for the party tripled or quadrupled. With that sort of uniformity, the projection model had no trouble with it, particularly since support for the other parties decreased so much.

It is a completely different story for the Liberals. They had virtually no support in Quebec City in 2011, with only 7% of the vote. They did not fare much better in the regions of Quebec with 10%, but took 14% in the suburbs of Montreal and 27% of the vote on the island of Montreal itself.

What that means is that, with CROP's latest poll, we have the Liberals quadrupling their vote in Quebec City and the regions of Quebec and tripling it in the suburbs, but less than doubling it on the island of Montreal. That causes trouble for a swing model. With an increase from 14% to 41% province-wide, the model triples the Liberal vote everywhere. In other words, it increases Liberal support by too much on the island of Montreal (wasting a lot of votes for the Liberals) and not enough outside of the city and its suburbs (not giving the party enough votes).
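To make the problem concrete, here is a minimal sketch of uniform proportional swing, using the 2011 Liberal regional results quoted above. The regional vote weights are rough illustrative shares, not CROP's actual ones:

```python
# Uniform proportional swing: scale every region by one province-wide ratio.
# The 2011 Liberal results are from the post; region weights are guesses.

base_2011 = {"Montreal island": 0.27, "Montreal suburbs": 0.14,
             "Quebec City": 0.07, "Rest of Quebec": 0.10}
region_weight = {"Montreal island": 0.25, "Montreal suburbs": 0.25,
                 "Quebec City": 0.10, "Rest of Quebec": 0.40}

old_provincial = sum(base_2011[r] * region_weight[r] for r in base_2011)
new_provincial = 0.41  # CROP's province-wide Liberal number
swing = new_provincial / old_provincial  # one ratio, applied everywhere

for region, share in base_2011.items():
    print(f"{region}: {share:.0%} -> {share * swing:.0%}")
# The island is pushed to 74%, well beyond its plausible ceiling, while
# Quebec City and the rest of Quebec are left short.
```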

To address this problem, I worked on the projection model to make it capable of taking into account the regional data that polling firms like CROP and Léger Marketing routinely release for the province of Quebec. Using this regional data makes a big difference.
(Seat projection for Quebec)
Using only CROP's province-wide numbers, the model would give the Liberals 36 seats to 33 for the New Democrats, eight for the Conservatives, and one for the Bloc Québécois. When using their regional data, however, the Liberals are boosted to 46 seats while the New Democrats fall to 23. It means the model is able to take into account the Liberals' disproportionate growth in support outside of the Montreal area.
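The regional version of the calculation is a small change: compute one swing ratio per region from the poll's own breakdown and apply it to the ridings of that region. In this sketch the current regional figures are placeholders, not CROP's published numbers:

```python
# Regional swing: one ratio per CROP region instead of one for the province.
# The "current" regional numbers below are hypothetical placeholders.

base_2011 = {"Montreal island": 0.27, "Montreal suburbs": 0.14,
             "Quebec City": 0.07, "Rest of Quebec": 0.10}
current = {"Montreal island": 0.46, "Montreal suburbs": 0.42,
           "Quebec City": 0.28, "Rest of Quebec": 0.40}

regional_swing = {r: current[r] / base_2011[r] for r in base_2011}

def project_riding(region, liberal_2011):
    """Scale a riding's 2011 Liberal share by its region's swing ratio."""
    return liberal_2011 * regional_swing[region]

# A riding in the rest of Quebec where the Liberals took 9% in 2011:
print(f"{project_riding('Rest of Quebec', 0.09):.0%}")  # 36%: competitive
```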
The variations between the two sets of projections occur throughout Quebec. For the Liberals, using province-wide data would under-estimate their seat count in the regions by 10, while over-estimating the seats the NDP could win in Quebec City (by four) and the regions (by seven).

Using the regional data, the Liberals would win 14 of 18 seats in Montreal and nine of 17 in the surrounding suburbs, while also winning 21 in the regions of Quebec. They would even win two in Quebec City (in close three-way races).

The New Democrats take 11 seats in the regions, eight in the Montreal suburbs, and only four on the island, while the Conservatives win six seats in Quebec City and two in the regions. The Bloc retains only one seat.

The conventional wisdom that the Liberal vote will be inefficient in Quebec may not be as wise as it seemed. CROP has consistently shown that the Liberals have more than a little life among francophone voters outside of Montreal, and that they do have the ability to win seats outside of their traditional bases. And if the Liberals look like a winning party in the province, they will have an easier time attracting quality candidates, making it in turn easier to win.

But the disproportionate shift in support in Quebec will make forecasting the results in the province in 2015 quite difficult. On the one hand, most polling firms do not release data this detailed for the province, meaning that swings based only on the province-wide results should under-estimate the number of seats the Liberals will win and over-estimate the number that could go to the NDP. On the other hand, projections calculated with the regional numbers from CROP and Léger Marketing, while having the potential to be more accurate, will be based on small sample sizes and so will be more prone to errors. And unlike the national numbers, where there are similarly small samples for the Prairies and Atlantic Canada, there will not be as many polls to smooth out the noisy data. CROP, for instance, did not report in the final 12 days of the campaign in 2011 and Léger only put out national numbers.

This will be something to keep an eye on. The Liberals have retreated to the cities in the last few elections, but in Quebec they are gaining support in the countryside. Regional data for other provinces, particularly Ontario and British Columbia, would be especially useful to determine whether the gains the Liberals have made are similarly disproportionately distributed. There are a number of reasons why the 2015 election is looking like a difficult one to forecast. This might be the most important one.

Wednesday, August 21, 2013

Liberals still hold comfortable lead as PQ gets boost

It seems like it has been ages since the last federal or provincial poll has been published, but this morning La Presse and CROP have given us something to sustain ourselves on for the next few days. The poll shows that Philippe Couillard's Quebec Liberals still remain well ahead of the pack in the province, but also that Pauline Marois and the Parti Québécois have received a boost in support.
CROP was last in the field in mid-June, and since then both the PLQ and PQ have experienced a gain in support. Neither gain is outside the margin of error, however (if this were a probabilistic sample). The Liberals were up two points to 40% while the Parti Québécois was up four points to 29%, just flirting with a statistically significant boost.

The Coalition Avenir Québec fell two points to 20%, while Québec Solidaire was down four points to 7% - a drop that is outside of the margin of error. Option Nationale and other parties were at 2% apiece, while 11% were undecided and another 6% did not give a response for one reason or another.

The PQ's gain has been attributed to the government's good handling of the Lac-Mégantic tragedy, so this boost is somewhat expected. But it is interesting how the numbers have improved across the board on other measures for the PQ: satisfaction was up six points to 34%, Marois was up eight points on who would make the best premier to 19%, and support for sovereignty was up six points to 40%, the highest it has been in some time.

Nevertheless, the Liberals remain in control. Couillard had 26% on the premier question, unchanged from June, and the Liberals were ahead in every region of the province except Quebec City.

The Liberals had 93% support among non-francophones and 28% among francophones. They were up nine points to 51% in the Montreal region - and more specifically led with 55% on the island of Montreal and 45% (a gain of 12 points) in the surrounding suburbs. They were narrowly up on the PQ in the regions of Quebec with 32%, but fell 13 points in and around Quebec City to 27%. That is below even their 2012 election total.

The PQ was up five points among francophones to 35%, but experienced no other significant boosts in support. They were up on the island of Montreal to 30%, in Quebec City to 22%, and in the regions to 31%, however, while they dropped in the suburbs of Montreal to 26%.

The CAQ was down throughout the province except around Quebec City. This is a bad poll for the CAQ, and it makes it difficult to believe that François Legault is in any hurry to help Couillard bring down the government. The party was down to 24% among francophones and only 3% among non-francophones, dropped six points in the Montreal RMR to 12%, and was down to 6% on the island and 20% in the suburbs. The party was down to 23% in the regions, but did experience an 11-point gain to lead in Quebec City with 42%. Legault was down only one point to 16% on the premier question, though that did put him behind Marois.

Québec Solidaire seems to have suffered the most at the hands of the PQ, as they were down to only 6% on the island of Montreal. That is their main region of support, and it puts both of their seats at risk. Option Nationale seems to be dropping back due to Jean-Martin Aussant's resignation, though it appears CROP kept his name in the survey.
In terms of seats, the disproportionate concentration of support for the Liberals in and around Montreal means they do not win the kind of majority one would expect with an 11-point lead. Instead, they barely squeak by with 63 seats, 39 of them in the Montreal region.

The Parti Québécois drops to 48 seats, while the CAQ is reduced to 14. Québec Solidaire is shut out, due to the PQ's relatively stronger showing on the island of Montreal.

It is possible that with such a large lead, the PLQ would be able to pull a few more seats out of these numbers than the 63 they are given here. But the Liberals have traditionally had a less efficient vote due to their weakness among the francophone electorate, which is consistent with this poll. The CAQ seems to be stuck at around 20%, quite a bit lower than the 27% the party took in 2012.

For Pauline Marois and the Parti Québécois, these numbers are heading in the right direction but the party is still badly positioned to face voters. A government under 30% is in a very sorry state, particularly when the main opposition party is ahead by double-digits. It seems that the CAQ, despite its lacklustre performance, is set on bringing down the government in the spring. One would expect the PLQ to come down a little from their current honeymoon, and that should benefit Legault more than Marois. But Couillard's numbers are improving, not retreating. How the numbers move this fall will be interesting to watch.

Monday, August 19, 2013

Interviews with Christian Bourque and David Coletto

To finish off the series of pollster interviews related to my articles on polling methodology for The Globe and Mail, today we talk to Christian Bourque, Executive Vice-President of Léger Marketing, and David Coletto, CEO of Abacus Data.

Previously, I spoke with Don Mills of Corporate Research Associates (who should be very active in the upcoming Nova Scotia election), Frank Graves of EKOS Research, and Darrell Bricker of Ipsos-Reid.

Léger Marketing has been an active political polling firm in Quebec for a very long time, but has moved over to online in the last few years. This is what Christian Bourque had to say about the methodology:

308: Léger Marketing has used online polling for some time now. Why was the decision made to move over to that methodology?

CB: We made the decision based on the fact that we control the sample. It is our panel. We control it from recruitment to data collection to data cleaning. We started the panel in 2004 and felt comfortable using it for political polling after almost three years of comparative polling telephone to Web. Really, we have focused on Web-based electoral polling since 2007.

308: What are the advantages of conducting your polls online instead of over the telephone?

CB: Cost, timing (which is crucial) and no social desirability bias. We can get up and running faster if the demand from the client requires quicker turnaround. Rushing telephone projects can mean “burning” samples to quickly fill quotas. Given that media clients do not have a lot to spend on polling, we have been able to produce larger samples at a fair cost compared to telephone. The “honesty” factor also works in favour of the Web.

308: What are the disadvantages?

CB: Potential differences between panel members and the general public are always something we need to control for. Our panel is over 70% RDD recruitment coming from our call center, so we are already very confident about the source. Panelists here are profiled at length so we can compare them, not only on socio-demographics but also on technology variables and health-related profiling questions - we could even weight on beer brand to conform to market share statistics if we wanted to.

308: How is your online panel recruited and what steps do you take to ensure the sample is representative?

CB: Most of the panel is recruited from our telephone studies and telephone-based recruitment. It has a higher cost compared to online recruiting but generates more reliable and loyal panelists. We profile on over 90 variables over time, so we get a good grasp on who these panelists are, and we stratify samples at the invitation stage to ensure that the output will not require important corrections or weighting. We also have a data cleaning and data quality protocol that gets rid of speedsters, straightliners and potential fraudsters.

308: What challenges do you face in building a representative sample of the population, considering that not everyone has access to the Internet and the potential for opt-in panels to attract a different sort of respondent?

CB: 86% of Canadians went online last week. That’s more than the share of households that have a landline phone. In a market of rapidly decreasing response rates over the telephone, no methodology should feel it can take the high road and look down on the others.

308: There are debates in the industry about the problems surrounding online polling not being probabilistic, despite some good performances. Why is this, or isn't it, a problem?

CB: Compare the results of probabilistic vs. non-probabilistic polls in BC, Alberta, Ontario, Quebec and the last few federal elections and you will not find a clear pattern of who has done best or worst. You either feel very strongly about one or the other, but it comes down to faith and preferences more than any clear conclusions one can reach from historic data.

308: Léger Marketing has a very long history of polling in Quebec. How has political polling changed over that time?

CB: Like everywhere else, declining participation in elections is making our work more challenging. How do we account for the 30% to almost 50% who simply do not show up on election day? Should we move to “likely voter” models in Canada? If you cross-tabulate participation by age and voting intent by age, you can explain most of the differences between polls and election results in the recent past (except Alberta). But age is not the only factor. Disengagement and cynicism need to be factored in too, outside of age.

When doing comparative polling or comparing our historical results and those of the competition, differences between phone and web tend to be rather small and not necessarily consistent over time. We found that only weighting by age and sex will tend to produce slightly more left-leaning results on the Web. We have been using a more complex weighting scheme over the past six years to account for that (education, income and household composition are now factored in).

I'd also like to add the following: Given the recent critique and, some would say, controversy in the market, I believe we, as an industry, should come to agree to greater disclosure mechanisms to allow the industry to develop a better understanding of the changing landscape out there. This would benefit us all.


**************

Abacus Data is a relative newcomer to the political polling world, having first appeared in 2010. Abacus uses an online panel for its polling, but has used other methodologies in the past. I thought David Coletto could give us an interesting perspective as he has worked in the industry in an era where online polling was always an option, as opposed to the other major players who cut their teeth in the telephone-only age.

308: Why has Abacus settled on using online panels for political polling, and what are the advantages and disadvantages of the decision?

DC: Abacus Data decided to use online panels exclusively for political polling because we concluded that the advantages outweighed the disadvantages.

Advantages
Online panel research provides for a variety of advantages over live telephone or IVR research. Online research allows a large number of respondents to be contacted simultaneously, meaning the study can be completed much faster than with other methods. Also, the nature of online design allows for great flexibility in the visual appearance of the survey and in question design. Such design aspects allow for the creation of scale questions, visual sliders, drag and drop, or even the presentation of audio and video to respondents.  Further, online research allows for broad flexibility in sample design to target groups of respondents along virtually any screening criteria. Finally, online research is considerably more affordable than telephone or IVR, making it attractive for smaller firms and repeat projects.

Disadvantages
As online polling involves drawing sample from a panel of potential respondents, it does not constitute the entire population and therefore cannot be considered a true random sample – this is the primary disadvantage of online polling. Further, with online research it is currently not possible to verify that the respondent is exactly who they claim to be. While this is also true of IVR, it is much easier to verify an identity with a live telephone interview. Online polling can also result in certain coverage bias, especially among lower income and older groups.

308: What are the costs of online polling compared to over the telephone?

DC: The capital costs of setting up and maintaining a call center are high, just as building a panel is expensive. However, licensing costs of online are much more affordable than contracting out to a call center.  Online allows us to be a full service firm in house and still be small, meaning Abacus is able to control the research process from beginning to end.

Actually carrying out the online research requires somewhat less effort than live telephone, as there is no need for a paid bank of phone interviewers. More importantly, online has far more quality control, as all responses can be easily monitored as they arrive. Further, there is no need to observe or control for interviewer bias.

308: What are the challenges faced in building a representative sample?

DC: There are challenges, but they are not around the representativeness of demographic or regional variables. Rather, they concern psychographic representation: interest in politics, political participation, and engagement in public issues are likely greater for anyone who answers a survey. However, large panel management firms have a vested interest in maintaining quality panels and ensuring that samples are as representative as possible.

308: There are debates in the industry about the problems surrounding online polling not being probabilistic, despite some good performances. Why is this, or isn't it, a problem?

DC: It is a problem, but it is something that all survey firms must face. Probability issues become more significant when respondents are over-surveyed, meaning they change their behaviour or attitudes because they are surveyed often. We have an in-house policy to screen out frequent survey participants. Abacus tries to solve the problem by making the sample as representative as possible, using minimal weighting, weeding out frequent survey takers, and using high-quality, large panels.

The problem with probability sampling extends to telephone surveys, however, with large portions of the population refusing to answer surveys, whether because of increased call screening or a simple refusal to respond.

The growing use of cell phones and internet-based calling will continue to make telephone surveys more difficult, more expensive, and therefore less representative.

308: What role does weighting play in good polling?

DC: Although weighting plays an important role in helping us to make our samples representative of the population, our data is not heavily weighted. 

We use balanced sampling and interlocking quotas, similar to a stratified sampling strategy, to ensure that the respondents captured are as representative as possible and heavy weighting is not required.

Weighting is particularly challenging for IVR polling, because certain demographics are more likely to answer the phone. As a result, IVR surveys are biased towards women and older demographics.

308: What are the challenges involved in building a representative sample of voters, rather than just of the entire population?

DC: The number one challenge is trying to predict who will actually vote, as people are less likely to admit that they do not vote, or that they don’t plan on voting. 

Further, we know that those who answer surveys are likely to be more engaged than those who don’t. 

To address these challenges, online research allows us to use varied question types and measure likelihood to vote in different ways. By being transparent with the models and trying to forecast what the electorate will look like versus the general population, Abacus attempts to be as clear and accurate as possible.

In Canada, this problem is evolving. As a result of BC and Alberta, we are taking this issue seriously and will test a number of models in the next election we participate in.

308: As a relatively new polling firm, what challenges did you face in getting into the market?

DC: We face a small-c conservative industry that is averse to change and innovation and, quite frankly, is a little threatened by a new crop of researchers like us who are testing the established ways.

I think early on we established our credibility by demonstrating that our online research methodology could accurately forecast the 2011 federal election and 2011 Ontario provincial election. 

We, like many other pollsters, failed to really understand what was going on in Alberta, using a methodology we no longer use (IVR). In BC, our only poll was conducted before the leaders debate, so our performance there is difficult to judge.

308: How is the business of polling evolving?

DC: I foresee more small players emerging.

The business of polling will be completely online. Within the next 10 years, nobody will answer their phone unless they know who is calling, if we are using telephones at all.

So, the industry needs to perfect and refine how we conduct internet surveys. As Google is showing us, there will be new ways to generate sample that are only emerging as alternatives now, many of which lean towards indirectly observing behavior rather than asking direct questions.

308: What has to be done to ensure that online polling can produce good results in the future?

DC: If you mean being able to predict elections, the question is predicting who is going to turn out to vote. I do think online polling is producing good results now for our clients, whether it’s testing new marketing concepts or public support for policy proposals, or the potential for new product success.

Thursday, August 15, 2013

Interview with Darrell Bricker of Ipsos-Reid

In the last of my series of articles for The Globe and Mail on political polling methodology, I look at online polling. 

For this article I interviewed, among others, Darrell Bricker, the CEO of Ipsos Public Affairs. The transcript of the interview can be found below. It is a very interesting one, particularly on the topics of the business of polling and the role of the media.

In the past few weeks, I've posted the interviews with Don Mills of Corporate Research Associates and Frank Graves of EKOS. Over the next week, I'll also post the interviews I had with David Coletto of Abacus Data and Christian Bourque of Léger Marketing. 

308: Recently, Ipsos-Reid moved from traditional telephone polling to use of an online panel for its political polls. Why was that decision made?

DB: We’ve been considering the move to on-line for some time. That’s because the market research industry, especially in North America, now uses almost exclusively on-line data collection methods for quantitative studies. Phone is becoming a smaller part of the mix and is usually focused on either specific audiences (B2B), or calling lists. So, the investment in research platforms is going into on-line methods, and the “research on research” that’s being done is also focusing on on-line. The clincher for us was the 2012 US Presidential election – we had an opportunity to work extensively in the on-line space for Reuters and saw how strong it was in terms of sample stability and representativeness.

308: In the past, you have criticized the amount of weighting that has to be applied to online polls. What has Ipsos-Reid done to mitigate this problem?

DB: What I've been critical of is not the amount of weighting (although less is always better), it’s been the lack of disclosure about how much weighting is being done and according to which variables. But, this doesn’t just apply to on-line, it applies to all forms of data collection. As for our experience with on-line, we don’t actually do much weighting (usually just some light demographics), and we always disclose both our weighted and unweighted data.

308: What are the advantages of conducting your polls online instead of over the telephone?

DB: The biggest advantage is coverage. Over 80% of the Canadian population is now on-line. Another advantage is that we can control our sample “input” by heavying-up on hard to reach categories – especially with the river sampling portion of our sample frame. Also, we like the fact that we can ask longer questionnaires on-line. As you know, questionnaire length isn’t a big driver of costs for on-line surveys as it is for telephone.  Dual-frame telephone (that’s combo landline and cell) is cost prohibitive, and there’s no advantage in terms of sample accuracy, especially when non-responses are taken into account.

308: What are the disadvantages?

DB: The biggest disadvantage is that on-line research in politics is relatively new. We’re still learning every day about what potential issues might exist. BC is a good example of this – although the miss in BC was more of an issue with predicting differential party turnout than it was about a specific methodology or under-representing a specific group in the sampling. The way to solve these problems, in my view, is to follow good scientific practice – be your own worst critic and disclose your errors (painful as this can sometimes be) for review by your peers and other interested parties.

308: Generally speaking, how does online polling compare to other methodologies in terms of costs and effort?

DB: To do on-line well doesn’t save a lot of money. And, the amount of effort is basically the same as any other quantitative survey method.

308: Though online polls have performed well in some recent elections, for example in the 2012 US presidential vote, the methodology struggled in this year's B.C. election. Was there anything particular to this methodology that contributed to the error?

DB: The evidence shows that this is a bit of a red herring. The issue in BC was predicting which groups of the public would vote. This was a problem for ALL methodologies. The exit poll that we did on election day (which got the results very close) shows that if we had all done a better job of selecting actual voters to interview we all (regardless of methodology) would have come closer. As for on-line excluding parts of the population that don’t have access to the Internet, the truth is that these groups (usually less affluent, more transient, etc) are also among the least likely members of society to vote. For certain types of social and commercial research getting to these more marginal groups is important and using on-line to get them won’t work. But, for political research this isn’t a major issue.

308: What challenges do you face in building a representative sample of the population, considering that not everyone has access to the Internet and the potential for opt-in panels to attract a different sort of respondent?

DB: We don’t just use opt-in panels for our samples – we also use a proprietary form of river sampling that intercepts participants on the Internet regardless of whether they are part of an opt-in panel or not. All opt-in panels have holes. They are impossible to prevent (for all of the obvious reasons). That’s why the world leaders in this space use a combination of their own and other opt-in panels, and some form of river sampling. There’s a whitepaper on our website on blended sampling methods that describes what I’m talking about in detail.

308: There are debates in the industry about the problems surrounding online polling not being probabilistic, despite some good performances. Why is this, or isn't it, a problem?

DB: There are almost no probabilistic samples in any area of social science research these days. Even the ones claiming they are “probabilistic” significantly depart from the classic model and rules. In our case, we take a different approach to understanding both probability and sampling error. And, that approach borrows from the Bayesian side of statistical theory. That’s why we report a “credibility interval” instead of a margin of error with our on-line polls.  There’s another whitepaper on our website that explains how to calculate a credibility interval in detail. 

308: Ipsos has a long history of polling in Canada and worldwide. How has political polling changed over the years in this country?

DB: Susan Delacourt’s new book on political marketing in Canada does a great job of describing the history of political polling in our country. I’d start with that. But, the biggest change I’ve seen is the willingness of the media to publish polling without doing even the most rudimentary investigation of the pollster or their methods. Blame the lack of resources or the pressures of the 24-hour news cycle, but it’s led to an embarrassing environment in Canada that hurts polling, the media and our democracy. Want to fix it? The media needs to start demanding disclosure from pollsters and refusing to publish those who don’t comply.

308: Are there any differences between polling in Canada and elsewhere, both in terms of how polls are conducted and the challenges of polling in Canada? 

DB: The biggest difference I see in polling around the world compared to Canada is the degree to which media in other countries both value polling and are stingy about giving it coverage. For example, major media outlets in the US like Reuters, CNN, Associated Press, and the New York Times all have polling experts on staff that strictly enforce their organization’s quality standards. They are also active players in polling – each has their own proprietary poll that they pay for and release. This used to be the case in Canada. Now, only a couple of media outlets (including our partner, CTV) do this. As a result, some so-called “pollsters” in Canada simply shop their free results around to various media outlets until they get a bite. If the free poll is “juicy” enough (never mind being accurate or conducted according to reasonable standards), it gets published. If the poll is wrong, who is to blame? For the media, they have the convenience of throwing the pollster under the bus.  But, by then the media cycle has moved on and the “pollster” is already working on their next free release. It’s shameful, and Canadians deserve better. 

By the way, while I’ve used the US as the point of comparison I could have easily used France, the UK, Italy, Spain, Mexico, Australia or New Zealand.  Ironically, we did the polling in Nigeria a couple of years ago and even there the amount of disclosure and review we went through with our media client would put most newsrooms in Canada to shame.

308: Do you have an explanation as to why Canadian media treats polling differently from other countries? Newspapers everywhere are going through the same financial issues.

DB: It is a mystery to me. It just seems that Canadian media don't really take polling seriously anymore. I know that's not entirely true, but it does seem that way. A good example is the CBC making a virtue out of not covering polls for a while. Instead of doing what the standard-setters do in other countries - which is to create a quality poll of record and challenge others to match it - they decided to abandon the field altogether. The BBC, AP, Reuters, etc. all went in the other direction.

308: How has the business of polling in general changed?

DB: The research business is in major transition. It’s funny that we get caught up in conversations about data collection methods like on-line vs. off-line; it’s almost a bit quaint. The truth is that the marketplace has already decided much of this – and on-line is winning in all markets where it’s feasible. The people who used to set the standards for what is acceptable in research, mainly governments and academics, are being supplanted by global corporations like P&G, Unilever and Coca-Cola. Outside of the US government and the EU, they are the biggest buyers of research in the world, and they are the ones setting the standards. And the new standard is methodologically ecumenical. It’s increasingly about creating global averages, speed and direction. Whatever gets you a quick, usable answer – be it on-line surveys, social listening, qualitative research, ethnography, passive data collection or Census data – that’s what will be used.

Apart from how global packaged goods companies are redefining research, the other major trend is the domination of the research industry by a few global firms. Given the capital requirements necessary to service global clients, the big players in the market (which are mostly European) are now dominating the business and their domination can only grow. Global clients increasingly want global research partners who can deliver a similar level of quality in all markets. To do this, the major global players are acquiring companies in all the markets that matter. Yes, there will always be important boutiques in all markets, but their competitors will increasingly be the global players. And, the global players are smart, well financed and tough.

308: What changes, if any, need to be made to ensure that online polling produces good results in the future?

DB: On-line already produces terrific polling, so we’re not talking about a fundamentally flawed methodology. But, where things are moving is away from on-line surveys conducted via single opt-in panels. Increasingly, we’ll be seeing more blended samples that select people from wherever they can be found on the Internet. That’s where the big players are all headed. But, in all seriousness, if the competition to on-line is IVR (robocalls), I already know how this battle ends. To directly answer your question though, making on-line surveys better is no different from making any other survey better – we need to satisfy the primary rules of validity and reliability.

Tuesday, August 13, 2013

The new projection model for the upcoming Nova Scotia election

The next provincial election likely to occur will be in Nova Scotia. Premier Darrell Dexter needs to call the vote by the spring of 2014, but all indications are that he will drop the writ some time in the late summer or early fall. Accordingly, ThreeHundredEight is today launching the Nova Scotia projection.

Unfortunately, there isn't a lot of polling data to go on just yet. The most recent numbers we have for the province date to the end of May, when Corporate Research Associates was last in the field for their quarterly polling. They should be out with the numbers from their next quarterly poll in early September.

The projection model has been given a few tweaks, as is always the case after each election - and particularly after such a humbling election as the one in British Columbia. One thing that has not been tweaked, however, is the seat projection model or the way that polls are weighted. The seat projection model has, time and again, shown itself to be a useful tool if the numbers plugged into it are accurate. That means the challenge is not to estimate the number of seats each party will win, but rather what proportion of the vote they will get.

A detailed description of how the polls are weighted and seats are projected can be found here. I will not go over that again, but rather explain some of the changes in the new model and the philosophy behind them.

First, a quick guide to what the new main projection graphic is showing (click to magnify):
At first glance, you will note that I have dropped forecasts for the election date from the chart. This is because I have decided to take a different approach to projecting electoral outcomes.

The model I used in the provincial elections in Alberta, Quebec, and British Columbia was based on a simple premise: what can the polls tell us about how they will be wrong? The inherent problem with that approach is obvious.

I was using the sample size of all the polls being taken into account by the model to estimate a rough margin of error for the projection. That gave me my high and low ranges. I then also used the degree of volatility from poll to poll to estimate the potential for future volatility in the days remaining between the last set of polls and the election day.
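In miniature, that old calculation looked something like this (the poll sample sizes here are hypothetical):

```python
import math

# Pool the sample sizes of every poll in the average and treat the total as
# one large simple random sample. Sample sizes below are hypothetical.

poll_samples = [1500, 800, 1000, 600]
n_pooled = sum(poll_samples)
moe = 1.96 * math.sqrt(0.5 * 0.5 / n_pooled)  # worst case, at p = 50%
print(f"pooled n = {n_pooled}, rough MOE = {moe:.1%}")
# About 1.6 points: deceptively tight if the polls share a common bias.
```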

But if the polls are off, it does not make much sense to use those polls to guess at how they will be off. The data they are recording is not going to give a hint at the potential error if the foundations upon which they are built are faulty.

Instead, the approach for the Nova Scotia election (and, if it works well, for future elections) is to use the degree of error polls have shown in the past to assess the probable range of outcomes.

I was already using this sort of adjustment in the last few elections. It was applied directly to the poll average, using the average over- or under-estimation by the polls in past elections to guess at how the polls might over- or under-estimate party support in future elections (in B.C., this was done for the Greens and Conservatives and worked very well). But rather than use this information to make a best guess, I will instead be using it to give a likely range of outcomes.

This is calculated based on a party's position in the legislature at dissolution: the governing party, the Official Opposition, a third party with multiple seats, a third party with a single seat, and parties without a seat in the legislature. The electoral outcome for each of these parties in recent elections is then compared to the polling average.

All cases in which a party in a particular position in the legislature was under-estimated in the polls are then used to calculate the average "High". For example, the average under-estimation (when polls under-estimated a party's support) in recent elections for the governing party has been by a factor of 0.97. That means that the weighted polling average (which does make an independent estimate of "Other" support) is adjusted by a factor of 0.97. The same is done for cases of over-estimation to calculate the "Low", and these numbers are then used to project the number of seats that can be won at these high and low numbers.
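The arithmetic is simple enough to sketch. Here is one plausible reading of it, treating each factor as the polling average divided by the actual result, so that under-estimation produces a factor below one; only the 0.97 comes from the text above, while the over-estimation factor and the division-based adjustment are my own illustrative assumptions:

```python
# High/Low ranges from historical polling error. Assumption: each factor is
# (polling average / actual result), so the average is divided by it. Only
# the 0.97 figure comes from the post; the 1.05 factor is invented.

under_factor = {"governing party": 0.97}  # polls ran low on average
over_factor = {"governing party": 1.05}   # hypothetical: polls ran high

def high_low(poll_average, position):
    """Return the (low, high) projected vote range for a party."""
    high = poll_average / under_factor[position]
    low = poll_average / over_factor[position]
    return low, high

low, high = high_low(0.30, "governing party")  # a governing party at 30%
print(f"low = {low:.1%}, high = {high:.1%}")   # low = 28.6%, high = 30.9%
```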

The minimum and maximum projections ("Min." and "Max." on the chart) are simply the worst cases of over- and under-estimation that have occurred in recent elections. That means the Alberta Progressive Conservatives in 2012 (governing party, under-estimated), the Newfoundland and Labrador Liberals in 2011 (Official Opposition, under-estimated), Wildrose in 2012 (other party with multiple seats, over-estimated), etc. In other words, if the election outcome falls outside of these minimum and maximum ranges, the polls have missed the target by an unprecedented amount. That this is a possibility should not be entirely discounted.

This hopefully gives readers a full understanding of the range of possible outcomes, based on past polling performance and what the data is showing. My role is not to make bets, but to try to figure out what the polls are saying and what they aren't saying, and to give people an idea of what to expect. But to narrow it down a little, I have also included boxes showing the range of most likely outcomes for each party. This means that each box represents the degree of polling error that has occurred in a majority of recent elections for a party in a similar legislative position. The chart below spells this out for the Nova Scotia vote:
As the governing party, there is a 68% chance that the electoral outcome for the New Democrats will fall within the average-to-high range. This is the range that is highlighted in the main projection chart. If we want to extend that further, we can say there is a 79% chance it will fall between the low and high marks, or an 84% chance that it will fall within the average-to-maximum range. This suggests that we should expect the polls to under-estimate NDP support - though that is not necessarily what is going to happen.

For the Liberals as the Official Opposition, the range is not so tight. The most likely individual outcome is for the result to fall within the average-to-high range (42%), but it is more likely that it will fall outside of that range (the remaining 58%). To find the smallest range that incorporates the most likely outcome, we have to stretch that to the low-to-high range. There is a 63% chance that the outcome will fall within that range.

For the PCs, a third party with multiple seats, a slight over-estimation is the most likely individual outcome, but we also have to stretch the range from low-to-high to get to a 67% chance. For the Greens, an over-estimation is almost certain: there is a 60% chance the outcome will fall within the low-to-average range, and a 95% chance that it will fall within the minimum-to-average range.
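Mechanically, those percentages are just frequency counts over past elections. A sketch, with an entirely fabricated history of actual-to-polled ratios for governing parties:

```python
# The share of past elections in which the actual result for a party in a
# given legislative position landed in a band relative to the polls. These
# actual/polled ratios are fabricated for illustration only.

ratios = [1.02, 0.99, 1.06, 1.01, 0.95, 1.03, 1.08, 0.98, 1.04, 0.99,
          1.01, 0.93, 1.05, 1.02, 1.07, 1.00, 0.97, 1.03, 1.01]

in_band = sum(1 for r in ratios if 1.00 <= r <= 1.08)  # average-to-high band
print(f"{in_band / len(ratios):.0%} of past results fell in that band")
```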

This is the extent of the probabilities that will be calculated for the province-wide projection. I was not pleased with the picture the probabilities painted in the B.C. election: the NDP was given a 98.3% chance of winning the popular vote, and an 83.3% chance of winning the most seats. Technically, that doesn't mean the probability forecast was wrong. Realistically, the polling data may not have supported a 1.7% chance of the B.C. Liberals prevailing. The seat projection probabilities were probably closer to the mark (expecting this sort of "B.C. Surprise" in about one out of every six elections is not entirely unrealistic), but I've decided to drop this calculation for the time being until I have the chance to go over in more detail what these calculations were based upon. I think the seat probabilities are on the right track (they were based, after all, on 308's track record), but the vote probabilities may in the end have been based on inappropriate data.

Probability calculations for individual seat calls, however, will remain a feature of the new model. They performed very well in the B.C. election, the first time they were calculated:
In the ridings where the confidence of a correct call was between 50% and 59%, 60% were called correctly. When the confidence was between 60% and 69%, the accuracy rate was 71%, and so on, as the chart above shows. The results of the B.C. election have been incorporated into the probability calculations, which has had the effect of boosting them slightly (yes, one of the ridings is now called with 100% confidence!).
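For those curious, a calibration check of this sort takes only a few lines of code: bin the riding calls by their stated confidence and compare each bin to the share of calls that proved correct. The data below is invented, not the actual B.C. results.

# Each tuple: (stated confidence of the call, whether the call was right).
calls = [
    (0.55, True), (0.52, False), (0.58, True), (0.63, True),
    (0.67, False), (0.66, True), (0.74, True), (0.78, True),
    (0.71, False), (0.85, True), (0.88, True), (0.93, True),
]
bins = [(0.5, 0.6), (0.6, 0.7), (0.7, 0.8), (0.8, 0.9), (0.9, 1.01)]
for lo, hi in bins:
    in_bin = [correct for conf, correct in calls if lo <= conf < hi]
    if in_bin:
        rate = sum(in_bin) / len(in_bin)
        print(f"{lo:.0%} bin: {rate:.0%} correct over {len(in_bin)} calls")

A well-calibrated model should produce accuracy rates that rise roughly in step with the stated confidence, which is what the B.C. numbers showed.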

Unless more detailed polling data is made public, the model will not be presenting projections at the regional level. The model is designed for regional-level polling, however, splitting the province up into Cape Breton, the Halifax Regional Municipality, and the Rest of the Mainland. Corporate Research Associates divides the province like this, but I don't know yet whether this information will be available throughout the campaign. If it isn't, regional support will be 'estimated' from province-wide polls in order for the data to be plugged into the model. I don't expect this to be a problem, however. I got a peek at CRA's regional results for their last poll, and they generally mirrored what a proportional swing would calculate from province-wide results. It seems that, at this stage at least, support for the parties in Nova Scotia is increasing and decreasing proportionately at the regional level.
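For readers unfamiliar with the term, a proportional swing simply scales each party's regional baseline by the ratio of its current province-wide support to its baseline province-wide support. A minimal sketch with invented numbers (not CRA's actual figures):

# Invented baselines and current numbers for illustration.
baseline_province = {"NDP": 45.0, "Liberal": 27.0, "PC": 24.0}  # e.g. last election
current_province = {"NDP": 35.0, "Liberal": 38.0, "PC": 22.0}   # e.g. latest poll
baseline_region = {"NDP": 50.0, "Liberal": 25.0, "PC": 21.0}    # e.g. Halifax RM

estimated_region = {
    party: baseline_region[party] * current_province[party] / baseline_province[party]
    for party in baseline_region
}
print(estimated_region)  # NDP ~38.9, Liberal ~35.2, PC ~19.3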

I am hopeful that these new measures will provide a good result in Nova Scotia. But it is a small province with which to launch a new methodology. The populations of Toronto and Montreal, not to mention a few other cities, are larger than that of Nova Scotia (accordingly, I am hoping to cover their mayoral elections). Why should readers who don't live in the province be interested?

Well - the province does have interesting elections. There are three competitive parties and a lot of ridings where all three have a chance of winning. The NDP government there is only in its first term and is the first New Democratic government ever elected in Atlantic Canada. A rebuke may not mean anything in particular for the federal New Democrats, but it certainly would not bode well. The Liberals have been buoyed throughout the region, perhaps due to the federal party's new appeal, and have not been in power in Nova Scotia since 1999. The PCs, for their part, need to climb out of the trough that delivered them their worst performance in provincial history in 2009.

If that isn't enough, provincial results in Nova Scotia generally track those of the federal parties in the province quite well - if not always in absolute terms, at least directionally (they seldom move in opposite directions).

So, perhaps the provincial election result in Nova Scotia will tell us something about how the federal parties will do in the province in 2015. Then again, perhaps not: the numbers tracked quite well until only recently.

I don't expect a huge number of polls for the race in Nova Scotia, unless one of the firms feels it has something to prove. But hopefully there will be enough to keep things interesting.

Monday, August 12, 2013

July 2013 federal polling averages

Federal polling has hit a summer lull, with only two national polls having been released for the month of July, surveying almost 4,700 Canadians. Due to the small number of polls last month, I will only briefly go over the monthly averages for the sake of continuity.
The Liberals still led in July, with an average of 32.4% support. That was a drop of two points from June. The Conservatives were up 0.6 points to an average of 29.5%, while the New Democrats were down one point to 22.8%. The Bloc Québécois was up 1.3 points to 6.6% and the Greens were up 0.3 points to 6.5%, while 2.1% of Canadians said they would vote for another party.

That drop for the New Democrats is deceptive, however. If we look only at the last set of polls from EKOS and Forum, the two firms in the field in July, we see that the NDP actually gained when comparing apples to apples:
EKOS and Forum were last in the field within a 30-day period between mid-May and mid-June. A simple average of those polls shows that the Liberals have dropped 3.7 points, while the NDP was up 2.2 points and the Conservatives were up 1.5 points.
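The arithmetic here is just a matter of comparing like with like. A minimal sketch, using placeholder numbers rather than the actual EKOS and Forum results:

# Placeholder figures for two polling windows; not the real numbers.
earlier = {"EKOS": {"LPC": 36.0, "NDP": 21.0, "CPC": 28.0},
           "Forum": {"LPC": 38.0, "NDP": 22.0, "CPC": 30.0}}
later = {"EKOS": {"LPC": 32.0, "NDP": 24.0, "CPC": 30.0},
         "Forum": {"LPC": 35.0, "NDP": 23.0, "CPC": 31.0}}

def simple_average(polls, party):
    return sum(p[party] for p in polls.values()) / len(polls)

# Average only the firms that were in the field in both windows.
for party in ("LPC", "NDP", "CPC"):
    change = simple_average(later, party) - simple_average(earlier, party)
    print(f"{party}: {change:+.1f} points")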

Regionally, the Liberals were narrowly ahead in British Columbia and Ontario, while they held more significant leads in Quebec and Atlantic Canada. They were second in both Alberta and the Prairies, meaning they were first or second throughout the country. They are the only party that can say that.

The Conservatives were ahead in Alberta and the Prairies, and were second in Ontario and Atlantic Canada. They were tied for second in British Columbia with the NDP, and were fourth in Quebec.

The NDP was only tied for second in British Columbia and was third in every other region of the country. In Quebec, that meant the Bloc had displaced the NDP for runner-up status, the first time that has happened under Thomas Mulcair's leadership.

Thanks in large part to the close race in Ontario, the Conservatives would be able to win the plurality of seats despite their deficit in support. They would win 135 seats with these numbers, up 15 from the June projection. The Liberals dropped 17 seats to 117 and the NDP dropped 30 seats to only 50. With the NDP slumping in Quebec, the Bloc manages to win most of the francophone regions of the province by default and takes 34 seats, up 32 from June. The Greens would win two, both in British Columbia.

The polling data is a little too thin to draw any significant conclusions. But it does seem that the Liberals are coming down from their highs after Justin Trudeau's leadership victory, and that both the Tories and the NDP are winning back some of that lost support. Nevertheless, the Liberals remain ahead, and the Conservatives and NDP are still showing signs of weakness in important parts of the country. The Bloc's rebound is not likely to be anything definitive, but the party may have become the 'none of the above' option that gathers support whenever the Liberals and NDP fail to impress. Whether that support would show up at the ballot box, however, is another question entirely - and the answer will be vital to the hopes of both Trudeau and Mulcair in 2015.

Thursday, August 8, 2013

Interview with Frank Graves of EKOS

Today's article is the second of three for The Globe and Mail looking at polling methodologies used in Canada. Two weeks ago, I looked at live-caller telephone polling and interviewed Don Mills of Corporate Research Associates. Now, it is time to look at interactive voice response polling, or IVR. I will look at online polling next, likely in another two weeks, and will have interviews with some of Canada's leading online pollsters to post as well.

Frank Graves, President of EKOS Research Associates, was kind enough to give me a very complete interview over a few emails. In it, he discusses some aspects of polling, and of IVR polling in particular, that I would have liked to cover in my article but could not because of length. Below is the full transcript of the email interviews.

308: Several years ago, EKOS moved from traditional live-caller polling to interactive voice response. Why was that decision made?

FG: Actually, we continue to do live interviewing and we have a large random probability panel which we use for survey work. Most but not all of our media polling comes from our version of IVR, which is a series of carefully tested protocols that we are now calling HD-IVR (high definition IVR). IVR is not a survey methodology, it is a method of reaching respondents. The sampling strategy, call-back regimens, data purification techniques, instrument design, etc. are all crucial and determine the accuracy of the survey results. We turned to IVR several years ago, noting the success of some American IVR pollsters. IVR has certain limitations but for short surveys, properly designed and administered, it can be an excellent tool. We particularly like the ability to generate large random samples at far lower cost than with live-interviewer CATI. In our experiments, HD-IVR gives results that are equivalent to or better than those with live CATI, and much more accurate than any opt-in online methods.

308: Having used different methodologies in the past, what do you consider the strengths of IVR compared to those other methodologies?

FG: Once again noting that we continue to actively use many other methods (we have our own call centre for live CATI) and we maintain a large random probability panel (PROBIT), I would give the following list of strengths for properly applied IVR techniques (this includes the application of call-backs, noise detection and elimination, a dual land-line and cell-phone sampling frame, etc.):
- Accuracy, particularly on simple behavioural and intention measures.
- Speed: large samples can be assembled and analysed very rapidly.
- Large samples, which produce lower margins of error (particularly for sub-population analysis and tracking) [see the sketch after this list].
- Economy (the live interviewer cost is replaced by robotics).
- Minimisation of undesirable mode effects from a live interviewer (particularly important on questions which can produce social desirability bias).
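To put a number on the margin-of-error point in the list above (my illustration, not an EKOS formula): for a simple random sample, the standard 95% margin of error on a proportion p is 1.96 * sqrt(p(1-p)/n), so it shrinks with the square root of the sample size. A quick sketch:

from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error, in percentage points, for a proportion p
    measured on a simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n) * 100

for n in (1000, 2500, 5000):
    print(f"n={n}: +/- {margin_of_error(0.35, n):.1f} points")
# n=1000: +/- 3.0; n=2500: +/- 1.9; n=5000: +/- 1.3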

308: What are IVR's limitations, and what can be done to correct for them?

FG: Once again I want to stress the difference between HD-IVR, or any properly designed IVR system, and “raw” IVR. We get much better results with several important refinements which are often not applied in IVR polls. The biggest limitations of IVR are:

- Length: the survey must be quite short.
- Reputational problems: the use of IVR is associated with reputational issues, particularly in Canada, where there is lower familiarity with properly applied IVR. Sloppy applications of IVR, and the nefarious connection to some vote suppression activities, have done nothing to help this problem.
- Stricter limitations on calling periods.
- Programming complexity to deal with multiple random versions to eliminate response set biases and sequencing effects.
- More people are called so there is a modest increase in the intrusiveness of the research.
- In order to get sufficient representation of younger respondents we have to engage in call backs and a judicious sample of cell phone-only populations.
- Response rates are somewhat lower than those for live interviewers, but with our techniques only modestly so, and with less systematic patterns of non-response.
- Our experiments show that there is more random noise in IVR than with live interviewer. This noise is easily detected with testing and can be purged.

308: What do you mean by noise?

FG: By noise I mean responses which are not measuring the concept being tested. Noise is random, meaningless data. The analogy is drawn from psychoacoustics but applies to other areas such as this (I believe Nate Silver uses the term in the title of his last book). As an example of random noise, consider the difference between someone answering the questions thoughtfully and accurately (signal) and someone just randomly pushing numbers. We find that the incidence of people answering questions about fictitious events or products is higher in IVR than with a live interviewer. This applies to other unwanted survey behaviour as well: what we used to call anomalous response sets (yea- and nay-saying, and more recently speeding and straight-lining). With the noise detection questions we can identify and remove these sources of noise from the sample.
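To make the idea concrete, here is a minimal sketch (my own, with invented field names) of the kind of screen that could flag a respondent who claims familiarity with a fictitious item, or who straight-lines a battery of questions:

def is_noise(resp):
    """Flag a respondent as probable noise."""
    # Claims to know a product or event that does not exist.
    if resp.get("knows_fictitious_brand"):
        return True
    # Straight-lining: identical answers across a battery of questions.
    battery = resp.get("ratings", [])
    if len(battery) >= 5 and len(set(battery)) == 1:
        return True
    return False

sample = [
    {"knows_fictitious_brand": False, "ratings": [3, 4, 2, 5, 3]},
    {"knows_fictitious_brand": True, "ratings": [2, 3, 3, 4, 2]},
    {"knows_fictitious_brand": False, "ratings": [4, 4, 4, 4, 4]},
]
clean = [r for r in sample if not is_noise(r)]
print(f"Kept {len(clean)} of {len(sample)} respondents")  # Kept 1 of 3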

308: Generally speaking, how does IVR polling compare to other methodologies in terms of costs and effort?

FG: The front-end programming and database management is more complex, but the obvious savings are in live interviewer time. Long distance charges are higher because of the greater number of calls. Our costs and efforts are perhaps half of what a live interviewer survey would cost, and comparable to the costs of our probability panel offerings. Opt-in panels, where respondents volunteer for surveying and have never been spoken to by the survey organization, are cheaper still than HD-IVR.

308: Can you explain how your 'probability panel' is different from 'opt-in' panels?

FG: Probability methods select each member of a population with an equal probability of selection (EPSEM in Kish’s terminology). This is a canon of good sampling and the foundation of the ability to apply the central limit theorem and the law of large numbers, the foundations of inferential statistics. We sample each member of the population with a known probability of appearing in the sample. In the case of opt-in or convenience sampling there are (at least) two fundamental problems. The sample is NOT randomly drawn from a frame of all individuals in the population: respondents are invited to join, or come from pre-existing lists covering some other portion of the population. They therefore opt in or volunteer (typically for material incentives), and their relationship to the broader population is unclear. Since the process is not random, inferential statistics are not possible (including the calculation of a margin of error). The problem is worsened by systematic coverage errors, where those who cannot or will not do surveys online will never appear in the sample.

Now, some say that as response rates decline, the process of random sampling no longer meets the requirements of statistical inference. The hard, third-party research suggests this is not true. While we do have selection effects even with a random invitation, this is a much smaller problem than those same effects PLUS a non-random invitation. The top authorities remain unconvinced that one can achieve scientific accuracy and a margin of error with non-random samples (MOST online panels are non-random). Under rare and extremely stringent conditions this happens, but in most cases it is wrong. By the way, the response rates with HD-IVR are now close to what we get with live interviewers. And objections from those using opt-in panels are hard to take seriously, as their response rates are incalculable - and if they could be calculated, they would reflect the percentage of all those who saw the internet ad and didn't join the panel (maybe 99.9% or higher?).
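The statistical core of this argument is that a probability sample gives every member of the frame a known inclusion probability, from which a design weight can be computed; an opt-in panel has no such probability, so there is nothing to invert. A toy sketch of the idea (my illustration, not EKOS's procedure):

import random

population = list(range(100_000))      # a frame covering the whole population
n = 1_000
sample = random.sample(population, n)  # EPSEM: every member has probability n/N

inclusion_prob = n / len(population)   # known in advance for every member
design_weight = 1 / inclusion_prob     # how many people each respondent represents
print(f"Each respondent stands for {design_weight:.0f} people")  # 100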

308: Because of the issues related to the use of robocalls in political campaigns, whether legitimate or not, and by telemarketers, there has been increased criticism of this methodology recently. What kind of problem does this pose for polling firms that use IVR?

FG: We have spoken to the CRTC on this issue, as well as the MRIA. We certainly would welcome limitations on the less savoury applications of robocalls, as this would lessen our problems with public suspicion. We use a very rigorous code of application that meets the CRTC requirements for automatic dialling devices. We would welcome clarification that would distinguish legitimate, research-based uses of IVR from the much more common mass-market uses. This distinction does apply to polling and market research in other areas, but it is unclear how it would apply in the context of IVR. We would welcome sound guidelines and a demarcation between legitimate survey research and other areas of use.

308: You have recently discussed the challenges of building a representative sample of voters. But what challenges do you face in building a representative sample of the population, considering falling response rates and increased use of cell phones over landlines?

FG: We only really know who the voters are after the vote, so this will remain a challenge. In the case of representative samples of known populations, careful sampling, call-backs and weighting can continue to produce scientific accuracy when based on random sampling, even with steeply declining response rates. Coverage errors for cell-phone-only and offline respondents can also be solved, but these subpopulations are not included in a lot of current work by others. Experimental testing can identify and calibrate deficiencies and patterns of selection even when using random selection. These patterns can be both demographic and psychographic, but they are correctable.

308: And where does the challenge come in building a representative sample of voters?

FG: The challenge is not one of modelling a known population but of predicting a future event. We can never know this with certainty, and guesses based on the demographic characteristics of the past vote are very limited solutions. Some things that used to work (e.g. enthusiasm) no longer work, and demographic turnout can and will vary from election to election. Asking people how certain they are to vote is basically useless for predicting who will vote. Past voting patterns are of some assistance, as are questions about whether you know where your polling station is. But these are highly limited aids in situations where more than half of the eligible population isn't voting, and those who stay home are systematically different in their vote intentions from those who show up. Increasingly, political campaigns are all about getting out your vote and keeping the opponents' vote at home. Mandatory voting would eliminate this problem, but I am not holding my breath on that one.

At the federal level, one should be able to accurately forecast the outcome with sufficient tracking and diagnostic tools. And we have correctly forecast all federal elections save the ‘miss’ in the 2011 election, which got the winner right but not the majority. In fairness, no one else predicted a majority that time, and our poll was within the MOE of all polls for that election. We have been working extensively to understand the issues of turnout (which is the key - NOT late switching or undecided movements, as some have claimed). We are very confident that we will call the next federal election outcome accurately, as we did in all previous attempts.

308: What role does weighting play in producing accurate results?

FG: Weighting is very important, but it should be a fairly light touch with crystal clear guidelines. It should never be used to correct huge deficiencies (e.g. weighting younger respondents by several times). Our unweighted HD-IVR gives very similar results to our weighted version (age, gender, household size). One should definitely not root around in the weighting bin until things look okay. And pollsters should produce, or have available, both weighted and unweighted results. If the weighted results look really different from the unweighted, then something is wrong with the sample.
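To illustrate the kind of light-touch demographic weighting Graves describes, here is a minimal sketch (my own, with invented targets and data) that post-stratifies a sample by age and compares the weighted and unweighted result:

# Invented population targets and an age-skewed sample, for illustration.
population_share = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}
respondents = ([("18-34", "LPC")] * 180 + [("35-54", "LPC")] * 350
               + [("55+", "CPC")] * 470)  # youth under-represented

n = len(respondents)
sample_share = {g: sum(1 for age, _ in respondents if age == g) / n
                for g in population_share}
weight = {g: population_share[g] / sample_share[g] for g in population_share}

# Post-stratification weights sum back to n, so dividing by n is valid.
lpc_raw = sum(1 for _, vote in respondents if vote == "LPC") / n
lpc_weighted = sum(weight[age] for age, vote in respondents if vote == "LPC") / n
print(f"LPC: {lpc_raw:.1%} unweighted, {lpc_weighted:.1%} weighted")
# A gap this large (53.0% vs 63.0%) is exactly the warning sign Graves describes.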

308: EKOS has been in the business for a very long time. How has political polling changed over the years?

FG: That is an essay in itself, but I would say that the methodological challenges we have been discussing and the transformation of methodologies are very important.

I think that the media-pollster relationship is in a state of disrepair. I think there are inadequate budgets, and I think the statistical fluency in the media, and possibly the public, has declined. The role of the aggregators is another new feature, and something I find to be a mixed blessing (although I do think you give a really good effort here, Eric). I detest the conflation of polling accuracy with forecasting the next-day election. This yardstick comes from a time when most people voted and those who didn't weren't particularly different. The correspondence between the election and final polls was a great way to check a pollster's accuracy. When half or more aren't voting, and those who don't vote have different political preferences, this becomes a lousy yardstick for “polling accuracy”. There is a continued need for forecasting, and it is a related skill, but forecasting and modelling the population should be seen as related but separate tasks.

308: If elections are no longer good ways to gauge a pollster's accuracy, how else can the accuracy of a pollster's work be tested?

FG: Pollsters should conduct proof-of-concept testing with known external benchmarks to show that they can achieve representativeness. Important polls should at least occasionally include a basic inventory of benchmark indicators of representativeness, such as: Do you smoke? Do you own a valid Canadian passport? Do you rent or own your home, with or without a mortgage? What type of heating fuel do you use? And the unweighted raw data should look like the population on key demographic measures and on these external benchmarks.

308: If the media does not have the budget for polling or dedicated poll analysts (and that will not be changing), what are pollsters to do? Should they back away or do they have a responsibility of some sort to put out numbers?

FG: They should probably limit their activities to those areas where they can put their best effort forward. The media should pay (as they do in the US), as this is an area which really does generate viewership and readership. The industry could consider a consortium of players to offer this up as an industry service during elections. Or perhaps we could look at alternative models, such as Rasmussen.com, which successfully sells directly to consumers with subscriptions.

308: How has the business of polling in general changed?

FG: The ‘business’ of polling has changed dramatically. We have discussed some of the methodological and technological transformations. Political polling really isn't a ‘business’ for any of those doing it in Canada. Historically, we have probably been the largest supplier of polling services to the federal government. The federal polling budget has dropped from over $30M in 2006 to under $4M last year. This is a rather breathtaking elimination of what was non-partisan policy and communications work based on listening to Canadians. Interestingly, while “listening” to Canadians has all but disappeared, “persuading” Canadians has burgeoned. In 2006, the two saw roughly similar expenditures. Today, federal spending on advertising is probably 30 to 40 times what it is on polling. Fortunately, our investments in new survey technologies have strengthened our other markets, and we are now experiencing growth and profits. While we no longer depend on federal markets, it is our hope that the federal government will return to listening to Canadians again.

308: In your polls, particularly of the general population, EKOS has tended to have larger proportions of people supporting the Greens or 'Others' than other firms. Why is that?

FG: Our polls (particularly between elections) are focussed on all eligible voters. We believe that our polls accurately measure the percentage of all eligible voters who support the Green Party on the day of the poll. If one doesn't prompt for the Green Party, one will get lower incidences, as one would if any of the other party prompts were dropped. The simple fact is that many GPC supporters don't bother voting. They are younger, and younger voters vote at half the rate of older voters. They also correctly note that under the first-past-the-post voting system they are unlikely to see any electoral results if they did vote, so this is a further de-motivating factor. In 2008, nearly 7 per cent of all voters voted for the GPC. If you don't mention the GPC in your prompting you may get a number closer to the election result (or you may well be lower than that). But I don't like to mix in ad hoc adjustments for the fact that GPC supporters don't vote as much when the goal is to measure all eligible voters. We carefully note that GPC support historically translates into fewer actual voters. Other pollsters have their own legitimate views on how this problem should be handled.

308: What changes, if any, need to be made to ensure that IVR polling produces good results in the future?

FG: Our HD-IVR has been refined to provide scientifically accurate models of all eligible voters, and we have the experimental evidence to show that. If you separate out the question of how to make better forecasts of turnout, there is lots of work needed there, and we and others are focusing on that challenge. As Yogi Berra noted, ‘prediction is really hard, particularly when it's about the future'.