### Forecasting and Projection Methodology

The following is a detailed explanation of the forecasting and projection methodology for the upcoming provincial election in British Columbia. The fundamentals of the model were also employed in the federal election of 2011 and the provincial elections in 2011 and 2012 in Prince Edward Island, Manitoba, Ontario, Newfoundland and Labrador, Saskatchewan, Alberta, and Quebec. Improvements, updates, and new features were added after each election season. This model will also be used with necessary adjustments for any upcoming elections in Ontario, Quebec, and Nova Scotia.

This methodological explanation is for projections and forecasts for an upcoming election. The methodology of seat projections for individual polls and for the poll aggregations for Canada, Ontario, and Quebec when an election is not scheduled is slightly different, and is explained here and here.

In the following description, the "projection" refers to the expected results of an election held on the date attributed to the projection itself. The "forecast" refers to the range of probable results estimated for the future date of the actual election. This is the difference between what Nate Silver's FiveThirtyEight called the "now-cast" and the "forecast" during the 2012 U.S. presidential election.

Poll aggregation

The projection model starts with the aggregation of all publicly available opinion polls. Polls are weighted by their age and sample size, as well as by the track record and past performance of the polling firm.

The weight of a poll is reduced by 35% with each passing week outside of an election campaign and each passing day once a campaign has officially begun. In polls taken over multiple days, the median day of the poll is used for the weight. For example, a poll taken between March 12 and March 14 would be dated for March 13.
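As a rough sketch, this age weighting can be expressed in a few lines of Python. The 0.65 retention factor follows from the 35% per-period reduction; the function names are mine, not the model's:

```python
from datetime import date

def median_field_date(start: date, end: date) -> date:
    """Midpoint of the polling field period, used to date the poll."""
    return start + (end - start) // 2

def age_weight(poll_date: date, today: date, campaign: bool) -> float:
    """Weight decays by 35% per week outside a campaign, per day during one."""
    age_days = (today - poll_date).days
    periods = age_days if campaign else age_days / 7.0
    return 0.65 ** periods

# A poll in the field March 12-14 is dated March 13.
d = median_field_date(date(2013, 3, 12), date(2013, 3, 14))
```

A week-old poll outside a campaign and a day-old poll during one thus carry the same 65% weight.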

The sample size weighting is determined by the margin of error that would apply to the poll, assuming a completely random sampling of the population. The margin of error for a poll of 1,000 people, for example, is +/- 3.1%. A poll with a sample of 500 people has a margin of error of +/- 4.4%. Rather than giving the poll of 500 people half the weight of the poll of 1,000 people, the smaller poll would be weighted at 70% (3.1/4.4) of the larger poll.
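The sample-size weighting follows directly from the standard margin-of-error formula for a simple random sample. A minimal sketch, assuming the conventional 95% confidence level and a 1,000-person reference poll:

```python
import math

def margin_of_error(n: int) -> float:
    """95% margin of error, in points, for a simple random sample of
    size n (worst case, p = 0.5): 1.96 * sqrt(0.25 / n) * 100."""
    return 1.96 * math.sqrt(0.25 / n) * 100

def sample_weight(n: int, reference_n: int = 1000) -> float:
    """Weight relative to a reference poll: the ratio of margins of error."""
    return margin_of_error(reference_n) / margin_of_error(n)

# margin_of_error(1000) is about 3.1, margin_of_error(500) about 4.4,
# so a 500-person poll gets ~70% of the weight of a 1,000-person poll.
```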

An analysis of a polling firm's past experience in a province or at the federal level has suggested that polling firms that were not active in a jurisdiction's previous election have a total error 1.25 times that of firms that were active in the previous election. Accordingly, polling firms with prior experience in a jurisdiction are weighted more heavily than those that have none.

Polling firms are also weighted by their track record of accuracy over the last 10 years. Their accuracy rating is determined by three factors: 1) the last poll the firm released in an election campaign, 2) their average error for all parties that earned 3% or more of the popular vote, and 3) the amount of time that has passed since the election. In order to take into account changes of methodology or improvements made over time, the performance of a polling firm in a recent election is weighted more heavily than their performance in an older election.

The accuracy rating is determined by comparing the average error, weighted by how recent the election is, of the best performing polling firm to others. For example, if the best performing firm had an average error of 1.5 points per party, a firm with an average error of three points per party would be given half the weight.
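A simplified sketch of the accuracy rating, assuming a simple geometric discount for older elections — the `decay` value is illustrative, as the model's exact recency weighting is not spelled out here:

```python
def recency_weighted_error(records, decay=0.8):
    """Average per-party error across past elections, weighting recent
    elections more heavily. `records` is a list of (years_ago, error)
    pairs; `decay` (an illustrative value) discounts older elections."""
    num = sum(decay ** years * err for years, err in records)
    den = sum(decay ** years for years, _ in records)
    return num / den

def accuracy_weight(firm_error, best_error):
    """A firm's weight relative to the best-performing firm."""
    return best_error / firm_error

# A firm averaging 3.0 points of error per party gets half the weight
# of the best firm averaging 1.5 points.
```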

All of these ratings are combined to give each poll in the projection model a weight. In sum, this means that newer polls with larger sample sizes from experienced polling firms with a good accuracy record are weighted more heavily than older and smaller polls from inexperienced firms with a bad track record.

While polls are generally accurate, they sometimes have methodological biases or are unable to bridge the gap between voting intentions and actual voting behaviour. Adjustments need to be made in order to take into account these problems, which can be caused by turnout and organization - things not always measured in poll results. However, there is sometimes little uniformity in how election results differ from poll results. For example, in the recent Quebec election the Liberals out-performed the polls by a significant margin, after having under-performed their polling in the 2007 and 2008 elections. The federal Conservatives out-performed the polls in 2008 and 2011, but under-performed in 2006.

Adjustments to the aggregation, thus, have to be handled carefully and applied only when there is an overwhelmingly strong justification to do so. In the case of the 2013 British Columbia projection, the only adjustment is made for the support of the B.C. Greens. An analysis of elections over the last decade or so shows that in more than nine out of every ten cases the Greens are over-estimated in the polls, performing at about three-quarters of their expected results. In about 19 out of 20 cases, parties without a seat in the legislature also under-perform by about the same amount. This also occurred in the 2009 B.C. election, when the Green result was 77% of what was determined by the polls. This makes for a very strong case to make a similar adjustment for the B.C. Greens, with their lost support being distributed to the other parties proportionately. That the Greens are running candidates in only 72% of ridings also argues for applying this adjustment.
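The discount and proportional redistribution can be sketched as follows; the poll numbers are made up for illustration, and the 0.75 factor is the approximate figure cited above:

```python
def apply_party_discount(support, party, factor=0.75):
    """Scale one party's aggregate support by `factor` and redistribute
    the freed-up share to the other parties in proportion to their own
    support. `support` maps party name -> share (summing to ~100)."""
    adjusted = dict(support)
    lost = support[party] * (1 - factor)
    adjusted[party] = support[party] * factor
    others_total = sum(v for k, v in support.items() if k != party)
    for k in support:
        if k != party:
            adjusted[k] = support[k] + lost * support[k] / others_total
    return adjusted

# Illustrative numbers only:
polls = {"NDP": 46.0, "Liberals": 31.0, "Conservatives": 13.0, "Greens": 10.0}
adj = apply_party_discount(polls, "Greens")
```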

A similar case can be made for the B.C. Conservatives, who have no seats in the legislature. But there is a danger of applying an adjustment to a case that is outside of the sample. With their decent polling results over the last few years, the potential performance of the Conservatives is an unknown. Applying the adjustment for their lack of a seat in the legislature may not accurately reflect their potential results. The performance of the polls in the 2009 B.C. election also argues against applying this sort of adjustment, as the results for "other parties" were quite close to the final polls. But as the Conservatives have only nominated candidates in 66% of ridings, the adjustment (worth 73% of their polling level) seems to be warranted.

The vote projection

After weighting all the polls to determine the average result and tweaking it according to any applicable adjustments, the projection model gives the best estimate of support that each party is likely to get in an election.

But rather than suggest that the poll aggregation, after adjustment, reflects the results of an election "held today", the projection will instead be presented as reflecting the result as of the last day of polling in the projection model. For example, if a poll is released on April 14 but the last day the poll was in the field was April 12, the vote projection will be presented as being the best estimate of what the result of an election held on April 12 would have been.

The provincial or national vote projection, however, has little bearing on the seat projection. That is because the seat projections are calculated regionally, using the same methods described above to estimate support in each region of a province or in the country. Polls whose regional definitions do not exactly match the projection model's definitions are adjusted accordingly, using the difference between the previous election result in a region as defined by the model and as defined by the polling firm.

The performance of this method

This adjusted and weighted poll aggregation performs better than most individual polls and better than an unweighted and simple averaging of the last polls of a campaign. An analysis of the seven provincial elections in 2011 and 2012 shows that the model used by ThreeHundredEight.com performed better, on average, than 11 of the 15 polling firms active in at least one of these elections (and two of the four better performers were active in only one campaign) and was about 10% better than a simple average of polls.

Recognizing the limitations and vote ranges

But despite performing better than most polls and the average of the polls, the vote projection is still heavily dependent on what the polls show. It can thus fail catastrophically when the polls do, as occurred in the 2012 provincial election in Alberta. A measure of the likely error in the vote projection needs to be made.

This measure is reflected in the high and low results for the vote projection. It is determined by applying the margin of error for the estimated "sample" of the projection itself. This sample is measured by comparing the weights and sample sizes of other polls to the most highly weighted poll in the projection. For example, let us assume that a projection has three polls in the database. All three have a sample of 1,000 people. One poll is rated at half the weight of the most highly rated poll, while the third poll is weighted at one-fourth of the most highly rated poll. The estimated "sample" of the projection would then be considered to be 1,750 people (1,000 people from the first poll, 500 from the half-weight poll, and 250 from the quarter-weight poll). That 1,750 sample is then used to determine the margin of error for each party, which changes according to the support a party holds. An easy demonstration of this is that a party with 2.5% support has a different margin of error than a party with 25%. If the margin of error in the poll is 3.1%, the range of results for the small party can't be as low as -0.6%.
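The effective "sample" and the party-specific margin of error can be computed like this, a sketch using the standard binomial formula:

```python
import math

def effective_sample(polls):
    """`polls` is a list of (weight, sample_size) pairs. Each poll
    contributes its sample scaled by its weight relative to the
    most highly weighted poll."""
    top = max(w for w, _ in polls)
    return sum(n * w / top for w, n in polls)

def party_moe(share, n):
    """95% margin of error, in points, for a party at `share` percent
    of an effective sample of n."""
    p = share / 100.0
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

# The three-poll example above: full, half, and quarter weight.
n_eff = effective_sample([(1.0, 1000), (0.5, 1000), (0.25, 1000)])
# n_eff = 1750; a party at 25% has a wider margin than one at 2.5%,
# so a small party's low range cannot dip below zero.
```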

The margin of error attributed to the projection's "sample" is then used to determine the likely ranges of the projection itself. Theoretically, this range is based on the assumption that the polls have accurately reflected the mood of the electorate, within their respective margins of error.

Seat projection methodology

Once the vote projection and likely ranges for each party are determined, the model then makes a seat projection. This seat projection is based on the vote projection: if the first is wrong, the second will be as well. If the vote projection is accurate, the seat projection will also be accurate. With completely accurate polls, the seat projection model would have a margin of error of only 3.2 seats per party and make the right call in each riding 85% of the time.

At its core, the seat projection model uses a simple proportional swing method based on the difference between the results of the last election and current polls. Put simply, if a party managed 20% in a given region in the previous election and is now polling at 40% in that same region, their results in each individual riding would be doubled. The image below shows how this method would have estimated the NDP's support in the riding of Trinity-Spadina in the 2011 election.
This swing is applied to every party in each riding. As this will sometimes result in total support of more or less than 100%, the numbers are adjusted upwards or downwards proportionately to equal exactly 100%.
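A minimal implementation of proportional swing with the renormalization step might look like this (party names are placeholders):

```python
def proportional_swing(riding_prev, region_prev, region_now):
    """Scale each party's previous riding result by the ratio of its
    current regional polling to its previous regional result, then
    renormalize so the riding totals exactly 100%."""
    raw = {p: riding_prev[p] * region_now[p] / region_prev[p]
           for p in riding_prev}
    total = sum(raw.values())
    return {p: v * 100.0 / total for p, v in raw.items()}

# A party at 20% regionally last time, now polling 40%, sees its
# riding-level support doubled before renormalization.
```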

This model is in contrast to the uniform swing method popular in the United Kingdom. With that method, in the example of Trinity-Spadina, the NDP's increase of 7.4 percentage points in Ontario would have simply been added to the NDP's result in 2008 in Trinity-Spadina, estimating that the party would have captured 48.3% of the vote instead of 57.7%, as proportional swing would suggest. In this one case, that would put the error of uniform swing at about double the error using the proportional swing method.
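The two methods can be compared directly. The riding and Ontario-wide figures below are back-derived from the percentages quoted above, so the proportional result lands within rounding of the 57.7% cited:

```python
def uniform_swing(riding_prev, swing_points):
    """Add the regional point change directly to the riding result."""
    return riding_prev + swing_points

def prop_swing(riding_prev, region_prev, region_now):
    """Scale the riding result by the regional ratio."""
    return riding_prev * region_now / region_prev

# Trinity-Spadina, NDP (approximate inputs): 2008 riding result ~40.9%;
# Ontario-wide NDP support moved from ~18.2% to ~25.6%.
uniform = uniform_swing(40.9, 25.6 - 18.2)   # ~48.3%
proportional = prop_swing(40.9, 18.2, 25.6)  # ~57.5% with these rounded inputs
```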

The proportional swing method is a better estimation of how support changes between elections, reflecting that a party with a large base of support in a riding is more likely to grow by larger proportions than a party with no real support. It can also perform well when parties make major gains - with the actual provincial results of the 2011 federal election plugged into the model, it would have projected 60 seats for the NDP in Quebec to four for the Bloc Québécois, instead of the actual result of 59 to 4.

Taking other factors into account

The swing model alone, however, cannot capture the individual characteristics of each riding. Other factors need to be taken into account.

Incumbency is the most important factor, as it applies to every riding and can have a significant effect. My own research shows that support for incumbents is far more resilient than for other candidates, and that when parties do not have incumbents on the ballot they suffer a serious loss in support. That drop equals about 10% of what the party managed in the previous election, resulting in a slip of anywhere from four to six points (all else being equal). But the incumbency effect is also determined by how a party is doing overall. My research shows that an incumbent retains more of their vote when their party's support is dropping in the region. It also shows that incumbents who have been re-elected at least once make smaller gains when a party's support is increasing in a region, while incumbents running for re-election for the first time tend to out-perform their own party's gains.

This would seem to be a reflection of the difference between a first-time incumbent and a veteran incumbent. A veteran has a more solid base of support that is harder to move in either direction, whereas a sophomore is now a much safer bet compared to when they first ran for election. They have a record of winning, whereas in the previous election they had none.

Accordingly, when a party is losing support incumbents are given a "bonus" usually worth three to five points, while when a party is gaining sophomores are given a bonus worth about one to two points while veterans are penalized by about that much. When the incumbent is not running for re-election, the party is penalized accordingly.
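The incumbency rules described above can be sketched as a simple adjustment function. The point values are the approximate ones given in the text, not the model's exact parameters:

```python
def incumbency_adjustment(riding_share, party_trend, incumbent):
    """Adjust a riding-level projection for incumbency.
    `party_trend` is the party's regional change in support (points);
    `incumbent` is None, "sophomore", or "veteran". The point values
    are illustrative approximations of the rules described above."""
    if incumbent is None:
        # No incumbent on the ballot: lose ~10% of the previous result.
        return riding_share - 0.10 * riding_share
    if party_trend < 0:
        # Incumbents hold on when the party is falling: ~3-5 point bonus.
        return riding_share + 4.0
    if incumbent == "sophomore":
        # First-time re-election: out-performs the party's gains.
        return riding_share + 1.5
    # Veteran: lags the party's gains by about the same amount.
    return riding_share - 1.5
```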

The effect of having star candidates on the ballot is the largest bonus of the projection model. Star candidates improve their party's performance in the vast majority of cases, though the classification of star candidates is one of the purely subjective aspects of the model, as I have to determine whether a candidate should be considered a "star" or not. This is usually quite obvious, and one of the biggest determinant factors is whether a candidate is widely considered as a star in the media, which has its own effect on how the candidate is perceived by voters. Star candidates are usually former MPs or cabinet ministers, party leaders, or well-known figures from the private sector.

Floor crossing is a difficult factor to take into account, as the amount of support a sitting MP or provincial representative brings with them can vary dramatically. But an analysis of past cases shows that the effect can be very large, with the floor crossing candidate able to increase their new party's support in a riding by about half, while the other party's support drops by about a quarter.

The presence of independents can also be difficult to model. If an independent politician is running for re-election as an independent, their vote is dropped by about one-eighth from the previous election, as has occurred in other cases. The same penalty is applied to popular independent candidates who were never elected. Politicians who left or were forced out of their party caucuses and are running for re-election as independents are treated differently. Based on an analysis of previous cases, these candidates take a proportion of their vote share from the previous election based on the circumstances of their departure from caucus. Those who depart for positive reasons retain much more of their support than those who leave in disgrace. When the circumstances are hard to define, an average proportion is used. Those votes come directly from the party the candidate left.

By-elections are also taken into account. When the result of a by-election was significantly different from the results of the previous general election, the proportional swing is applied to the by-elections results based on how current polling levels differ from where the parties stood in the polls at the time of the by-election.

The particularities of an election

When necessary, the projection model takes into account the individual particularities of an election campaign. One common particularity is the presence of a new party, or a formerly fringe party running a full (or almost full) slate of candidates.

When a party is running candidates where they did not have a name on the ballot in the previous election (whether that be limited to a handful of ridings, as often occurs with smaller parties, or in the bulk of ridings, as occurred in the 2012 election in Alberta for Wildrose), the regional vote projection for the party is applied directly to the riding. For example, if a party is polling at 20% in a region it will be projected to have 20% in each riding in that region. However, that number can be adjusted by any of the factors listed above and is always adjusted when the model makes all of the projections add up to 100%. In this example, in ridings where there is little room for the party to have 20% their vote will be adjusted downwards. When there is a lot more room, the vote will be adjusted upwards. This system performed well when the real results of the 2012 Alberta election were applied: Wildrose would have been projected to win 18 seats (instead of the actual result of 17).

Likely seat ranges

In order to take into account error in the polling and in the seat projection model itself, the vote projection ranges are used to determine likely seat ranges. These are applied directly to each party's projected results in each riding. For example, if the high projected vote for a party in a given region is 5% higher than the most likely projection, then the projected vote for the party in each riding in that region is increased by a factor of 1.05. How these high and low results for each party in each riding compare determines whether a seat is "in play". If the projected high result for a party in a riding is higher than the projected low result for the party expected to win the seat, the seat is then potentially winnable for the trailing party.
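Sketched in code, the riding-level ranges and the "in play" test look like this (the 0.95/1.05 factors are illustrative):

```python
def riding_range(projected, low_factor, high_factor):
    """Scale a riding projection by the regional low/high vote factors,
    e.g. a regional high 5% above the central projection gives 1.05."""
    return projected * low_factor, projected * high_factor

def in_play(leader_low, challenger_high):
    """A seat is in play if a trailing party's high beats the
    expected winner's low."""
    return challenger_high > leader_low

low, high = riding_range(40.0, 0.95, 1.05)  # 38.0 to 42.0
```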

This gives the seat projection a confidence interval, based on likely results if the polls are accurate. As the vote projection range is dependent on the size of the "sample" of the projection, the more polls there are the narrower the seat projection range will be.

Probability of a correct call

One new feature added to the model for 2013 is the probability that a call made by the seat projection model will be correct. This is based on an analysis of the seat projection model's performance in the eight elections in which it has made projections for individual ridings. This probability is determined by the margin the projection model estimates the winner will win by. The following chart tracks how the projection has performed in the past, based on the projected winning margin in each riding.
Each red dot shows the percentage of calls that were correct, while the blue line shows the general trend. As the data is somewhat noisy (but the trend is still clearly visible), the trendline has been used to determine what probability of a correct call should be applied to every riding.

If the riding projection shows that a party leading in a riding by 12 points has a 73% chance of winning, that means that based on past performance the model will be right about 73% of the time when it chooses a winner by a margin of 12 points. It does not mean that there is a 73% chance that the projection for every party in the riding will be correct, or that the trailing party has a 27% chance of winning (a third place party could win as well). It is referring to the odds that the party projected to win will win.
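One way to implement this, assuming the trendline is read off as a table of (margin, hit rate) points, is simple linear interpolation. All values in the table below other than the 12-point/73% example are hypothetical:

```python
def correct_call_probability(margin, history):
    """Linearly interpolate the historical hit rate for a projected
    winning margin. `history` is a sorted list of (margin, hit_rate)
    points read off the trendline; values here are hypothetical."""
    if margin <= history[0][0]:
        return history[0][1]
    if margin >= history[-1][0]:
        return history[-1][1]
    for (m0, p0), (m1, p1) in zip(history, history[1:]):
        if m0 <= margin <= m1:
            return p0 + (p1 - p0) * (margin - m0) / (m1 - m0)

# Hypothetical trendline points, except (12, 0.73) from the example above.
trend = [(0, 0.50), (5, 0.62), (12, 0.73), (25, 0.90), (40, 0.99)]
p = correct_call_probability(12, trend)  # 0.73
```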

But this is not a forecast of the future; it is the confidence that can be placed on a call if the election were held on the day of the projection. Forecasting the future is an entirely different matter.

Forecasting the popular vote for a future date

Having a background in History, I am partial to the idea that the past can tell us a lot about the future. The forecasting model is based entirely on this premise.

In addition to the projection, the model also gives the plausible high and low results each party might be able to manage by election day. Unless the polls begin fluctuating wildly, the margin narrows as Election Day approaches.

The ranges are determined by measuring the degree of polling volatility in the past, with the period examined being equal to the amount of time before the next election. For example, if the next election is scheduled for 160 days from the date of the projection (determined by the last day of polling in the model), the ranges will be determined by the difference between the highest and lowest poll result for each party over the last 160 days. This is a measure of what kind of change in support is plausible based on how much that support has changed in the past. Of course, what is plausible does not equal what is possible - in theory, a party can get anywhere from 0% to 100% of the vote. That a party at 5% could win 75% of the vote six months from now is possible, and vice versa. It is not plausible, however, if that 5% has varied by only three points in the previous six months of polling.
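A simplified sketch of the volatility lookback follows. This returns the raw high and low poll results over the window rather than the model's exact range construction, and the minimum window is illustrative:

```python
def forecast_range(poll_history, days_out, min_window=30):
    """High/low forecast for a party: the spread between its best and
    worst poll results over a lookback window equal to the time left
    before the election. `poll_history` is a list of (days_ago, share)
    pairs; `min_window` is an illustrative floor on the lookback."""
    window = max(days_out, min_window)
    recent = [share for days_ago, share in poll_history if days_ago <= window]
    return min(recent), max(recent)

# Illustrative polling history: 160 days out, the 200-day-old poll
# falls outside the window.
history = [(10, 44.0), (40, 47.5), (90, 41.0), (200, 52.0)]
low, high = forecast_range(history, days_out=160)  # (41.0, 47.5)
```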

This forecasting also cannot take into account completely exceptional events, but it is a best estimate of the plausibility of a party gaining or losing a certain amount of support. There is a minimum amount of days the forecasting model will look back to find poll volatility, calculated to have about 91% confidence. That is to say, 91% of the time the vote share a party takes in an election will be within the high and low forecasted ranges.

The forecast is continually updated as a measure of what should be expected, based on current information. Accordingly, it will change and the forecasts six months from an election may not overlap with the forecasts one month from the election. It is, instead, a best guess of what to expect based on what we know now.

Forecasting the likely seat results for a future date

In the same way that the seat projection model gives a range of likely outcomes based on the vote projection, the seat forecasting model gives a likely range of outcomes based on the vote forecast. These, of course, vary widely when the election date is far away. They give the range of plausible outcomes for the next election based on the information available to us right now.

Note that the seat forecast is not the same kind of assessment as the seat projection in terms of what seats are at play. In the case of the first projection for British Columbia, the Conservatives are given a high forecast of 23 seats and the Greens of six seats. This does not mean that we should expect the Conservatives and Greens to win this many seats. In the case of the Conservatives, it means that if the party does end up at 25.8% support (their forecasted high, which would almost certainly mean the B.C. Liberals have dropped considerably), they could win as many as 23 seats. In the case of the Greens, it means that if the party ends up at 16% support (their forecasted high) they would be in play in as many as six seats. The one is dependent on the other. The Greens are not going to be in play in six seats at 7% support or even the high projected result of 8.7% support in the first projection. The model considers that they are in play in no seats at current polling levels.

The probability of winning an election held immediately

Another new feature of the model is the ability to calculate the probability of a party winning the next election as of the date of the projection.

This is based on the performance of the projection model in the past. In short, it determines the probability that the amount of error in the seat projection will be less than the margin between the leading party and other parties. In the case of the first projection for British Columbia, the margin of 40 seats between the Liberals and NDP has been overcome in only 2.4% of cases. That means that if this was the final projection call before the election, the NDP would have a 97.6% chance of winning it. This takes into account the potential for an Alberta-sized error, but calling a winner by 40 seats would be right virtually all of the time.

The probability of winning a future election

The greater challenge lies in being able to predict the probability that a party will win an election at a future date based on current information. The seat forecast shows what range of outcomes are plausible, but not which are most likely to occur.

After analyzing almost 6,000 pieces of data from polls conducted in over 20 federal and provincial elections since 2004, I have been able to determine the probability that the margin between any two parties can be overcome in a given period of time. As this analysis was based on the difference between polls and final outcomes, it takes into account both the past amount of error in the polls as well as the degree of real change that has occurred in voting intentions.

Using this model suggests how likely it is, to use the first British Columbia projection as an example, for the B.C. Liberals to overcome a 19-point deficit six months from the election, based on how often this sort of shift has taken place in the past. In this case, this sort of margin has been overcome in the six months prior to an election only 4.3% of the time, giving the NDP a 95.7% chance of winning the popular vote in May 2013 based on the polling of November 2012.

The calculations do take into account the role played by third parties and other parties, when they have a significant level of support. When the margin between the leading party and third or other parties is very large, the model assumes a (nearly) 0% chance that they could win. When three or more parties are a factor, the probability is calculated accordingly.

Calculating the probability of a party winning a future event should be very familiar to readers of Nate Silver's FiveThirtyEight blog. In his excellent book, The Signal and the Noise, Silver includes a chart showcasing the probability of a Senate candidate winning an election based on their polling on a given date. This gave me the opportunity to check my math, as it were. Here is a comparison of FiveThirtyEight's probability ratings to the ones that are employed by ThreeHundredEight:
As you can see, the probabilities are almost identical in most cases, though mine are often slightly lower than FiveThirtyEight's. This may be a reflection of the complications caused by our three-or-more-party system. The numbers in the above chart assume a two-party race, which of course is not always the case in Canada. But it is also possible to use these numbers to calculate the odds of a party winning in a multi-party race.

This calculation is based solely on the probability of a party winning the popular vote. In U.S. Senate races, that is also what determines who wins the election. That is not the case in Canada, where a party can win the most seats with fewer votes. The number of variables at play in determining the winner of the most seats based on popular support in polls months before an election is, of course, enormous. But generally speaking, the party with the most votes will win the most seats, so while the probability of a party winning the popular vote may not necessarily determine their probability of winning the election, it is a close enough proxy. This is particularly the case in British Columbia, where neither the Liberals nor the NDP appear to have an intrinsic advantage in vote efficiency.

Hopefully, this should provide a complete explanation of ThreeHundredEight.com's projection and forecasting methodology during and in the run-up to election campaigns.