There are five major polling firms that publicly release their findings – One News Colmar Brunton, NZ Herald-Digipoll, TV3 Reid Research, Fairfax Ipsos and Roy Morgan. Gavin White notes:
Four of those five polls were around at the 2011 election, with the exception being Fairfax (then conducted by Research International). Although some on the left wing blogs have been critical of the Fairfax poll on the grounds that it was a long way out in 2011, I think that’s manifestly unfair as Ipsos weren’t doing it. That’s like criticising Cadbury for the taste of a Peanut Slab. The most we can say about the Fairfax Ipsos poll in 2014 is that we don’t know how it stacks up historically.
I was interested in what happened to historic polling bias if Fairfax was taken out of the equation, as its results were so far out (both when Research International was running its polling in 2011, and when Nielsen was its pollster in 2008 and 2005).
I’ve looked at the final polling results for the major polling companies from the last three elections (2005, 2008 and 2011), and the results are quite interesting.
In 2011, the average error across the final polls of each of the five main polling companies was:
- National: 3.7% too high
- Labour: 1.2% too low
- Greens: 1.3% too high
- NZ First: 2% too low
In 2008, the average error was:
- National: 1.5% too high
- Labour: Correct (to one decimal place)
- Greens: 1.6% too high
- NZ First: 0.6% too low
In 2005, the average error was:
- National: 0.3% too high
- Labour: 1.4% too low
- Greens: 0.7% too high
- NZ First: 0.5% too high
Overall, across all three elections, the average error across the final polls for each of the five main polling companies (in their different incarnations) was:
- National: 1.8% too high
- Labour: 0.9% too low
- Greens: 1.2% too high
- NZ First: 0.7% too low
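Each “average error” above is simply the mean signed gap between each pollster’s final poll and the election-night result for that party. A minimal sketch of the calculation (the poll figures below are hypothetical, purely for illustration, not actual poll results):

```python
# Sketch of the averaging method used above.
# NOTE: the figures here are HYPOTHETICAL, not real poll results.

def average_error(final_polls, actual):
    """Mean signed deviation of each pollster's final poll from the
    election result. Positive = polls overstated the party ("too high"),
    negative = polls understated it ("too low")."""
    return sum(p - actual for p in final_polls) / len(final_polls)

# Hypothetical final-poll figures for one party from five pollsters,
# against a hypothetical election-night result of 45.0%.
polls = [46.0, 47.2, 45.5, 46.8, 47.0]
actual = 45.0
err = average_error(polls, actual)
print(f"Average error: {err:+.1f} points")  # prints "Average error: +1.5 points"
```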
Now that’s all very well, but what happens if we take into account only those polling companies that are still running publicly released political polls? As Mr White noted, there’s no point in comparing Fairfax’s Ipsos poll results with those of Fairfax’s earlier Research International or Nielsen polling. Fairfax were evidently concerned about the inaccuracies of both prior polling companies, and have now changed to Ipsos. Likewise, 3 News’ polling at the 2005 and 2008 elections was performed by their earlier pollster TNS, whereas since 2011 their pollster has been Reid Research.
So if we exclude the now non-existent public pollsters (all of Fairfax’s polling and the TNS polling from 2005 and 2008), thereby looking only at the track record of the existing polling companies, what happens?
First of all, let’s look at the individual elections.

In 2011, the average error was:

- National: 3% too high
- Labour: 1.1% too low
- Greens: 1.4% too high
- NZ First: 1.8% too low

In 2008, the average error was:

- National: 0.7% too high
- Labour: 1.3% too high
- Greens: 1.5% too high
- NZ First: 0.5% too low

In 2005, the average error was:

- National: 0.6% too low
- Labour: 0.7% too low
- Greens: 0.4% too high
- NZ First: Correct (to one decimal place)
And the total average error across all three elections?
- National: 1.4% too high
- Labour: 0.3% too low
- Greens: 1.2% too high
- NZ First: 1.0% too low
That’s just a 0.5% divergence between the National bloc and the Labour & Greens bloc. Not a hell of a lot.
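For what it’s worth, the 0.5% figure falls straight out of the averages above: the Labour and Greens errors partly offset each other, and the net error for that bloc is then compared against National’s.

```python
# Derivation of the 0.5% bloc divergence from the average errors above.
# Sign convention: positive = final polls overstated the party.
national_err = 1.4    # National: 1.4% too high
labour_err = -0.3     # Labour: 0.3% too low
greens_err = 1.2      # Greens: 1.2% too high

left_bloc_err = labour_err + greens_err    # net +0.9%: bloc overstated
divergence = national_err - left_bloc_err  # 1.4 - 0.9 = 0.5
print(f"Bloc divergence: {divergence:+.1f} points")  # prints "Bloc divergence: +0.5 points"
```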
I commented in my last post that the 2011 divergence between the final polls and the actual election results was likely caused by those polls failing to pick up a sudden last-minute switch from National to NZ First during the teapot tapes saga. The 2011 results for National and NZ First really do look like outliers compared with the other election results.
So one final piece of analysis – what happens if we take the 2005 and 2008 elections, and compare only the poll results for the currently existing public pollsters?
- National: Correct (to one decimal place)
- Labour: 0.3% too high
- Greens: 1.0% too high
- NZ First: 0.3% too low
The left can hardly complain about those figures…
The lesson? Numbers can be arranged and re-arranged to suit whatever thesis one is currently banging the drum for.
Which is why my Poll of Polls doesn’t try to correct bias based on deviations from election day(s). Occasionally Erudite’s Poll of Polls corrects for deviations from the industry average, but that’s as far as it goes. On 20 September, if it turns out the same sort of error rates exist in the major polls as occurred in 2011, then I’ll undoubtedly be taking that on board and trying to factor in a broader polling bias. However, at present, I’m unconvinced…