As a former campaign operative, I’ve suffered the fickleness of polling data. It can feel like riding a roller-coaster without a reliable seat-belt. But the current crop of polls of New Hampshire voters strikes me as particularly strange. Leave aside other, non-tracking polls that show even larger differences. How is it that these three can co-exist?
* The Boston Globe/WBZ tracking poll has Sen. Kerry with a lead of 20 to 23 points over Gov. Dean for the past three days. The margin of error is +/- 5 percentage points, with a sample size of 400 likely voters.
* USA Today/CNN/Gallup’s tracking poll has Sen. Kerry with a lead of 11 to 13 points over Gov. Dean for the past three days. The margin of error is +/- 4 percentage points, with a sample size of 970.
* The MSNBC/Reuters/Zogby tracking poll, covering the same three-day period, by contrast, shows a much tighter race — even a statistical dead heat on Sunday (i.e., Kerry up by 3 percentage points over Dean, but within the +/- 4.1 percentage point margin of error; sample size of 601).
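The stated margins of error follow from the textbook formula for a single proportion, ±1.96√(p(1−p)/n), evaluated at the worst case p = 0.5. A quick back-of-the-envelope sketch, using the sample sizes reported above (and assuming simple random sampling, which real polls only approximate), also shows why a 3-point "lead" can be a dead heat: the margin on the *difference* between two candidates' shares is, by the usual rule of thumb, roughly double the margin on either share alone.

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a single proportion, assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for name, n in [("Globe/WBZ", 400), ("Gallup", 970), ("Zogby", 601)]:
    m = moe(n)
    # The margin on the lead (difference of two shares) is roughly 2x the
    # single-share margin -- so a 3-point Zogby lead sits well inside it.
    print(f"{name}: n={n}, MOE ~ +/-{m:.1%}, lead MOE ~ +/-{2*m:.1%}")
```

For n = 400 this gives about ±4.9% (matching the Globe's stated ±5), and for n = 601 about ±4.0% (matching Zogby's ±4.1); on the Zogby lead itself, the margin is closer to ±8 points, which is why a 3-point edge counts as a statistical tie.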
Surely the methodologies are not precisely the same from one poll to the next: different sampling methods, different questions asked, different callers used. Is it possible that the Gallup poll has Kerry ahead by 4 points too many and Zogby has Kerry ahead by 4 points too few? How could the Boston Globe/WBZ poll possibly square with the Zogby findings? Does one of the polls have to be “wrong”? How useful can this batch of information from these various tracking polls really be?