The short answer is: No.
Here’s the scoop: at the time the survey is done, there’s no way to calculate how accurate it will turn out to be.
Doubtless you’ve heard about
some polls that went awry in spectacular ways. For instance, in late 2012, the
internal polls for the Romney campaign were predicting a clear Republican win. You
know how that turned out.
Of course, some polls done by
and for political candidates have been right on the money.
But here’s one sticky point: inevitably, with so many surveys being done, a few will be really cockeyed, a few will be spot on, and most will land somewhere near the final outcome.
But predictive accuracy is what
everyone wants.
And that’s what modern
pollsters can’t provide. Basically, they’re providing pretty good guesses.
A recent piece on WashingtonPost.com tried to show that current polling generally produces “good quality survey estimates.” It says that the divergence between poll results and actual election outcomes can be measured in single-digit percentages. It also explains that surveyors routinely “weight” their findings (a polite way of saying “cook the numbers”) because they can’t reach a satisfactory random sample of respondents.
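For readers curious what “weighting” actually involves, here is a minimal post-stratification sketch in Python. The age categories, population shares, and responses are all hypothetical numbers invented for illustration; real pollsters weight on many more variables and with far more care.

```python
# Minimal post-stratification sketch (hypothetical numbers throughout).
# Each respondent belongs to one demographic cell; cells that are
# under-represented in the sample get up-weighted to match known
# population shares.

# Assumed population shares for an illustrative age breakdown.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Raw sample: (age_group, answered_yes), skewed toward older respondents
# as phone samples often are.
sample = [
    ("65+", True), ("65+", True), ("65+", False), ("65+", True),
    ("35-64", True), ("35-64", False), ("35-64", False),
    ("18-34", False), ("18-34", False), ("18-34", True),
]

# Share of the sample that falls in each cell.
n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}

# Weight = population share / sample share for the respondent's cell.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Unweighted vs. weighted "yes" estimate.
unweighted = sum(1 for _, yes in sample if yes) / n
weighted = (sum(weights[grp] for grp, yes in sample if yes)
            / sum(weights[grp] for grp, _ in sample))

print(f"unweighted yes: {unweighted:.1%}, weighted yes: {weighted:.1%}")
```

Weighting can pull an estimate closer to reality, but notice what it cannot do: it cannot tell you whether the people who did answer within each category resemble the people who did not.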
That’s another sticky point:
even a highly respected polling organization like the Pew Research Center
reports that, in recent polls, it actually interviewed only 9% of its targeted
sample of adults across America. That is, Pew failed to complete an interview
with 91% of the people it tried to reach. You know, people just don’t answer
their phones anymore…
The WashingtonPost.com piece fails to acknowledge another sticky point. With abysmally low completion rates,
today’s pollsters are refusing to face up to the statistical 800-lb gorilla in
the room:
The mathematical underpinnings of statistical reliability in a survey (the published margin of error and confidence level) are contingent on having a “true random sample” of the population being surveyed. That is, every member of that population must have an equal chance of being selected for the survey.
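To make that concrete, here is the textbook margin-of-error calculation for a reported percentage; it is valid only under exactly that random-sampling assumption. The 1,000 completed interviews are a hypothetical figure chosen for illustration.

```python
# Textbook 95% margin of error for a proportion. Valid only if the n
# respondents are a simple random sample of the target population.
import math

n = 1000   # hypothetical number of completed interviews
p = 0.5    # worst-case proportion (maximizes the error term)
z = 1.96   # z-score for 95% confidence

moe = z * math.sqrt(p * (1 - p) / n)
print(f"nominal margin of error: +/- {moe:.1%}")   # about +/- 3.1%
```

That familiar “plus or minus 3 points” line under a published poll comes from this formula or a close cousin of it. Nothing in the calculation accounts for the selected people who never answered; it simply assumes they would have responded like the ones who did.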
Manifestly, this does not occur in the political polls done today; witness response (“completion”) rates as low as 9%. (Here’s a frame of reference: when I started doing public polling in the late 1970s, with door-to-door interviewers selecting households and respondents at random, we had completion rates in the 75%-80% range.)
If only 9% (or 23%, or whatever low percentage) of the targeted respondents actually complete the interview, there is no way to meaningfully calculate “accuracy” (i.e., statistical reliability, error range) for a survey. No way.
Every survey published today is, de facto, not statistically reliable, even if the published result happens to be close to the real outcome. Of course, experienced pollsters can weight the data to try to estimate the “real” result, but in terms of reliably predicting outcomes (e.g., election results) in advance, especially in close races, modern surveys are close to useless.