By David Hill - 05/18/10 09:58 PM EDT
AAPOR, the American Association for Public Opinion Research, wrapped up its 65th annual conference in Chicago last week. A meeting highlight was the announcement of the membership’s approval of a revised Code of Professional Ethics and Practices. Even if you have never heard of AAPOR and would never consider joining or wearing one of the “AAPOR: Freqs and Geeks” T-shirts being sold at the conference registration desk, this updated code is something you should know about and care about.
The revision that most affects inside-the-Beltway types springs from the Transparency Initiative AAPOR launched after its study of 2008 pre-election polls. You may recall that one polling firm, the Georgia-based Strategic Vision, was reporting suspect numbers. AAPOR asked Strategic Vision, along with 20 other firms, to disclose their methods in polling presidential primaries. The Georgians balked, forcing AAPOR to issue a 2009 statement that “nondisclosure by Strategic Vision LLC was inconsistent with the association’s Code of Professional Ethics and Practices and contrary to basic principles of scientific research.” This snub was evidently a knockout punch, as Strategic Vision seems to have left the “public polling” community.
One particularly noteworthy change is that poll sponsors are being asked to reveal the names of sample suppliers. This is interesting information. For phone surveys, I mostly use Aristotle samples, but occasionally my sampling supervisor tells me that an alternative provider may have a more recently updated sample for a particular state or locality, or that another sampling firm’s data has a higher phone-match rate. We look into it. Little details can make important differences.
The principal reason for the new focus on samples, though, springs from the rise of online surveys with their opt-in panels. Researchers are being asked to reveal methods used to recruit these panels and the nature of incentives or compensation for panel solicitations. This may open up some eyes to potential problems in over-reliance on Web surveys.
In addition, AAPOR is asking that polls released to the public routinely include some items not always found in past releases, including the response rate (or sample disposition data so that response rates can be computed) and data weighting methodologies. I suspect these details are overkill for the “quick and dirty” releases you typically see regarding most political polls. I don’t even see them in most public poll releases. The response rate, of course, is typically so low that it undermines the whole notion of random sampling and the attendant “margin of error.” Low response rates would theoretically nullify claims of error not exceeding plus or minus four percentage points.
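The arithmetic behind that plus-or-minus-four-points claim is straightforward: for a simple random sample, the 95 percent margin of error is roughly 1.96 times the square root of p(1−p)/n, which at p = 0.5 works out to about ±4 points for a sample of 600. A minimal sketch of both calculations follows; the response-rate function is a simplified illustration loosely in the spirit of AAPOR's RR1 outcome rate, not the official formula:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    Uses the worst case p = 0.5 by default, which is the convention
    behind the familiar "plus or minus X points" in poll releases.
    """
    return z * math.sqrt(p * (1 - p) / n)

def response_rate(completes, partials, refusals, non_contacts, unknown):
    """Crude response rate from sample disposition counts.

    Simplified illustration only: completed interviews divided by all
    eligible and potentially eligible cases. AAPOR's Standard Definitions
    specify several formal variants (RR1 through RR6).
    """
    total = completes + partials + refusals + non_contacts + unknown
    return completes / total

# A 600-interview poll yields the familiar +/- 4-point figure.
print(round(margin_of_error(600), 3))   # ~0.040

# Hypothetical disposition counts for illustration.
print(round(response_rate(600, 50, 300, 200, 400), 3))
```

The point of the column stands out in the numbers: the margin-of-error formula assumes every sampled person had an equal chance of responding, an assumption a low response rate calls into question.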
I’m glad that AAPOR is keeping an eye on quality and the standards of the survey industry. As the code says, I can “point with pride” to my membership.
David Hill has been a Republican pollster since 1984. This cycle he is polling for gubernatorial campaigns in four states.