Recent high-profile challenges to the veracity of published social science research (see here and here) highlight the importance of methodological transparency in academic research. Methodological transparency is the key to scientific integrity and the accumulation of scientific knowledge. As outlined in a recent NSF report, transparency about data collection, management, and analysis is critical to producing research that is reproducible, replicable, and generalizable.
While there is increasing recognition and support for research transparency—including efforts like the Berkeley Initiative for Transparency in the Social Sciences (BITSS)—it’s not always clear what exactly it means to be transparent. We tend to leave it up to well-intentioned researchers to determine what it will take “to permit other scholars to replicate results and carry out similar analyses on other data sets” (from the APSR submission guidelines). The discipline would benefit from journals adopting systematic reporting standards for methodological disclosure. The Experimental Research section of the American Political Science Association has recently taken on this charge, creating a set of recommended reporting standards for experiments to be published in the Journal of Experimental Political Science (here).
As I’ve previously argued (here), a similar set of standards is needed for published survey research. It’s not necessary to reinvent the wheel—journals can borrow from the work of the Transparency Initiative of the American Association for Public Opinion Research (AAPOR) and require adherence to the AAPOR Standards for Disclosure. This includes reporting such information as:
- Survey sponsors, funders, and vendors
- Questionnaire (wording, options, transitions/intro)
- Definition of population, including screening criteria
- Description of sampling frame used to identify population
- Supplier of sampling frame
- Details of sample design, including any quotas or criteria used, sufficient to determine whether probability or nonprobability sampling was used
- Sample sizes
- Adjustments made for clustering or other design effects
- Use and calculation of weights
- Methods, modes, dates of data collection, including languages
- Incentives, follow-up attempts, other procedures
- Nonresponse/attrition rates and how calculated
To be clear, I’m calling for disclosure standards, not standards of practice. It should remain the prerogative of editors, reviewers, and readers to determine whether the methods used justify the knowledge claims being made. At the same time, improved transparency will help highlight significant variation in survey design and data quality. The current academic landscape in political science is dotted with more original data collections than ever before. While this offers exciting opportunities, it also increases the need for methodological transparency.