2020 American Association for Public Opinion Research Annual Conference
Catch up on the latest RTI International innovations at the American Association for Public Opinion Research (AAPOR) virtual conference, held June 11-12, 2020.
AAPOR is the leading professional organization of public opinion and survey research professionals in the U.S., and its annual conference is one of the largest gatherings of data researchers and survey methodologists in the world. AAPOR includes members from academia, media, government, the non-profit sector, and private industry, and it believes that public opinion research is essential to a healthy democracy, informed policymaking, and giving voice to the nation's beliefs, attitudes, and desires.
RTI has been an active member and participant in AAPOR for 60 years, and many of our experts have served in leadership roles in AAPOR committees. Be sure to virtually engage with our experts at the following sessions:
Introduction to the role of big data in social science research
Presenter: Antje Kirchner
Date: Thursday, 6/11/2020 11:00 AM
Session: Concurrent Session A
Abstract: The social sciences, and survey research in particular, are at a crossroads, transitioning from the third era of survey research described by Groves (2011) to the emergence of a fourth era that increasingly incorporates alternative data sources and methods. Specifically, we see a reimagining of survey research that leverages new data sources and methods to offer more holistic insights into public opinion and social science phenomena and to improve the efficiency of traditional data collection, processing, and analysis…This session illustrates how various big data sources outside of the survey ecosystem, such as data from registers, social media, apps, and sensors, as well as paradata, are being used as an adjunct to every part of the survey research process, including questionnaire design and evaluation; sampling; tailored survey designs and data collection; and weighting adjustment and analysis.
RDD to ABS: Experiences of a Pilot Study to Transition the Ohio Medicaid Assessment Survey
Presenter: Marcus Berzofsky
Date: Thursday, 6/11/2020 11:00 AM
Session: Concurrent Session A
Abstract: Surveys that utilize a Random Digit Dialing (RDD) frame have become increasingly inefficient over the past 10 years, with response rates often dropping into the single digits. Therefore, many survey designers have had to transition their surveys to an alternative frame that maintains the low-cost benefits, compared to in-person surveys, provided by the RDD frame. One alternative being investigated by many researchers is an address-based sampling (ABS) frame…In this paper, we present the results of an ABS pilot study of the 2019 Ohio Medicaid Assessment Survey (OMAS) and compare the results to the 2019 RDD OMAS sample, the main fielding. OMAS is a general population survey that examines access to health care and health outcomes at the statewide, regional, and county levels. We conducted the pilot in 5 Ohio counties to understand county-level differences that may occur when a survey transitions from an RDD to an ABS frame. Unlike similar state health surveys, we offered both web and paper response options, using Choice+ to incentivize web responses. We present comparisons of the RDD and ABS samples by: (1) respondent characteristics, (2) response rates, and (3) key outcomes of interest.
Machine Learning and Grid-Based Sampling Experience in India
Presenter: Jamie Cajka
Date: Thursday, 6/11/2020 01:15 PM
Session: Concurrent Session B
Abstract: Grid-based sampling can be effective in developing a representative sample when existing frames are poor or absent, as is often the case in low- and middle-income countries. RTI International’s (RTI) Geosampling is a grid-based sampling methodology that divides a study area into ~1km x ~1km grid cells. Each grid cell contains a population estimate, which allows probability proportional to size (PPS) selection to be used. Each 1km grid cell is further subdivided into smaller grid cells whose size depends on the degree of urbanicity. A full census is performed in the area defined by the smaller grid cell. RTI conducted a survey on social, religious, and political topics in the state of Uttar Pradesh, India, in 2019. The methodology used was a combination of random walk and a modified version of RTI’s Geosampling. For the Uttar Pradesh survey, RTI generated grid cells from the WorldPop population grid, which has a spatial resolution of 100 m.
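To make the PPS step concrete, here is a minimal sketch in Python of selecting grid cells with probability proportional to their estimated populations. The cell IDs and population counts are invented for illustration; this is a simplified sketch of the general technique, not RTI's Geosampling implementation.

```python
import random

# Hypothetical ~1km x ~1km grid cells with estimated populations.
# All values are invented for illustration only.
cells = {
    "cell_001": 1250,
    "cell_002": 430,
    "cell_003": 2890,
    "cell_004": 760,
}

def pps_sample(population_by_cell, n_draws, seed=42):
    """Select grid cells with probability proportional to size (PPS),
    drawn with replacement here for simplicity."""
    rng = random.Random(seed)
    ids = list(population_by_cell)
    sizes = [population_by_cell[i] for i in ids]
    return rng.choices(ids, weights=sizes, k=n_draws)

# Cells with larger estimated populations are more likely to be selected.
print(pps_sample(cells, n_draws=2))
```

In a full design, each selected cell would then be subdivided and enumerated, as the abstract describes.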
Changes in Reporting over Waves in the Consumer Expenditure Survey
Presenter: Stephanie Eckman
Date: Thursday, 6/11/2020 01:15 PM
Session: Concurrent Session B
Abstract: Motivated misreporting occurs when respondents give incorrect answers to questions in order to shorten a survey interview. Previous research has shown that motivated misreporting occurs across many modes, topics, and countries. Those studies have often relied on questions about purchases to elicit the motivated misreporting effect. This paper tests for the presence of this measurement error in the U.S. Consumer Expenditure Survey. Data from this survey inform the calculation of the Consumer Price Index in the U.S., among other uses.
Methods for Household Sample Selection in Low- and Middle-Income Countries: An Evaluation of “Geosampling” in India
Presenter: Charles Lau
Date: Thursday, 6/11/2020 03:00 PM
Session: Concurrent Session C
Abstract: We evaluate an innovative sampling technique for household surveys called “geosampling,” which leverages advances in geographic information systems (GIS), computer vision algorithms, and satellite imagery. Geosampling achieves a probability sample while avoiding the pitfalls of other sampling techniques used in low- and middle-income countries. We have developed and refined the geosampling methodology over five years through 50,000 household interviews in 11 countries…Using an experimental design, we conducted two household surveys in Uttar Pradesh, India: one using geosampling and another using the standard method of sampling households (called “random walk”). Our analysis compares geosampling and random walk along three dimensions: (1) performance indicators (response rates, contact attempts, cost); (2) bias and variance in socio-demographic estimates; and (3) survey estimates of substantive variables…Before AAPOR, we will extend this analysis and discuss geosampling’s strengths and weaknesses as a method for sampling households in low- and middle-income countries.
The leader, the helper, and the reluctant participant: An experiment on persuasive cover letter techniques
Presenter: Rebekah Torcasso Sanchez
Date: Thursday, 6/11/2020 03:00 PM
Session: Concurrent Session C
Abstract: This paper aims to fill a gap in the literature by examining the impact of different messaging strategies on response rates for two physician surveys. We conducted two concurrent experiments. For both experiments, we hypothesized that an appeal to altruism would result in a higher response rate than an appeal to expertise. The second experiment tests our hypothesis that switching the type of appeal in the third wave of contact attempts would boost response rates. The first experiment was conducted with web survey respondents, comprising physicians who are members of the professional medical network Doximity (n > 215,000), and the second with respondents to a mixed-mode survey of physicians (n = 4,700). To test our hypotheses, we compare response rates and survey completion time by physician specialty subgroups. This work aims to identify whether different types of persuasive messaging can boost response rates, and it contributes to the literature on leverage-saliency theory.
Informed Consent and User Experience in a Redirected Inbound Calling Sample with Interactive Voice Response Data Collection
Presenter: Burton Levine
Date: Thursday, 6/11/2020 03:00 PM
Session: Concurrent Session C
Abstract: Redirected Inbound Call Sampling (RICS) is a nonprobability telephone sampling methodology in which callers to toll-free numbers not in normal use are redirected to a screening, recruitment, and data collection system. In RICS, as in all studies involving human subjects, the ethical treatment of sampled individuals is an overarching concern. RICS needs to be evaluated to determine which operationalizations fail to achieve the standard of ethical treatment of survey respondents described in the AAPOR document The Code of Professional Ethics and Practices (revised 11/30/2015), and under which operationalizations that standard is achieved. In this paper, we address a knowledge gap on how implementation decisions affect the experience of selected sample members in a RICS survey where screening, recruitment, and data collection are all conducted with an interactive voice response (IVR) system. We fielded five versions of an informed consent script. We examine the relationship between the informed consent script version and whether respondents reported placing the call about an emergency and, if so, whether the emergency was life-threatening.
Identifying Places of Interest from Always-on Location Data
Presenter: Rob Chew
Date: Friday, 6/12/2020 11:00 AM
Session: Concurrent Session D
Abstract: As survey costs increase and response rates decrease, researchers are looking for alternative methods to collect data from study subjects. Because they are collected without subject involvement, passive data may offer a way to reduce the burden on research subjects while also collecting high-quality data needed for social science research. Examples of passive data collection tools are applications installed on mobile devices and sensors in subjects’ homes or worn on the body. In this study, we focus on always-on location data collected from subjects’ iPhones. To explore the promise of passive data to augment and improve survey data, we conducted a 2-week pilot study with 24 subjects. We discuss the utility and promise of always-on location data and the challenges researchers may encounter when they incorporate location data in their analyses…These results will be relevant to researchers considering incorporating passive data into their studies as a way to reduce the burden of survey data collection.
Respondent Receptivity to Electronic Incentives in a Multi-Mode Survey: Unraveling Multivariate Collinearity
Presenter: Adam Kaderabek
Date: Friday, 6/12/2020 11:00 AM
Session: Concurrent Session D
Abstract: Post-incentives are an established tool for increasing survey response, and electronic incentives (e.g., prepaid Visa codes) promise economy, accessibility, and immediacy. Analysis of respondents’ likelihood to elect remuneration in the form of an electronic incentive for their participation in the National Recreational Boating Safety Survey (NRBSS; n > 250,000), a multi-mode survey about boating safety, has shown certain demographic variables to be associated with selection outcomes…We will present regression models of the confounding relationships among gender, age, income, and the mode of survey completion, and how those factors affect whether a respondent will choose an electronic incentive or no incentive following survey participation. Our variables of interest are commonly associated with respondents’ mode preferences and often exhibit collinearity in statistical analyses of other survey outcomes. We will discuss the efficacy of electronic incentives, articulating the influence of key aspects of choice architecture as they relate to a respondent’s propensity to participate.
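As a rough illustration of the kind of regression analysis described above, the sketch below fits a logistic model of incentive acceptance on demographics and completion mode. The variable names and simulated data are invented for illustration and are not NRBSS fields or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate illustrative data; none of this reflects actual NRBSS data.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(21, 80, size=n),
    "income": rng.normal(60, 20, size=n),            # in $1,000s (invented)
    "mode": rng.choice(["web", "mail", "phone"], size=n),
})
# Assumed propensity: younger, web-mode respondents accept more often.
linpred = 1.5 - 0.03 * df["age"] + 0.8 * (df["mode"] == "web")
df["accepted_incentive"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

# Logistic regression of incentive acceptance on age, income, and mode.
model = smf.logit("accepted_incentive ~ age + income + C(mode)", data=df).fit()
print(model.summary())
```

In practice, collinearity among predictors like those the abstract mentions would complicate interpretation of the individual coefficients, which is the issue the presentation unravels.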
Use of Responsive Design with a Calibration Sample to Optimize Incentive Structure in a Cross-Sectional National Survey
Presenter: Emilia Peytcheva
Date: Friday, 6/12/2020 11:00 AM
Session: Concurrent Session D
Abstract: We examine the utility of alternative initial incentive designs and related nonresponse follow-up strategies in a calibration sample for the 2019-20 National Postsecondary Student Aid Study (NPSAS:20), sponsored by NCES. NPSAS is a nationwide repeated cross-sectional study tracking how students and their families pay for postsecondary education. The main mode of data collection is web, supplemented with telephone for nonresponse follow-up. The NPSAS:20 calibration sample launches 10 weeks in advance of the main data collection with approximately 6,000 students randomly selected from about 60 institutions. We test several variations of an incentive plan, with random assignment to multiple conditions implemented at different points in time: (1) a small prepaid incentive, (2) offering half of the promised incentive amount and doubling it in a later phase, and (3) prepayment of part of the main incentive.
Enhancing analytic capacity through data integration of national survey and nonprobability samples
Presenter: Steven Cohen
Date: Friday, 6/12/2020 11:00 AM
Session: Concurrent Session D
Abstract: The quality and content of national population-based surveys are enhanced through integrated designs that link additional medical, behavioral, environmental, socio-economic, and financial content from multiple sectors. A recent effort by the Committee on National Statistics of the National Academy of Sciences is serving as a catalyst for future national data integration efforts, as indicated in its recent report, Federal Statistics, Multiple Data Sources, and Privacy Protection: Next Steps. These integrated data platforms would include content drawn from nonprobability-based samples to enhance analytic capacity…In this presentation, we focus on an alternative framework to account for the limitations of estimates derived from nonprobability samples. Examples are provided using data from the Medical Expenditure Panel Survey (MEPS), the National Health Interview Survey (NHIS), and cancer patient-level phase III clinical datasets.
Crowdsourcing to recruit hard-to-reach populations: Evidence from recruiting military veterans for cross-sectional and longitudinal survey research
Presenter: Y. Patrick Hsieh
Date: Friday, 6/12/2020 11:00 AM
Session: Concurrent Session D
Abstract: Conducting surveys with military veterans is challenging, from accessing representative samples to gaining adequate response rates that yield valid, data-based conclusions. US military veterans are often an operationally hard-to-reach population for survey research. This study addresses the issue of sample access while also piloting an innovative recruitment strategy for increasing the response rates and sample representativeness of veteran surveys using Amazon’s Mechanical Turk (MTurk). We began constructing the sample in 2018, recruiting military veterans from MTurk for research exploring veterans’ health behaviors and healthcare utilization. Our effort has produced a diverse online sample of more than 600 unique veterans who passed our validation questions, with about 150 retained for follow-up surveys over time. We assess how well the sample represents the demographics of military veterans currently living in US households and compare the survey results to corresponding estimates from national surveys of veterans.
How Convenient are Different Convenience Samples for (Cognitive) Pretesting?
Presenter: Shauna Yates
Date: Friday, 6/12/2020 01:15 PM
Session: Concurrent Session E
Abstract: Pretesting is often used to improve question wording, understand sample members’ motivation to participate in a survey, or test data collection protocols. Due to resource constraints, including budget and time, pretesting is often done using convenience samples, which can raise concerns about generalizability for the main study…We present pretesting results from the 2019-20 National Postsecondary Student Aid Study (NPSAS:20) regarding the survey instrument and data collection materials, using different methods of recruitment and modes of administration. We compare results from three approaches: (1) online surveys with debriefing questions (n = 258), where participants were recruited using lists from commercial vendors and online/print advertising, with subsequent in-person focus groups (n = 42); (2) an online survey with debriefing questions using a crowdsourcing platform (n = 1,000); and (3) online focus groups recruited mostly via social media (n = 47). This presentation evaluates the representativeness and measurement properties of these different pretesting approaches and how they compare to the main study.
Experimentally Testing a Mixed Mode Study Design in a Survey of Physicians
Presenter: Rebecca Powell
Date: Friday, 6/12/2020 01:15 PM
Session: Concurrent Session E
Abstract: In surveys of physicians, research indicates that physicians favor responding via mail even when offered a web mode (e.g., Geisen et al., 2013; Taylor and Scott, 2019). However, there is recent evidence that some physician subgroups, such as younger physicians, prefer web over mail (Taylor and Scott, 2019). As technology trends change rapidly, researchers who conduct surveys of physicians must adapt to sample members who increasingly prefer to respond online. This paper adds to the literature on changing technology trends in surveys of physicians by presenting the results of a push-to-web experiment implemented in the 2020 Best Hospitals Physician Survey (n = 4,700). In this experiment, physicians were randomly assigned to receive either (1) the traditional mail survey or (2) a push-to-web survey…This paper examines response rates, respondent subgroups, and completion costs, as well as implications for future research, to determine the effectiveness of a push-to-web study design for physicians.
The ABCs of Additional Questions: Attrition, Bias, and Continuing Impact of Additional Questions in a Survey of Physicians
Presenter: Rebecca Powell
Date: Friday, 6/12/2020 01:15 PM
Session: Concurrent Session E
Abstract: Each year, the Best Hospitals Physician Survey asks physicians for their opinions on which hospitals are the best for patients in their specialty…Since 2005, response rates on the Physician Survey have declined, leading to concerns about potential nonresponse bias in the estimates. To better understand these potential biases, we wanted to include additional questions about physicians’ hospital affiliations and motivations. However, based on previous research, we were hesitant to include the questions for all sample members, especially as the online survey includes the same sample members each year. Therefore, we conducted an experiment to see whether the additional questions would negatively impact response rates. In the 2019 survey (n = 215,321), 50% of the sample received additional questions about hospital affiliation, and response rates did not differ between the two groups. We then examined attrition by asking 50% of sample members in each group from 2019 additional questions about motivation in the 2020 survey. This paper examines response rates and potential biases from both surveys, and the results of this research will be used to inform future Physician Survey designs and may help other researchers understand the impact of adding questions over time.
Learn more about our work in Surveys and Data Collection and Statistics and Data Science.