Measurement error, unit nonresponse, and self-reports of abortion experiences
Peytchev, A., Peytcheva, E., & Groves, R. M. (2010). Measurement error, unit nonresponse, and self-reports of abortion experiences. Public Opinion Quarterly, 74(2), 319-327. https://doi.org/10.1093/poq/nfq002
Survey designs for producing population prevalence estimates, such as abortion rates, need to consider multiple sources of error. Abortion prevalence has been found to suffer from underreporting, and abortion rates can also be underestimated because of unit nonresponse. It is when these two phenomena are linked, however, that the problem becomes particularly critical for both researchers studying abortion rates and survey research practitioners. For substantive researchers, results may depend on the mix of these survey errors in unexpected ways. For survey methodologists, decreasing one source of error may increase the influence of another and thereby lead to greater bias in estimates. Identifying common causes can indicate when to expect differences in rates due to the mix of nonresponse and measurement error, while helping practitioners design surveys that reduce both sources of error. This article addresses both nonresponse and measurement error in abortion estimates. We found that sample members who were less likely to participate in the survey were also more likely to underreport abortion experiences. We interpret both nonresponse and measurement error as stemming from a common cause: the social stigma that reporting these experiences poses to individuals. Although these results show that naïvely increasing response rates may lead to greater bias in survey estimates, we also find limited evidence that some survey design changes can reduce the link between nonresponse and measurement error. This finding warrants replication so that practitioners can design surveys that break the link between these sources of error.
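To make the trade-off described above concrete, the following is a minimal sketch of a standard bias decomposition for a prevalence estimate affected by both error sources; it is not drawn from the article itself, and the notation (response propensity r_i, true abortion status y_i, reported status among respondents \tilde{y}_i) is assumed for illustration.

% Assumed notation, for illustration only:
%   y_i        true abortion status (0/1) for sample member i
%   \tilde{y}_i  status reported by member i, if she responds
%   r_i        response propensity; \bar{r} is its mean
\[
  E[\hat{p}] \;\approx\;
  \underbrace{\bar{y}}_{\text{true prevalence}}
  \;+\;
  \underbrace{\frac{\operatorname{Cov}(r_i, y_i)}{\bar{r}}}_{\text{nonresponse bias}}
  \;+\;
  \underbrace{E\!\left[\tilde{y}_i - y_i \mid \text{respondent}\right]}_{\text{measurement (underreporting) bias}}
\]

If the same stigma drives both terms, then Cov(r_i, y_i) is negative and the underreporting term is negative as well; recruiting the most reluctant sample members shrinks the nonresponse term but can enlarge the measurement term, which is the mechanism behind the caution against naïvely increasing response rates.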