Reductions in statistical power and increased bias when categorizing medication adherence data
Tueller, S. J., Deboeck, P. R., & Van Dorn, R. A. (2016). Getting less of what you want: Reductions in statistical power and increased bias when categorizing medication adherence data. Journal of Behavioral Medicine, 39(6), 969-980. https://doi.org/10.1007/s10865-016-9727-9
Medication adherence is thought to be the principal predictor of positive clinical outcomes, not only for serious mental illnesses such as schizophrenia, bipolar disorder, and depression, but also for physical conditions such as diabetes. Consequently, medication research often examines not only medication condition (e.g., placebo, standard medication, investigational medication), but also adherence in taking those medications within each medication condition. The percentage (or proportion) scale is one of the most frequently employed and most easily interpreted measures: patients can be 0% adherent, 100% adherent, or somewhere in between. For simplicity, many reported adherence analyses dichotomize or trichotomize the adherence predictor when estimating its effect on outcomes of interest. However, the methodological literature shows that categorizing continuously distributed predictors reduces statistical power at best and, at worst, severely biases parameter estimates. This can inflate Type I errors (falsely detecting an adherence effect when none exists) or Type II errors (failing to detect a true adherence effect). We extend the methodological literature on categorization to the construct of adherence. The measurement scale of adherence admits a diverse family of potential distributions, including uniform, n-shaped, u-shaped (i.e., bimodal), positively skewed, and negatively skewed. Using a simulation study, we generated negative, null, and positive "true" effects of adherence on simulated continuous and binary outcomes, and then estimated the adherence effect with and without categorizing the adherence variable. We show how parameter estimates and standard errors can be severely biased when adherence is categorized. Categorization caused null effects to become positive or negative depending on the distribution of the simulated adherence variable, inflating Type I errors; when the true adherence effect was nonzero, categorization could render the estimated effect null, inflating Type II errors. We recommend that adherence be measured continuously and analyzed without categorization when it is used as a predictor in regression models.
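The general mechanics of such a simulation can be sketched in Python. The sketch below is a minimal illustration of the power loss from dichotomizing a continuous adherence predictor, not a reproduction of the paper's design: the Beta-distributed (u-shaped) adherence variable, the 80% cut point, the effect size, and the simple linear outcome model are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_once(n=200, true_beta=0.5, cut=0.8):
    """One replication: regress a continuous outcome on adherence,
    once using the continuous score and once after dichotomizing it."""
    # U-shaped (bimodal) adherence on the proportion scale; illustrative choice
    adherence = rng.beta(0.5, 0.5, size=n)
    outcome = true_beta * adherence + rng.normal(0, 1, size=n)

    # Effect estimated from the continuous adherence score
    res_cont = stats.linregress(adherence, outcome)

    # Effect estimated after dichotomizing ("adherent" if >= 80%); cut point is illustrative
    adherent = (adherence >= cut).astype(float)
    res_cat = stats.linregress(adherent, outcome)

    return res_cont.pvalue, res_cat.pvalue

# Empirical power: proportion of replications detecting the true nonzero effect
n_reps = 2000
pvals = np.array([simulate_once() for _ in range(n_reps)])
print(f"Power, continuous adherence:   {np.mean(pvals[:, 0] < 0.05):.2f}")
print(f"Power, dichotomized adherence: {np.mean(pvals[:, 1] < 0.05):.2f}")
```

Under these assumed settings, the dichotomized analysis detects the true effect less often than the continuous analysis, illustrating the Type II error inflation the abstract describes; bias and Type I error behavior in the paper depend on the specific adherence distributions and outcome types studied there.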