Using entropy balancing to strengthen an observational cohort study design
Lessons learned from an evaluation of a complex multi-state federal demonstration
Parish, W. J., Keyes, V. S., Beadles, C., & Kandilov, A. M. G. (2018). Using entropy balancing to strengthen an observational cohort study design: Lessons learned from an evaluation of a complex multi-state federal demonstration. Health Services and Outcomes Research Methodology, 18(1), 17–46. https://doi.org/10.1007/s10742-017-0174-z
We conducted an evaluation of a patient-centered medical home demonstration sponsored by the Centers for Medicare & Medicaid Services, using a quasi-experimental pre-post design with a comparison group. Traditional propensity score weighting failed to achieve balance (exchangeability) between the two groups on several critical characteristics. In response, we incorporated a relatively new alternative known as entropy balancing. Our objective is to share lessons learned from using entropy balancing in a quasi-experimental study design. We document the advantages of and challenges with entropy balancing, describe a set of best practices, and present a series of illustrative analyses that empirically compare the performance of entropy balancing with traditional propensity score weighting. We compare the alternative approaches on (i) covariate balance (e.g., standardized differences), (ii) overlap in conditional treatment probabilities, and (iii) the distribution of weights. The comparison of overlap is based on a novel approach we developed that uses entropy balancing weights to calculate a pseudo-propensity score. In many situations, entropy balancing achieves markedly better covariate balance than traditional propensity score weighting methods. Entropy balancing is also preferable because it does not require extensive iterative manual searching for an optimal propensity score specification. However, we demonstrate that there are situations where entropy balancing "fails": in some instances it achieves adequate covariate balance only by dramatically up-weighting a small set of observations, giving them a disproportionately large and undesirable influence.
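The methods summarized above can be illustrated with a short sketch. The Python code below is not the authors' implementation; it is a minimal reconstruction under stated assumptions. It solves the first-moment entropy balancing problem in its standard convex dual form, computes weighted standardized mean differences as the balance diagnostic mentioned in the abstract, and backs out a "pseudo-propensity score" by treating each comparison unit's weight as proportional to its treatment odds p/(1-p), which is one plausible reading of the pseudo-propensity score construction described here (the paper's exact transformation may differ).

```python
import numpy as np
from scipy.optimize import minimize

def entropy_balance(X_treat, X_comp):
    """Entropy balancing on first moments (convex dual formulation).

    Finds comparison-group weights that reproduce the treatment-group
    covariate means while staying as close as possible, in entropy terms,
    to uniform base weights.  Returned weights sum to one.
    """
    target = X_treat.mean(axis=0)            # treatment-group moments to match
    Xc = X_comp - target                     # center comparison covariates on target

    def dual(lam):
        # Dual objective: log of the mean of exp(x_i' lam); its minimizer
        # yields weights whose gradient condition is exact moment matching.
        return np.log(np.mean(np.exp(Xc @ lam)))

    res = minimize(dual, np.zeros(Xc.shape[1]), method="BFGS")
    w = np.exp(Xc @ res.x)
    return w / w.sum()

def standardized_differences(X_treat, X_comp, w_comp):
    """Weighted standardized mean differences (values near zero = balance)."""
    mt = X_treat.mean(axis=0)
    mc = np.average(X_comp, axis=0, weights=w_comp)
    pooled_sd = np.sqrt((X_treat.var(axis=0) + X_comp.var(axis=0)) / 2)
    return (mt - mc) / pooled_sd

def pseudo_propensity(w_comp, n_treat):
    """Implied propensity score from entropy balancing weights.

    ASSUMPTION: under ATT-style weighting a comparison unit's weight is
    proportional to its treatment odds p/(1-p); rescaling the weights so the
    implied odds total n_treat and inverting gives a pseudo-propensity score.
    This is an illustrative reconstruction, not necessarily the paper's rule.
    """
    odds = w_comp * n_treat / w_comp.sum()
    return odds / (1.0 + odds)
```

With simulated or real covariate matrices, these functions can be used to compare the standardized differences and weight distributions obtained under entropy balancing with those from a conventional propensity score model, mirroring the diagnostics described in the abstract.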