Detection of incomplete data in surveillance and impact of missing data in assessment of effectiveness of financial incentives of viral load suppression in HPTN 065
Donnell, D., Branson, B., El-Sadr, W., & Zangeneh, S. (2017). Detection of incomplete data in surveillance and impact of missing data in assessment of effectiveness of financial incentives of viral load suppression in HPTN 065. Trials, 18(S1), 227. Article O99. https://doi.org/10.1186/s13063-017-1902-y
Background Pragmatic trials often rely on pre-existing data systems to evaluate trial outcomes. HPTN 065 used US HIV surveillance data to evaluate the outcomes of a clinic-randomized strategy trial testing whether financial incentives improve viral load suppression in the Bronx, NY, and Washington, DC.
Methods In a collaborative effort between study and surveillance staff, aggregate clinic outcomes were defined using variables available in surveillance data and aligned with the financial incentive intervention. Outcomes were centrally programmed by surveillance experts for evaluation using local surveillance databases. Pre-trial data on trial outcomes were used to conduct a restricted randomization of the 38 trial clinics, with the goal of achieving balance in pre-trial viral load suppression and in the number of patients per clinic. During the trial, triangulation of data from the clinics, surveillance, and the financial incentive delivery system was used to assess data completeness. Sensitivity analysis and multiple imputation were subsequently used to evaluate: 1) randomization balance based on incomplete pre-trial data in the restricted randomization; 2) the impact of incompleteness in baseline data on efficacy evaluation; and 3) the sensitivity of the trial results to sites with missing data during the primary evaluation period.
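To make the restricted randomization step concrete, the sketch below is a minimal illustration, not the HPTN 065 study code: candidate allocations of the 38 clinics are retained only if the arms are acceptably balanced on pre-trial viral load suppression and on clinic size. The pre-trial values, balance thresholds, and acceptance-pool size are all hypothetical.

```python
# Minimal sketch of a restricted randomization of 38 clinics into two arms,
# assuming hypothetical pre-trial data and balance criteria (not the trial's code).
import numpy as np

rng = np.random.default_rng(2017)
n_clinics = 38

# Hypothetical pre-trial inputs per clinic: suppression proportion and patient count.
pretrial_supp = rng.uniform(0.5, 0.8, n_clinics)
clinic_size = rng.integers(50, 500, n_clinics)

def is_balanced(assign, max_supp_diff=0.02, max_size_ratio=1.2):
    """Accept an allocation only if arm means of pre-trial suppression are close
    and total patient counts are within a tolerated ratio (assumed thresholds)."""
    arm1, arm0 = assign == 1, assign == 0
    supp_diff = abs(pretrial_supp[arm1].mean() - pretrial_supp[arm0].mean())
    size_ratio = clinic_size[arm1].sum() / clinic_size[arm0].sum()
    return supp_diff <= max_supp_diff and 1 / max_size_ratio <= size_ratio <= max_size_ratio

accepted = []
while len(accepted) < 1000:  # build a pool of acceptable allocations
    assign = np.zeros(n_clinics, dtype=int)
    assign[rng.choice(n_clinics, n_clinics // 2, replace=False)] = 1
    if is_balanced(assign):
        accepted.append(assign)

final_allocation = accepted[rng.integers(len(accepted))]  # select one at random
```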
Results Data triangulation throughout the trial revealed missing surveillance data for some clinics in Washington, DC. Evaluation of data inconsistencies and investigation into the causes of incompleteness, together with extensive collaboration with HIV surveillance staff, were largely successful in remediating the missing data for the trial evaluation period. Some baseline data could not be corrected because access to the underlying data had been lost. During the initial evaluation of effectiveness, exploratory analysis of time trends revealed a small number of clinics with more subtle data incompleteness, complicating the evaluation of the effectiveness of financial incentives. Greater imbalance on the restricted randomization factors was observed with the corrected data than with the pre-trial data. Missing data in the baseline outcome assessment decreased the precision of efficacy estimates, with a 57% higher standard error (SE) of the efficacy estimate in DC than in NY. Trial efficacy results were sensitive to missing data: the initial analysis showed an increase in viral load suppression of 3.9% (−3.4%, 11.1%; p = 0.27), which changed to 3.7% (0.5%, 6.9%; p = 0.022) after data completeness was addressed in the Washington, DC HIV surveillance data.
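As a rough check on how the reported intervals, SEs, and p-values relate, the sketch below uses a simple normal approximation to back out an approximate SE and two-sided p-value from each reported 95% confidence interval; it does not reproduce the trial's actual analysis model.

```python
# Minimal sketch, assuming a normal approximation: recover approximate SE and
# two-sided p-value from a point estimate and its 95% confidence interval.
from scipy.stats import norm

def se_and_p(estimate, lower, upper):
    se = (upper - lower) / (2 * 1.96)          # half-width of a 95% CI is ~1.96 SE
    p = 2 * (1 - norm.cdf(abs(estimate) / se))  # two-sided Wald-type p-value
    return se, p

# Reported effects on viral load suppression (percentage points).
before = se_and_p(3.9, -3.4, 11.1)  # before DC surveillance data were remediated
after = se_and_p(3.7, 0.5, 6.9)     # after data completeness was addressed
print(f"before correction: SE ~ {before[0]:.2f}, p ~ {before[1]:.2f}")
print(f"after correction:  SE ~ {after[0]:.2f}, p ~ {after[1]:.3f}")
```

Under this approximation the corrected analysis has a markedly smaller SE, consistent with the abstract's point that incomplete outcome data eroded both precision and the apparent effect.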
Conclusion Program assessments that use external data sources to evaluate outcomes need ongoing exploratory data analysis to understand and monitor data quality and completeness during the trial, as trial results and study power will be affected by problems in the data source. Close collaboration with data source experts is critical to assure the quality and completeness of outcome data.
Special issue: Meeting abstracts from the 4th International Clinical Trials Methodology Conference (ICTMC) and the 38th Annual Meeting of the Society for Clinical Trials