The use of expert elicitation among computational modeling studies in health research
A systematic review
Cadham, C. J., Knoll, M., Sanchez-Romero, L. M., Cummings, K. M., Douglas, C. E., Liber, A., Mendez, D., Meza, R., Mistry, R., Sertkaya, A., Travis, N., & Levy, D. T. (2022). The use of expert elicitation among computational modeling studies in health research: A systematic review. Medical Decision Making, 42(5), 684-703. https://doi.org/10.1177/0272989X211053794
Background: Expert elicitation (EE) has been used across disciplines to estimate input parameters for computational modeling research when information is sparse or conflicting.

Objectives: We conducted a systematic review to compare EE methods used to generate model input parameters in health research.

Data Sources: PubMed and Web of Science.

Study Eligibility: Modeling studies that reported the use of EE as the source for model input probabilities were included if they were published in English before June 2021 and reported health outcomes.

Data Abstraction and Synthesis: Studies were classified as "formal" EE if they explicitly reported details of their elicitation process; those that stated use of expert opinion but provided limited information were classified as "indeterminate" methods. For both groups, we abstracted citation details, study design, modeling methodology, a description of elicited parameters, and elicitation methods, and compared elicitation methods across studies.

Study Appraisal: Studies that conducted a formal EE were appraised on the reporting quality of the EE; quality appraisal was not conducted for studies of indeterminate methods.

Results: The search identified 1520 articles, of which 152 were included. Of the included studies, 40 were classified as formal EE and 112 as indeterminate methods. Most studies were cost-effectiveness analyses (77.6%). Forty-seven indeterminate-method studies provided no information on methods for generating estimates. Among formal EEs, the average reporting quality score was 9 out of 16.

Limitations: Elicitations on nonhealth topics and those reported in the gray literature were not included.

Conclusions: We found poor reporting of EE methods used in modeling studies, making it difficult to discern meaningful differences in approaches. Improved quality standards for EEs would improve the validity and replicability of computational models.
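For readers unfamiliar with how elicited expert judgments become model inputs, the sketch below illustrates one common aggregation approach: each expert's best estimate and 95% credible interval for a probability-type parameter are moment-matched to a beta distribution, and the experts are combined by an equal-weight linear opinion pool. The expert values, the equal weights, and the pooling method are illustrative assumptions for this sketch only; the review does not prescribe any particular elicitation or aggregation method.

```python
import numpy as np

# Hypothetical elicited values: each expert provides a best estimate (mean)
# and a 95% credible interval for a probability-type model input.
experts = [
    {"mean": 0.30, "ci": (0.15, 0.50)},
    {"mean": 0.45, "ci": (0.30, 0.60)},
    {"mean": 0.25, "ci": (0.10, 0.45)},
]

def beta_from_mean_sd(mean, sd):
    """Moment-match a Beta(a, b) distribution to a given mean and standard deviation."""
    common = mean * (1 - mean) / sd**2 - 1
    return mean * common, (1 - mean) * common

rng = np.random.default_rng(seed=0)
samples = []
for e in experts:
    # Normal approximation: a 95% interval spans about 3.92 standard deviations.
    sd = (e["ci"][1] - e["ci"][0]) / 3.92
    a, b = beta_from_mean_sd(e["mean"], sd)
    samples.append(rng.beta(a, b, size=10_000))

# Equal-weight linear opinion pool: mix the experts' distributions by
# concatenating equally sized samples from each.
pooled = np.concatenate(samples)
print(f"Pooled mean: {pooled.mean():.3f}")
print(f"Pooled 95% interval: ({np.quantile(pooled, 0.025):.3f}, "
      f"{np.quantile(pooled, 0.975):.3f})")
```

The pooled sample could then feed a probabilistic sensitivity analysis in a cost-effectiveness model; weighted pools or behavioral aggregation (e.g., Delphi-style consensus) are common alternatives.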