Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.

Objectives: Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on the performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screening burden, as well as offer quality assurance.

Methods: We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric(+), indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests.

Results: All tests of summary performance (mean F3) surpassed the corresponding baseline, P<0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric(+) features were associated with the best performance; the mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall.

Conclusions: A computer-assisted decision-support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming studies associated with discordant decisions for further consideration.

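The abstract describes the core experiment: screened citations are split into two random halves to simulate a two-person team, a Bayesian classifier is trained on one half and tested on the other, and performance is summarized with F3, a recall-weighted F-measure. The snippet below is a minimal sketch of that kind of setup, not the authors' pipeline: it uses scikit-learn's MultinomialNB over plain TF-IDF features and a handful of invented citation strings, and it does not reproduce the paper's five feature sets (alphabetic, alphanumeric(+), indexing, concept-mapped, topic models), parameter optimization, or 50-test design.

```python
# Minimal sketch (illustrative only): half-split training of a naive Bayes
# text classifier with recall-weighted F3 scoring. All citation strings and
# labels are hypothetical; 1 = eligible for the review, 0 = not eligible.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import fbeta_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

citations = [
    ("Randomized trial of drug A for condition X in adults", 1),
    ("Case report: rare adverse event associated with drug A", 0),
    ("Protocol for a systematic review of condition X therapies", 0),
    ("Double-blind RCT comparing drug A with placebo", 1),
    ("Animal model study of condition X signaling pathways", 0),
    ("Multicenter RCT of drug A dosing strategies", 1),
]
texts, labels = map(list, zip(*citations))

# Split the citations into two random halves, loosely mimicking a two-person
# team: a model trained on one half predicts eligibility on the other half.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)

# Plain TF-IDF bag-of-words stands in for the richer feature sets in the paper.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
clf = MultinomialNB()  # a simple Bayesian classifier
clf.fit(vectorizer.fit_transform(X_train), y_train)
y_pred = clf.predict(vectorizer.transform(X_test))

# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); with beta = 3, recall is
# weighted far more heavily than precision, since missing an eligible study
# is costlier than screening a few extra citations.
print("precision:", precision_score(y_test, y_pred, zero_division=0))
print("recall:   ", recall_score(y_test, y_pred, zero_division=0))
print("F3:       ", fbeta_score(y_test, y_pred, beta=3, zero_division=0))
```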

Bibliographic Details
Main Authors: Tanja Bekhuis, Eugene Tseytlin, Kevin J Mitchell, Dina Demner-Fushman
Format: article
Language: EN
Published: Public Library of Science (PLoS), 2014
Subjects:
Medicine (R)
Science (Q)
Online Access: https://doaj.org/article/237d4532199c47e2a8ef93a1cad91da8
id oai:doaj.org-article:237d4532199c47e2a8ef93a1cad91da8
record_format dspace
spelling oai:doaj.org-article:237d4532199c47e2a8ef93a1cad91da8 2021-11-18T08:35:44Z
  Title: Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.
  ISSN: 1932-6203; DOI: 10.1371/journal.pone.0086277
  DOAJ record: https://doaj.org/article/237d4532199c47e2a8ef93a1cad91da8
  Publication date: 2014-01-01T00:00:00Z
  Full text: https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/24475099/pdf/?tool=EBI
  Journal TOC: https://doaj.org/toc/1932-6203
  Authors: Tanja Bekhuis, Eugene Tseytlin, Kevin J Mitchell, Dina Demner-Fushman
  Publisher: Public Library of Science (PLoS); Format: article; Language: EN
  Subjects: Medicine (R), Science (Q)
  Citation: PLoS ONE, Vol 9, Iss 1, p e86277 (2014)
institution DOAJ
collection DOAJ
language EN
topic Medicine
R
Science
Q
spellingShingle Medicine
R
Science
Q
Tanja Bekhuis
Eugene Tseytlin
Kevin J Mitchell
Dina Demner-Fushman
Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.
description Objectives: Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on the performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screening burden, as well as offer quality assurance. Methods: We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric(+), indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests. Results: All tests of summary performance (mean F3) surpassed the corresponding baseline, P<0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric(+) features were associated with the best performance; the mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall. Conclusions: A computer-assisted decision-support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming studies associated with discordant decisions for further consideration.
format article
author Tanja Bekhuis
Eugene Tseytlin
Kevin J Mitchell
Dina Demner-Fushman
author_facet Tanja Bekhuis
Eugene Tseytlin
Kevin J Mitchell
Dina Demner-Fushman
author_sort Tanja Bekhuis
title Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.
title_short Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.
title_full Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.
title_fullStr Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.
title_full_unstemmed Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.
title_sort feature engineering and a proposed decision-support system for systematic reviewers of medical evidence.
publisher Public Library of Science (PLoS)
publishDate 2014
url https://doaj.org/article/237d4532199c47e2a8ef93a1cad91da8
work_keys_str_mv AT tanjabekhuis featureengineeringandaproposeddecisionsupportsystemforsystematicreviewersofmedicalevidence
AT eugenetseytlin featureengineeringandaproposeddecisionsupportsystemforsystematicreviewersofmedicalevidence
AT kevinjmitchell featureengineeringandaproposeddecisionsupportsystemforsystematicreviewersofmedicalevidence
AT dinademnerfushman featureengineeringandaproposeddecisionsupportsystemforsystematicreviewersofmedicalevidence
_version_ 1718421549394952192