Explainable artificial intelligence model to predict acute critical illness from electronic health records
Acute critical illness is often preceded by deterioration of routinely measured clinical parameters, e.g., blood pressure and heart rate. Here, the authors develop an explainable artificial intelligence early warning score system for its early detection.
Main authors: Simon Meyer Lauritsen, Mads Kristensen, Mathias Vassard Olsen, Morten Skaarup Larsen, Katrine Meyer Lauritsen, Marianne Johansson Jørgensen, Jeppe Lange, Bo Thiesson
Format: article
Language: EN
Published: Nature Portfolio, 2020
Online access: https://doaj.org/article/b6bf21b3f09e4e2ab17325dc1d6b1879
Similar Items
- The Framing of machine learning risk prediction models illustrated by evaluation of sepsis in general wards
  by: Simon Meyer Lauritsen, et al.
  Published: (2021)
- An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets
  by: Gabriel Ichcanziho Pérez-Landa, et al.
  Published: (2021)
- Untangling hybrid hydrological models with explainable artificial intelligence
  by: Daniel Althoff, et al.
  Published: (2021)
- A Systematic Review of Human–Computer Interaction and Explainable Artificial Intelligence in Healthcare With Artificial Intelligence Techniques
  by: Mobeen Nazar, et al.
  Published: (2021)
- Mitigating belief projection in explainable artificial intelligence via Bayesian teaching
  by: Scott Cheng-Hsin Yang, et al.
  Published: (2021)