Computational medication regimen for Parkinson’s disease using reinforcement learning
Abstract Our objective is to derive a sequential decision-making rule on the combination of medications to minimize motor symptoms using reinforcement learning (RL). Using an observational longitudinal cohort of Parkinson’s disease patients, the Parkinson’s Progression Markers Initiative database, we derived clinically relevant disease states and an optimal combination of medications for each of them by using policy iteration of the Markov decision process (MDP). We focused on 8 combinations of medications, i.e., Levodopa, a dopamine agonist, and other PD medications, as possible actions, and on motor symptom severity, based on the Unified Parkinson Disease Rating Scale (UPDRS) section III, as the reward/penalty of a decision. We analyzed a total of 5077 visits from 431 PD patients with 55.5 months of follow-up. We excluded patients without UPDRS III scores or medication records. We derived a medication regimen that is comparable to a clinician’s decision. The RL model achieved lower motor symptom severity scores than the clinicians did, whereas the clinicians’ medication rules were more consistent than the RL model’s. The RL model followed the clinicians’ medication rules in most cases but also suggested some changes, which led to the difference in lowering symptom severity. This is the first study to investigate RL to improve the pharmacological approach to PD patients. Our results contribute to the development of an interactive machine-physician ecosystem that relies on evidence-based medicine and can potentially enhance PD management.
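The core method named in the abstract, policy iteration over a finite MDP of disease states and medication-combination actions, can be sketched as follows. This is a minimal illustration, not the study's implementation: the state count, transition probabilities `P`, reward matrix `R` (standing in for negative UPDRS III change), and discount factor `gamma` are all assumed values for demonstration.

```python
import numpy as np

# Illustrative sketch of policy iteration for an MDP with discrete
# disease states and 8 medication-combination actions, as in the abstract.
# All numeric values below are assumptions, not estimates from the study.

n_states, n_actions = 4, 8   # the study used 8 medication combinations
gamma = 0.95                 # discount factor (assumed)

rng = np.random.default_rng(0)
# P[a, s, s']: transition probabilities (in the study, these would be
# estimated from visit-to-visit changes in the cohort data)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
# R[s, a]: expected reward, e.g., negative change in UPDRS III severity
R = rng.normal(size=(n_states, n_actions))

def policy_iteration(P, R, gamma):
    """Return an optimal deterministic policy and its state values."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve the linear system (I - gamma*P_pi) V = R_pi
        P_pi = P[policy, np.arange(n_states)]   # (S, S) rows under current policy
        R_pi = R[np.arange(n_states), policy]   # (S,) rewards under current policy
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: greedy one-step lookahead
        Q = R.T + gamma * P @ V                 # (A, S) action values
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):  # stable policy -> optimal
            return policy, V
        policy = new_policy

policy, V = policy_iteration(P, R, gamma)
print(policy)  # one recommended action (medication combination) per state
```

For a finite MDP, this loop is guaranteed to terminate at an optimal policy, since each improvement step strictly increases the value function until the greedy policy is stable.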
Main Authors: | Yejin Kim, Jessika Suescun, Mya C. Schiess, Xiaoqian Jiang |
---|---|
Format: | article |
Language: | EN |
Published: | Nature Portfolio, 2021 |
Subjects: | Medicine (R); Science (Q) |
Online Access: | https://doaj.org/article/31c5e5c270414017a3711587b00d9e9a |
id |
oai:doaj.org-article:31c5e5c270414017a3711587b00d9e9a |
record_format |
dspace |
doi |
10.1038/s41598-021-88619-4 |
issn |
2045-2322 |
publish_date |
2021-04-01 |
container |
Scientific Reports, Vol 11, Iss 1, Pp 1-9 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
Medicine (R); Science (Q) |
format |
article |
author |
Yejin Kim; Jessika Suescun; Mya C. Schiess; Xiaoqian Jiang |
author_sort |
Yejin Kim |
title |
Computational medication regimen for Parkinson’s disease using reinforcement learning |
publisher |
Nature Portfolio |
publishDate |
2021 |
url |
https://doaj.org/article/31c5e5c270414017a3711587b00d9e9a |