MedFuseNet: An attention-based multimodal deep learning model for visual question answering in the medical domain
Abstract: Medical images are difficult to comprehend for a person without expertise. Medical practitioners, who are scarce across the globe, often face physical and mental fatigue due to the high number of cases, which induces human errors during diagnosis. In such scenarios, having an add...
| Main Authors: | Dhruv Sharma, Sanjay Purushotham, Chandan K. Reddy |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | Nature Portfolio, 2021 |
| Online Access: | https://doaj.org/article/52b07af925ff445990dba24717ca49fe |
Similar Items
- Adversarial Learning with Bidirectional Attention for Visual Question Answering
  by: Qifeng Li, et al.
  Published: (2021)
- Question Dependent Recurrent Entity Network for Question Answering
  by: Andrea Madotto, et al.
  Published: (2017)
- ISCHEMIA Trial: Key Questions and Answers
  by: Jose Lopez-Sendon, et al.
  Published: (2021)
- 25 Questions & Answers on Health & Human Rights
  by: World Health Organization
- Enhance Text-to-Text Transfer Transformer with Generated Questions for Thai Question Answering
  by: Puri Phakmongkol, et al.
  Published: (2021)