Random Forest Similarity Maps: A Scalable Visual Representation for Global and Local Interpretation
Machine Learning prediction algorithms have made significant contributions in today's world, leading to increased usage across various domains. However, as the use of ML algorithms surges, the need for transparent and interpretable models becomes essential. Visual representations have been shown to be instrumental in ad...
Saved in:
Main Authors: Dipankar Mazumdar, Mário Popolin Neto, Fernando V. Paulovich
Format: article
Language: EN
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/7a08344bc0154928a55d6d7975855372
Similar Items
- Feature-Based Interpretation of the Deep Neural Network
  by: Eun-Hun Lee, et al.
  Published: (2021)
- Evaluation of the factors explaining the use of agricultural land: A machine learning and model-agnostic approach
  by: Cláudia M. Viana, et al.
  Published: (2021)
- Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension
  by: Francisco S. Marcondes, et al.
  Published: (2021)
- E-SFD: Explainable Sensor Fault Detection in the ICS Anomaly Detection System
  by: Chanwoong Hwang, et al.
  Published: (2021)
- Help Me Learn! Architecture and Strategies to Combine Recommendations and Active Learning in Manufacturing
  by: Patrik Zajec, et al.
  Published: (2021)