TSInsight: A Local-Global Attribution Framework for Interpretability in Time Series Data

With the rise in the employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time series data has been neglected, with only a handful of methods tested due to their poor intelligibility. We approach the problem of interpretability in a novel way by proposing TSInsight, where we attach an auto-encoder to the classifier with a sparsity-inducing norm on its output and fine-tune it based on the gradients from the classifier and a reconstruction penalty. TSInsight learns to preserve features that are important for prediction by the classifier and suppresses those that are irrelevant, i.e., it serves as a feature attribution method that boosts interpretability. In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations. We evaluated TSInsight along with nine other commonly used attribution methods on eight different time series datasets to validate its efficacy. The evaluation results show that TSInsight naturally achieves output space contraction; therefore, it is an effective tool for the interpretability of deep time series models.
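The fine-tuning objective described in the abstract can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch-style formulation, assuming an auto-encoder placed in front of a pre-trained classifier; the function and module names, the loss weights, and the choice of a mean-squared-error reconstruction term are assumptions for illustration, not taken from the paper or its code.

    # Minimal sketch of a TSInsight-style fine-tuning objective (illustrative only;
    # names and weightings are assumptions, not the authors' implementation).
    # The auto-encoder output is fed to the classifier so that gradients from the
    # classifier, a reconstruction penalty, and a sparsity-inducing norm jointly
    # shape which parts of the input time series are preserved or suppressed.
    import torch.nn.functional as F

    def tsinsight_loss(autoencoder, classifier, x, y,
                       recon_weight=1.0, sparsity_weight=1e-3):
        x_hat = autoencoder(x)                 # filtered / attributed signal
        logits = classifier(x_hat)             # gradients flow back from the classifier
        cls_loss = F.cross_entropy(logits, y)  # keep features needed for prediction
        recon_loss = F.mse_loss(x_hat, x)      # reconstruction penalty
        sparsity = x_hat.abs().mean()          # sparsity-inducing (L1) norm on output
        return cls_loss + recon_weight * recon_loss + sparsity_weight * sparsity

In this reading, the sparsity term suppresses portions of the signal the classifier does not rely on, while the classification and reconstruction terms keep the salient features intact, which is what allows the fine-tuned auto-encoder output to act as an attribution.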


Saved in:
Bibliographic Details
Main Authors: Shoaib Ahmed Siddiqui, Dominique Mercier, Andreas Dengel, Sheraz Ahmed
Format: article
Language: EN
Published: MDPI AG, 2021
Published in: Sensors, Vol 21, Iss 7373, p 7373 (2021)
DOI: 10.3390/s21217373
ISSN: 1424-8220
Subjects: interpretability; time series analysis; feature attribution; deep learning; auto-encoder; feature importance; Chemical technology; TP1-1185
Online Access: https://doaj.org/article/881a494110f14e32aac9ec0578d41da6
Full text: https://www.mdpi.com/1424-8220/21/21/7373