SPACE: Structured Compression and Sharing of Representational Space for Continual Learning

Humans learn incrementally from sequential experiences throughout their lives, which has proven hard to emulate in artificial neural networks. Incrementally learning tasks causes neural networks to overwrite relevant information learned about older tasks, resulting in ‘Catastrophic Forgetting’. Efforts to overcome this phenomenon often use resources inefficiently, for instance by growing the network architecture or saving parametric importance scores, or they violate data privacy between tasks. To tackle this, we propose SPACE, an algorithm that enables a network to learn continually and efficiently by partitioning the learnt space into a Core space, which serves as the condensed knowledge base over previously learned tasks, and a Residual space, which is akin to a scratch space for learning the current task. After learning each task, the Residual is analyzed for redundancy, both within itself and with the learnt Core space. A minimal number of extra dimensions required to explain the current task are added to the Core space, and the remaining Residual is freed up for learning the next task. We evaluate our algorithm on P-MNIST, CIFAR and a sequence of 8 different datasets, and achieve accuracy comparable to state-of-the-art methods while overcoming catastrophic forgetting. Additionally, our algorithm is well suited for practical use. The partitioning algorithm analyzes all layers in one shot, ensuring scalability to deeper networks. Moreover, the analysis of dimensions translates to filter-level sparsity, and the structured nature of the resulting architecture gives us up to a 5x improvement in energy efficiency during task inference over the current state-of-the-art.
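The abstract describes a principal-component-analysis-based partitioning of each layer's learned representation into a Core subspace (shared knowledge) and a Residual (scratch space for the current task). As a rough, illustrative sketch only, and not the authors' implementation, the NumPy snippet below shows one way such a split could be computed for a single layer: project the current task's activations off the existing Core basis, run PCA on what remains, and keep just enough new directions to reach a variance-explained threshold. The function name `update_core_space`, the 0.99 threshold, and the per-layer formulation are all assumptions made for illustration.

```python
import numpy as np

def update_core_space(activations, core_basis=None, var_threshold=0.99):
    """Sketch of a PCA-based Core/Residual split for one layer (illustrative only).

    activations : (n_samples, n_features) activations collected on the current task
    core_basis  : (n_features, k) orthonormal basis of the existing Core space, or None
    Returns an orthonormal basis for the enlarged Core space.
    """
    X = activations - activations.mean(axis=0)            # center the activations
    total_var = (X ** 2).sum()                             # total variance of the task

    if core_basis is not None:
        # Remove whatever the existing Core space already explains.
        X = X - (X @ core_basis) @ core_basis.T

    # PCA of the residual via SVD; squared singular values are per-direction variances.
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    residual_var = s ** 2

    # Variance already captured by the existing Core space alone.
    explained = total_var - residual_var.sum()

    # Add the minimal number of residual directions needed to reach the threshold.
    new_dirs = []
    for i, v in enumerate(residual_var):
        if explained >= var_threshold * total_var:
            break
        explained += v
        new_dirs.append(vt[i])

    if not new_dirs:                                       # Core already explains enough
        return core_basis
    new_basis = np.stack(new_dirs, axis=1)                 # (n_features, m) new directions
    if core_basis is None:
        return new_basis
    return np.concatenate([core_basis, new_basis], axis=1)
```

Freeing the remaining Residual for the next task, running the analysis over all layers in one shot, and mapping the retained dimensions to filter-level sparsity, as the abstract describes, would require additional machinery not shown in this sketch.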


Bibliographic Details
Main Authors: Gobinda Saha, Isha Garg, Aayush Ankit, Kaushik Roy
Format: article
Language: EN
Published: IEEE 2021
Subjects: Continual learning; catastrophic forgetting; deep learning algorithm; principal component analysis
Online Access: https://doaj.org/article/46706ff8450743849542b1376c3fdb24
id oai:doaj.org-article:46706ff8450743849542b1376c3fdb24
record_format dspace
spelling SPACE: Structured Compression and Sharing of Representational Space for Continual Learning
datestamp 2021-11-18T00:07:14Z
issn 2169-3536
doi 10.1109/ACCESS.2021.3126027
url https://doaj.org/article/46706ff8450743849542b1376c3fdb24
url https://ieeexplore.ieee.org/document/9605653/
url https://doaj.org/toc/2169-3536
source IEEE Access, Vol 9, Pp 150480-150494 (2021)
institution DOAJ
collection DOAJ
language EN
topic Continual learning
catastrophic forgetting
deep learning algorithm
principal component analysis
Electrical engineering. Electronics. Nuclear engineering
TK1-9971
description Humans learn incrementally from sequential experiences throughout their lives, which has proven hard to emulate in artificial neural networks. Incrementally learning tasks causes neural networks to overwrite relevant information learned about older tasks, resulting in ‘Catastrophic Forgetting’. Efforts to overcome this phenomenon often use resources inefficiently, for instance by growing the network architecture or saving parametric importance scores, or they violate data privacy between tasks. To tackle this, we propose SPACE, an algorithm that enables a network to learn continually and efficiently by partitioning the learnt space into a Core space, which serves as the condensed knowledge base over previously learned tasks, and a Residual space, which is akin to a scratch space for learning the current task. After learning each task, the Residual is analyzed for redundancy, both within itself and with the learnt Core space. A minimal number of extra dimensions required to explain the current task are added to the Core space, and the remaining Residual is freed up for learning the next task. We evaluate our algorithm on P-MNIST, CIFAR and a sequence of 8 different datasets, and achieve accuracy comparable to state-of-the-art methods while overcoming catastrophic forgetting. Additionally, our algorithm is well suited for practical use. The partitioning algorithm analyzes all layers in one shot, ensuring scalability to deeper networks. Moreover, the analysis of dimensions translates to filter-level sparsity, and the structured nature of the resulting architecture gives us up to a 5x improvement in energy efficiency during task inference over the current state-of-the-art.
format article
author Gobinda Saha
Isha Garg
Aayush Ankit
Kaushik Roy
title SPACE: Structured Compression and Sharing of Representational Space for Continual Learning
publisher IEEE
publishDate 2021
url https://doaj.org/article/46706ff8450743849542b1376c3fdb24