Efficient-CapsNet: capsule network with self-attention routing

Abstract: Deep convolutional neural networks, assisted by architectural design strategies, make extensive use of data augmentation techniques and layers with a high number of feature maps to embed object transformations. This is highly inefficient and, for large datasets, implies a massive redundancy of feature detectors. Even though capsule networks are still in their infancy, they constitute a promising solution for extending current convolutional networks and endowing artificial visual perception with a process to encode all feature affine transformations more efficiently. Indeed, a properly working capsule network should theoretically achieve better results with a considerably lower parameter count, owing to its intrinsic capability to generalize to novel viewpoints. Nevertheless, little attention has been given to this relevant aspect. In this paper, we investigate the efficiency of capsule networks and, pushing their capacity to the limit with an extreme architecture of barely 160K parameters, we prove that the proposed architecture is still able to achieve state-of-the-art results on three different datasets with only 2% of the original CapsNet parameters. Moreover, we replace dynamic routing with a novel non-iterative, highly parallelizable routing algorithm that can easily cope with a reduced number of capsules. Extensive experimentation with other capsule implementations has proved the effectiveness of our methodology and the capability of capsule networks to efficiently embed visual representations that are more prone to generalization.
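The abstract's central technical claim is the replacement of CapsNet's iterative dynamic routing with a single, non-iterative self-attention step. The following is a minimal NumPy sketch of that idea only, not the authors' published implementation: the vote-tensor layout, the square-root-of-dimension scaling, and the single softmax over parent capsules are assumptions inferred from the abstract, and all function names and shapes are hypothetical.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Standard capsule squash non-linearity: preserves vector
    orientation while mapping its length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def self_attention_routing(u, W):
    """Non-iterative routing sketch (assumed formulation).

    u: lower-level capsules, shape (n_low, d_low)
    W: transformation matrices, shape (n_low, n_high, d_low, d_high)
    returns: higher-level capsules, shape (n_high, d_high)
    """
    n_low, n_high, d_low, d_high = W.shape

    # Votes: each lower capsule predicts every higher capsule.
    # u_hat[i, j] = u[i] @ W[i, j]  -> shape (n_low, n_high, d_high)
    u_hat = np.einsum('id,ijde->ije', u, W)

    # Self-attention among votes for the same higher capsule:
    # pairwise agreement between lower capsules, scaled by sqrt(d_high).
    # A[j] has shape (n_low, n_low).
    A = np.einsum('ije,kje->jik', u_hat, u_hat) / np.sqrt(d_high)

    # Collapse pairwise agreement into a per-vote score and turn the
    # scores into coupling coefficients with one softmax over the
    # higher capsules -- no iterative refinement as in dynamic routing.
    scores = A.sum(axis=-1).T                     # (n_low, n_high)
    c = np.exp(scores - scores.max(axis=1, keepdims=True))
    c = c / c.sum(axis=1, keepdims=True)          # softmax over n_high

    # Weighted sum of votes, then squash.
    s = np.einsum('ij,ije->je', c, u_hat)         # (n_high, d_high)
    return squash(s)

# Toy usage: 16 lower capsules of dim 8 routed to 10 capsules of dim 16.
rng = np.random.default_rng(0)
u = rng.normal(size=(16, 8))
W = rng.normal(scale=0.1, size=(16, 10, 8, 16))
v = self_attention_routing(u, W)
print(v.shape)  # (10, 16)
```

Because the coupling coefficients here come from one matrix product and one softmax, the routing step is a fixed computation graph with no data-dependent loop, which is what makes this style of routing highly parallelizable compared with the multi-iteration dynamic routing of the original CapsNet.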

Bibliographic Details
Main Authors: Vittorio Mazzia, Francesco Salvetti, Marcello Chiaberge
Format: Article
Language: English (EN)
Published: Nature Portfolio, 2021
Published in: Scientific Reports, Vol. 11, Iss. 1, pp. 1-13 (2021)
Subjects: Medicine (R); Science (Q)
Online Access: https://doaj.org/article/8c49880da55c4f129745f80880ff3925
DOI: https://doi.org/10.1038/s41598-021-93977-0
ISSN: 2045-2322