Joint representation of color and form in convolutional neural networks: A stimulus-rich network perspective.

To interact with real-world objects, any effective visual system must jointly code the unique features defining each object. Despite decades of neuroscience research, we still lack a firm grasp on how the primate brain binds visual features. Here we apply a novel network-based stimulus-rich representational similarity approach to study color and form binding in five convolutional neural networks (CNNs) with varying architecture, depth, and presence/absence of recurrent processing. All CNNs showed near-orthogonal color and form processing in early layers, but increasingly interactive feature coding in higher layers, with this effect being much stronger for networks trained for object classification than untrained networks. These results characterize for the first time how multiple basic visual features are coded together in CNNs. The approach developed here can be easily implemented to characterize whether a similar coding scheme may serve as a viable solution to the binding problem in the primate brain.
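
The following is a minimal, illustrative sketch (not the authors' pipeline) of the kind of representational similarity analysis the abstract describes: render a toy set of synthetic colored-square stimuli, extract activations from one convolutional layer of a torchvision AlexNet, build a correlation-distance representational dissimilarity matrix (RDM), and correlate it with binary "different color" and "different form" model RDMs. The stimulus set, the layer choice, and the shape rendering are assumptions for illustration only; the published approach uses a much richer stimulus set, five different CNNs, and compares trained against untrained weights layer by layer.

# A minimal sketch (not the authors' code) of color/form RSA on CNN layer activations,
# assuming synthetic colored-square stimuli and a single AlexNet conv layer.
import itertools
import numpy as np
import torch
import torchvision.models as models

# --- Build a toy stimulus set: every combination of 4 colors x 4 "forms" ---
COLORS = {"red": (1, 0, 0), "green": (0, 1, 0), "blue": (0, 0, 1), "yellow": (1, 1, 0)}

def make_stimulus(color_rgb, form_idx, size=224):
    """Render a filled square; 'form' varies only by patch size (illustrative stand-in)."""
    img = torch.zeros(3, size, size)
    half = 20 + 20 * form_idx
    c = size // 2
    for ch, v in enumerate(color_rgb):
        img[ch, c - half:c + half, c - half:c + half] = v
    return img

stimuli, color_labels, form_labels = [], [], []
for (cname, rgb), form in itertools.product(COLORS.items(), range(4)):
    stimuli.append(make_stimulus(rgb, form))
    color_labels.append(cname)
    form_labels.append(form)
batch = torch.stack(stimuli)                # shape: (16, 3, 224, 224)

# --- Extract activations from one convolutional layer of AlexNet ---
# weights=None keeps the example offline-runnable (an untrained network);
# swap in models.AlexNet_Weights.IMAGENET1K_V1 to probe a trained one.
net = models.alexnet(weights=None).eval()
layer_acts = {}
net.features[8].register_forward_hook(      # an arbitrary mid-level conv layer
    lambda m, i, o: layer_acts.__setitem__("conv", o.flatten(1).detach()))
with torch.no_grad():
    net(batch)
acts = layer_acts["conv"].numpy()           # (n_stimuli, n_units)

# --- Representational dissimilarity matrix: 1 - Pearson correlation between stimuli ---
rdm = 1.0 - np.corrcoef(acts)

# --- Model RDMs: binary "different color" and "different form" predictors ---
color_rdm = np.array([[a != b for b in color_labels] for a in color_labels], float)
form_rdm = np.array([[a != b for b in form_labels] for a in form_labels], float)

# Compare the layer RDM with each model RDM over the off-diagonal entries.
iu = np.triu_indices(len(stimuli), k=1)
print("color fit:", np.corrcoef(rdm[iu], color_rdm[iu])[0, 1])
print("form  fit:", np.corrcoef(rdm[iu], form_rdm[iu])[0, 1])

Repeating this comparison for every layer, and for trained versus untrained weights, would mirror the qualitative pattern summarized in the abstract (near-orthogonal color and form coding early, increasingly interactive coding later, stronger after training).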


Bibliographic Details
Main Authors: JohnMark Taylor, Yaoda Xu
Format: Article
Language: English (EN)
Published: Public Library of Science (PLoS), 2021
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/69297f41643541489609fdbcc26b45e4
DOI: 10.1371/journal.pone.0253442 (https://doi.org/10.1371/journal.pone.0253442)
ISSN: 1932-6203
Published in: PLoS ONE, Vol 16, Iss 6, p e0253442 (2021)