Self-incremental learning vector quantization with human cognitive biases
Abstract Human beings have adaptively rational cognitive biases for efficiently acquiring concepts from small-sized datasets. With such inductive biases, humans can generalize concepts by learning a small number of samples. By incorporating human cognitive biases into learning vector quantization (LVQ), a prototype-based online machine learning method, we developed self-incremental LVQ (SILVQ) methods that can be easily interpreted. We first describe a method to automatically adjust the learning rate that incorporates human cognitive biases. Second, SILVQ, which self-increases the prototypes based on the method for automatically adjusting the learning rate, is described. The performance levels of the proposed methods are evaluated in experiments employing four real and two artificial datasets. Compared with the original learning vector quantization algorithms, our methods not only effectively remove the need for parameter tuning, but also achieve higher accuracy from learning small numbers of instances. In the cases of larger numbers of instances, SILVQ can still achieve an accuracy that is equal to or better than those of existing representative LVQ algorithms. Furthermore, SILVQ can learn linearly inseparable conceptual structures with the required and sufficient number of prototypes without overfitting.
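SILVQ builds on classic learning vector quantization, in which labeled prototype vectors are attracted toward training samples of the same class and repelled from samples of a different class, and the nearest prototype decides the predicted label. As a rough illustration of that base LVQ1 update rule only (a minimal sketch; the function names and parameters are illustrative, not from the paper, and the authors' SILVQ additionally self-adjusts the learning rate and self-increases the number of prototypes):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=10):
    """Classic LVQ1 sketch: for each sample, move the nearest (winner)
    prototype toward the sample if the classes match, away otherwise.
    Illustrative only; SILVQ in the paper also adapts lr automatically."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            d = np.linalg.norm(P - xi, axis=1)  # distances to all prototypes
            w = np.argmin(d)                    # winner prototype index
            if proto_labels[w] == yi:
                P[w] += lr * (xi - P[w])        # attract: same class
            else:
                P[w] -= lr * (xi - P[w])        # repel: different class
    return P

def lvq_predict(X, prototypes, proto_labels):
    """Label each sample with the class of its nearest prototype."""
    d = np.linalg.norm(prototypes[:, None, :] - X[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=0)]
```

On two well-separated Gaussian blobs this sketch converges quickly; the fixed `lr` here is exactly the hand-tuned parameter that the paper's automatic learning-rate adjustment is designed to eliminate.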
Saved in: DOAJ
Main Authors: Nobuhito Manome; Shuji Shinohara; Tatsuji Takahashi; Yu Chen; Ung-il Chung
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine (R); Science (Q)
Online Access: https://doaj.org/article/217e4b2b43ed42cc944849d1cacb86f7
id: oai:doaj.org-article:217e4b2b43ed42cc944849d1cacb86f7
record_format: dspace
DOI: 10.1038/s41598-021-83182-4
ISSN: 2045-2322
Published in: Scientific Reports, Vol 11, Iss 1, Pp 1-12 (2021)
Publication date: 2021-02-01
Full text: https://doi.org/10.1038/s41598-021-83182-4
institution: DOAJ
collection: DOAJ
language: EN
topic: Medicine (R); Science (Q)
format: article
author: Nobuhito Manome; Shuji Shinohara; Tatsuji Takahashi; Yu Chen; Ung-il Chung