Recycling Waste Classification Using Vision Transformer on Portable Device
Recycling resources from waste can effectively alleviate the threat of global resource strain. Because of the wide variety of waste, relying on manual classification of waste and recovery of recyclable resources would be costly and inefficient. In recent years, automatic recyclable waste classification based on convolutional neural networks (CNNs) has become the mainstream approach to waste recycling. However, owing to the limited receptive field of CNNs, classification accuracy has reached a bottleneck, which restricts the deployment of such methods and systems. To address these challenges, this study proposes a deep neural network architecture based solely on the self-attention mechanism, the *Vision Transformer*, to improve the accuracy of automatic classification. Experimental results on the TrashNet dataset show that the proposed method achieves a top accuracy of 96.98%, better than existing CNN-based methods. By deploying the trained model on a server and using a portable device to photograph waste and upload the images to that server, automatic waste classification can be conveniently realized on the portable device, which broadens the scope of application of automatic waste classification and is of great significance for resource conservation and recycling.
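The record contains no code, but the approach the abstract describes, a self-attention-only Vision Transformer fine-tuned to classify TrashNet-style waste photos, can be illustrated with a minimal sketch. The framework (PyTorch/torchvision), the ViT-B/16 backbone, and the six TrashNet class labels are assumptions for illustration, not details taken from the paper; this is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a Vision Transformer set up to
# classify TrashNet-like waste images. Framework (PyTorch/torchvision) and
# the 6-class head are assumptions based on the abstract.
import torch
from torch import nn
from torchvision import models, transforms
from PIL import Image

NUM_CLASSES = 6  # TrashNet classes: glass, paper, cardboard, plastic, metal, trash

# Pretrained ViT-B/16 backbone; swap the classification head for the waste classes.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)
model.eval()

# Standard ImageNet preprocessing at the 224x224 resolution ViT-B/16 expects.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str) -> int:
    """Return the predicted class index for a single waste photo."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return int(logits.argmax(dim=1).item())
```

The sketch only shows the inference path; fine-tuning on TrashNet images and the training schedule reported in the paper are omitted.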
Saved in:

| | |
|---|---|
| Main Authors: | Kai Huang, Huan Lei, Zeyu Jiao, Zhenyu Zhong |
| Format: | article |
| Language: | EN |
| Published: | MDPI AG, 2021 |
| Subjects: | waste classification; automatic recycling; deep neural network; self-attention; portable device |
| Online Access: | https://doaj.org/article/016ce5d9f8814f79aa694e0d4f71ee4d |
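The abstract's deployment scenario, a trained model served from a server while a portable device photographs waste and uploads the image for classification, could look roughly like the sketch below. Flask, the `/classify` endpoint name, and the label list are hypothetical choices for illustration, and `classify()` refers to the helper in the previous sketch; the paper does not specify its server stack.

```python
# Hypothetical server-side sketch of the deployment the abstract describes:
# the trained model runs on a server, and a portable device uploads a photo
# for classification. Flask, the endpoint name, and the labels are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)
LABELS = ["glass", "paper", "cardboard", "plastic", "metal", "trash"]  # assumed TrashNet labels

@app.route("/classify", methods=["POST"])
def classify_upload():
    # The portable device POSTs the photo as multipart form data under "image".
    upload = request.files["image"]
    upload.save("/tmp/upload.jpg")
    label_index = classify("/tmp/upload.jpg")  # classify() from the ViT sketch above
    return jsonify({"label": LABELS[label_index]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

A portable device (or curl, for testing) would then POST a photo, e.g. `curl -F "image=@photo.jpg" http://<server>:8000/classify`, and receive the predicted label as JSON.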
| field | value |
|---|---|
| id | oai:doaj.org-article:016ce5d9f8814f79aa694e0d4f71ee4d |
| record_format | dspace |
| doi | 10.3390/su132111572 |
| issn | 2071-1050 |
| datestamp | 2021-11-11T19:20:52Z |
| publication date | 2021-10-01 |
| fulltext url | https://www.mdpi.com/2071-1050/13/21/11572 |
| journal toc | https://doaj.org/toc/2071-1050 |
| source | Sustainability, Vol 13, Iss 11572, p 11572 (2021) |
| institution | DOAJ |
| collection | DOAJ |
| language | EN |
| topic | waste classification; automatic recycling; deep neural network; self-attention; portable device; Environmental effects of industries and plants (TD194-195); Renewable energy sources (TJ807-830); Environmental sciences (GE1-350) |
| description | Recycling resources from waste can effectively alleviate the threat of global resource strain. Because of the wide variety of waste, relying on manual classification of waste and recovery of recyclable resources would be costly and inefficient. In recent years, automatic recyclable waste classification based on convolutional neural networks (CNNs) has become the mainstream approach to waste recycling. However, owing to the limited receptive field of CNNs, classification accuracy has reached a bottleneck, which restricts the deployment of such methods and systems. To address these challenges, this study proposes a deep neural network architecture based solely on the self-attention mechanism, the *Vision Transformer*, to improve the accuracy of automatic classification. Experimental results on the TrashNet dataset show that the proposed method achieves a top accuracy of 96.98%, better than existing CNN-based methods. By deploying the trained model on a server and using a portable device to photograph waste and upload the images to that server, automatic waste classification can be conveniently realized on the portable device, which broadens the scope of application of automatic waste classification and is of great significance for resource conservation and recycling. |
| format | article |
| author | Kai Huang; Huan Lei; Zeyu Jiao; Zhenyu Zhong |
| author_sort | Kai Huang |
| title | Recycling Waste Classification Using Vision Transformer on Portable Device |
| title_sort | recycling waste classification using vision transformer on portable device |
| publisher | MDPI AG |
| publishDate | 2021 |
| url | https://doaj.org/article/016ce5d9f8814f79aa694e0d4f71ee4d |
| _version_ | 1718431507256705024 |