Universal activation function for machine learning

Abstract This article proposes a universal activation function (UAF) that achieves near-optimal performance in quantification, classification, and reinforcement learning (RL) problems. For any given problem, gradient descent algorithms are able to evolve the UAF into a suitable activation function by tuning the UAF's parameters. For CIFAR-10 classification using the VGG-8 neural network, the UAF converges to a Mish-like activation function, which has near-optimal performance $F_1 = 0.902 \pm 0.004$ when compared to other activation functions. In the graph convolutional neural network on the CORA dataset, the UAF evolves to the identity function and obtains $F_1 = 0.835 \pm 0.008$. For the quantification of simulated 9-gas mixtures in 30 dB signal-to-noise ratio (SNR) environments, the UAF converges to the identity function, which has a near-optimal root mean square error of $0.489 \pm 0.003~\mu\mathrm{M}$. In the ZINC molecular solubility quantification using graph neural networks, the UAF morphs into a LeakyReLU/Sigmoid hybrid and achieves $\mathrm{RMSE} = 0.47 \pm 0.04$. For the BipedalWalker-v2 RL benchmark, the UAF reaches the 250 reward in $961 \pm 193$ epochs with a brand-new activation function, giving the fastest convergence rate among the activation functions tested.
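The abstract's key idea is a single activation function with trainable parameters that gradient descent can morph into different well-known activations depending on the problem. The paper's exact UAF parameterization is given in the full text and is not reproduced here; as an illustrative sketch only, the snippet below uses a hypothetical five-parameter softplus-based form (the name `parametric_activation` and its parameters are assumptions, not the authors' definitions) whose settings recover the identity function and an approximate ReLU.

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus: ln(1 + exp(x))."""
    return np.logaddexp(0.0, x)

def parametric_activation(x, a=1.0, b=1.0, c=0.0, d=0.0, e=0.0):
    """Hypothetical trainable activation (NOT the paper's exact UAF):
    f(x) = a * softplus(b*x + c) - softplus(d*x) + e.
    In a network, (a, b, c, d, e) would be learned jointly with the
    weights by gradient descent, letting the shape adapt per problem.
    """
    return a * softplus(b * x + c) - softplus(d * x) + e

# Identity: softplus(x) - softplus(-x) == x exactly.
identity_params = dict(a=1.0, b=1.0, c=0.0, d=-1.0, e=0.0)

# Approximate ReLU: sharpened softplus (b large, a = 1/b),
# with e cancelling the constant -softplus(0) = -ln(2).
relu_params = dict(a=1.0 / 20, b=20.0, c=0.0, d=0.0, e=np.log(2.0))
```

Because the shape varies continuously with the parameters, ordinary backpropagation can tune them alongside the network weights, which is the mechanism the abstract refers to when it says the UAF "evolves" or "converges" to a particular activation.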


Bibliographic Details
Main Authors: Brosnan Yuen, Minh Tu Hoang, Xiaodai Dong, Tao Lu
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/761a26ef959a4ff2b37e7710b3ce6b10
DOI: https://doi.org/10.1038/s41598-021-96723-8
ISSN: 2045-2322
Published in: Scientific Reports, Vol 11, Iss 1, pp 1-11 (2021)