Universal activation function for machine learning

Bibliographic Details
Main Authors: Brosnan Yuen, Minh Tu Hoang, Xiaodai Dong, Tao Lu
Format: Article
Language: English
Published: Nature Portfolio, 2021
Subjects: R (Medicine), Q (Science)
Online Access: https://doaj.org/article/761a26ef959a4ff2b37e7710b3ce6b10
Description
Summary: This article proposes a universal activation function (UAF) that achieves near-optimal performance in quantification, classification, and reinforcement learning (RL) problems. For any given problem, gradient descent algorithms can evolve the UAF into a suitable activation function by tuning the UAF's parameters. For CIFAR-10 classification with the VGG-8 neural network, the UAF converges to a Mish-like activation function and attains near-optimal performance, $$F_1 = 0.902 \pm 0.004$$, compared to other activation functions. In a graph convolutional neural network on the CORA dataset, the UAF evolves to the identity function and obtains $$F_1 = 0.835 \pm 0.008$$. For the quantification of simulated 9-gas mixtures at a 30 dB signal-to-noise ratio (SNR), the UAF converges to the identity function, with a near-optimal root mean square error (RMSE) of $$0.489 \pm 0.003~\mu\mathrm{M}$$. In ZINC molecular solubility quantification using graph neural networks, the UAF morphs into a LeakyReLU/Sigmoid hybrid and achieves $$\mathrm{RMSE} = 0.47 \pm 0.04$$. On the BipedalWalker-v2 RL environment, the UAF reaches the 250-reward threshold in $$961 \pm 193$$ epochs with a newly evolved activation function, the fastest convergence rate among the activation functions tested.
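
The core idea the abstract describes, a single parameterized activation whose shape is learned by gradient descent together with the network weights, can be illustrated with a short sketch. The following PyTorch code is an assumption-laden illustration, not the authors' published UAF formula: the five scalar parameters A through E and the softplus-based functional form are hypothetical choices made here for demonstration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametricActivation(nn.Module):
    # Illustrative trainable activation (not the paper's exact UAF).
    # Softplus terms keep the function smooth, so gradient descent can
    # morph it toward identity-, ReLU-, Mish-, or sigmoid-like curves.
    def __init__(self):
        super().__init__()
        # Initial values are arbitrary assumptions.
        self.A = nn.Parameter(torch.tensor(1.0))
        self.B = nn.Parameter(torch.tensor(0.0))
        self.C = nn.Parameter(torch.tensor(0.0))
        self.D = nn.Parameter(torch.tensor(0.0))
        self.E = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        return (F.softplus(self.A * (x + self.B)) + self.C * x
                - F.softplus(self.D * (x - self.B)) + self.E)

# The activation's parameters are optimized jointly with the weights:
net = nn.Sequential(nn.Linear(4, 16), ParametricActivation(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.randn(32, 4), torch.randn(32, 1)
opt.zero_grad()
F.mse_loss(net(x), y).backward()
opt.step()  # updates both the weights and the activation's shape

Because every shape parameter is an nn.Parameter, the same backpropagated gradients that train the weights also reshape the activation, which is how a single functional form can drift toward identity-, Mish-, or sigmoid-like behavior on different tasks.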