Improving Accuracy using The ASERLU layer in CNN-BiLSTM Architecture on Sentiment Analysis
Saved in:
Main Authors: ,
Format: article
Language: Indonesian (ID)
Published: Ikatan Ahli Informatika Indonesia, 2021
Subjects:
Online Access: https://doaj.org/article/47a0ebe4b47846e580ef85bd6c100f6f
Summary: Over the last ten years, roughly 350,000 tweets have been generated by social-network interactions among users with different cultural and educational backgrounds. These user comments express a range of sentiments, from support to hatred, regarding the 2020 United States general election. Our dataset contains 3,000 samples obtained from previous research, which we augment to 15,000 samples to provide sufficient training data. Sentiment detection is carried out using a CNN-BiLSTM architecture, chosen because the CNN can filter essential words while the BiLSTM retains context in both directions; combining the two maximizes the effectiveness of training. However, this method has drawbacks in its activation functions: the "Zero-hard Rectifier" and "ReLU Dropout" problems cause training to stall under ReLU activation, while in SERLU activation the exponential term cannot be tuned, leaving the function rigid with respect to its output values. To overcome these problems, we propose a novel activation function, named ASERLU, to repair the activation in the CNN-BiLSTM architecture. It adjusts the positive output, the negative output, and the exponential term through setter variables, so it adapts more readily to output values and remains a flexible activation function, since each component can be increased or decreased as needed. This is the first research to apply this activation in such an architecture. Experimental results show that, compared with ReLU and SERLU, the proposed method achieves higher accuracy.
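
The record does not include ASERLU's formula, but the summary's description (setter variables for the positive output, the negative output, and the exponential term) suggests a parameterized generalization of SERLU. The following is a minimal NumPy sketch under that assumption; the function name and the parameters alpha, beta, and gamma are hypothetical placeholders, not the authors' definitions.

```python
import numpy as np

def aserlu(x, alpha=1.0786, beta=2.9047, gamma=1.0):
    """Hypothetical adjustable SERLU-style activation.

    Assumes ASERLU generalizes SERLU(x) = lam*x for x >= 0 and
    lam*sigma*x*exp(x) for x < 0 by exposing "setter variables":
    alpha scales the positive branch, beta scales the negative
    branch, and gamma tunes the exponential term. Defaults follow
    SERLU's published constants (lam ~ 1.0786, sigma ~ 2.9047).
    """
    x = np.asarray(x, dtype=float)
    positive = alpha * x                              # adjustable positive output
    negative = alpha * beta * x * np.exp(gamma * x)   # adjustable negative output
    return np.where(x >= 0.0, positive, negative)

# Example: raising gamma pulls large negative inputs toward zero faster,
# while alpha and beta rescale each branch independently.
print(aserlu([-2.0, -0.5, 0.0, 1.0]))
print(aserlu([-2.0, -0.5, 0.0, 1.0], gamma=2.0))
```

In this reading, the flexibility the summary describes comes from the three parameters being independently tunable, unlike ReLU (fixed zero for negative inputs) or SERLU (fixed exponential term).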