BERT, XLNet or RoBERTa: The Best Transfer Learning Model to Detect Clickbaits


Saved in:
Bibliographic Details
Main Authors: Praboda Rajapaksha, Reza Farahbakhsh, Noel Crespi
Format: article
Language: EN
Published: IEEE 2021
Subjects:
Online Access: https://doaj.org/article/eed1f7b9e7c04273a87b93e0bbb9c486
Description
Summary: Clickbait can be spam or an advert that often links to a commercial website, or it can be a headline on a news media website that earns money from page views by pairing eye-catching headlines with deceptive news. This paper focuses on the latter definition in order to identify news clickbait published on Twitter. The aim of this work is to use recent Transfer Learning models to detect news clickbait by adding various configuration changes to the existing models. To the best of the authors' knowledge, this is the first attempt to adapt Transfer Learning to classify clickbait in social media. In this work we fine-tuned BERT, XLNet and RoBERTa models by integrating novel configuration changes into their default architectures, such as model expansion, pruning and data augmentation strategies. The Webis Clickbait dataset was used to train these models, and the best-performing model at the Webis Clickbait Challenge 2017 was taken as our benchmark. The analyses in this work focus on eight different scenarios obtained by applying several fine-tuning approaches and model configuration changes to the default Transfer Learning models. The results show that our modified Transfer Learning approaches outperformed the benchmark. In our experiments, the best-performing Transfer Learning model was RoBERTa with an additional non-linear layer applied to the hidden output tensor; this configuration achieved 19.12% higher accuracy than the benchmark model on the binary classification task. There was no significant performance improvement when each model was expanded by adding extra RNN layer(s). In addition, we experimented with another labelled clickbait dataset (the Kaggle clickbait challenge) to explore the performance of our fine-tuned models under different scenarios.
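The best-performing configuration described above adds a non-linear layer on top of the encoder's hidden output tensor before the binary classifier. A minimal PyTorch sketch of such a head is shown below; the layer sizes, activation, and dropout rate are assumptions for illustration, not values taken from the paper, and the pooled hidden tensor is assumed to come from a fine-tuned RoBERTa encoder.

```python
import torch
import torch.nn as nn


class ClickbaitHead(nn.Module):
    """Hypothetical binary classification head with an extra
    non-linear layer over the encoder's pooled hidden output."""

    def __init__(self, hidden_size=768, inner_size=768, num_labels=2):
        super().__init__()
        # Extra non-linear layer applied to the hidden output tensor
        self.dense = nn.Linear(hidden_size, inner_size)
        self.activation = nn.Tanh()
        self.dropout = nn.Dropout(0.1)
        # Final binary classifier (clickbait vs. not clickbait)
        self.classifier = nn.Linear(inner_size, num_labels)

    def forward(self, hidden_output):
        # hidden_output: (batch, hidden_size) pooled representation
        # produced by a fine-tuned RoBERTa encoder
        x = self.activation(self.dense(hidden_output))
        x = self.dropout(x)
        return self.classifier(x)


# Usage with a stand-in hidden tensor in place of encoder output
head = ClickbaitHead()
logits = head(torch.randn(4, 768))  # (4, 2) logits for 4 headlines
```

In practice this head would replace the default classification layer of a `RobertaModel`, with the whole stack fine-tuned end-to-end on the Webis Clickbait training data.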