Language Identification of Intra-Word Code-Switching for Arabic–English


Saved in:
Bibliographic Details
Main Authors: Caroline Sabty, Islam Mesabah, Özlem Çetinoğlu, Slim Abdennadher
Format: article
Language: EN
Published: Elsevier, 2021
Subjects:
Online Access: https://doaj.org/article/a5ec6db515d345e290c61a851789e27e
Description
Abstract: Multilingual speakers tend to mix languages in text and speech, a phenomenon linguists refer to as “code-switching” (CS). Speakers may also combine morphemes from different languages within a single word (intra-word CS). User-generated texts on social media are informal and contain a fair amount of CS data of different types, which needs to be investigated and analyzed for several linguistic tasks. Language Identification (LID) is one of the important tasks to be tackled for intra-word CS data: it involves segmenting mixed words and tagging each part with its corresponding language ID. This work created the first annotated Arabic–English (AR–EN) corpus for the intra-word CS LID task, along with a web-based application for data annotation. We implemented two baseline models for AR–EN text using Naïve Bayes and a character-level BiLSTM. Our main model was built on segmental recurrent neural networks (SegRNN), and we investigated the use of different word embeddings with it. The best-performing LID system for tagging the entire dataset used SegRNN alone, achieving an F1-score of 94.84% and recognizing mixed words with an F1-score of 81.15%. SegRNN with FastText embeddings achieved the highest F1-score on mixed words alone, 81.45%.
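To illustrate what the intra-word LID task's output looks like, here is a minimal sketch of a trivial script-based baseline — not the paper's SegRNN model or its Naïve Bayes/BiLSTM baselines. It splits a word into contiguous same-script runs and tags each run AR (Arabic script) or EN (otherwise); the function names and the Unicode-range heuristic are illustrative assumptions, and a heuristic like this cannot handle cases such as Arabic morphemes written in Latin script (Arabizi), which is why a learned segmentation model is needed:

```python
# Illustrative sketch only: a naive script-based baseline for
# intra-word language identification (AR vs EN). Function names
# and the Unicode-range heuristic are assumptions for this demo,
# not the paper's method.
from itertools import groupby

def char_lang(ch: str) -> str:
    """Tag a character as AR if it falls in the basic Arabic
    Unicode block (U+0600..U+06FF), else EN."""
    return "AR" if "\u0600" <= ch <= "\u06FF" else "EN"

def segment_word(word: str) -> list[tuple[str, str]]:
    """Split a word into (segment, language) pairs by grouping
    consecutive characters that share a script tag."""
    return [("".join(chars), lang)
            for lang, chars in groupby(word, key=char_lang)]

# An Arabic definite article prefixed to an English word:
print(segment_word("الfacebook"))  # [('ال', 'AR'), ('facebook', 'EN')]
```

The (segment, language-ID) pairs this produces are the kind of output the LID task described in the abstract requires; the SegRNN model learns such segmentations rather than relying on character scripts.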