TY - GEN
T1 - Multilingual Offensive Language Identification with Cross-lingual Embeddings
AU - Ranasinghe, Tharindu
AU - Zampieri, Marcos
N1 - Copyright © 2020 Association for Computational Linguistics. This paper is distributed under the terms of the Creative Commons Attribution License CC BY [https://creativecommons.org/licenses/by/4.0/], which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
PY - 2020/11
Y1 - 2020/11
AB - Offensive content is pervasive in social media and a reason for concern to companies and government organizations. Several studies have recently been published investigating methods to detect the various forms of such content (e.g. hate speech, cyberbullying, and cyberaggression). The clear majority of these studies deal with English, partially because most available annotated datasets contain English data. In this paper, we take advantage of the available English data by applying cross-lingual contextual word embeddings and transfer learning to make predictions in languages with fewer resources. We project predictions on comparable data in Bengali, Hindi, and Spanish and we report results of 0.8415 F1 macro for Bengali, 0.8568 F1 macro for Hindi, and 0.7513 F1 macro for Spanish. Finally, we show that our approach compares favorably to the best systems submitted to recent shared tasks on these three languages, confirming the robustness of cross-lingual contextual embeddings and transfer learning for this task.
UR - https://www.aclweb.org/anthology/2020.emnlp-main.470
DO - 10.18653/v1/2020.emnlp-main.470
M3 - Conference publication
SP - 5838
EP - 5844
BT - Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
PB - Association for Computational Linguistics (ACL)
ER -