AlphaBrains at WojoodNER shared task: Arabic Named Entity Recognition by Using Character-based Context-Sensitive Word Representations

  • University of Gujrat

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

This paper presents Arabic named entity recognition models built with single-task and multi-task learning paradigms. The models use character-based contextualized Embeddings from Language Model (ELMo) in the input layers of Bidirectional Long Short-Term Memory (BiLSTM) networks. ELMo embeddings capture both the morphology and the contextual information of tokens in word sequences. The single-task learning model outperformed the multi-task learning model, achieving micro F1-scores of 0.8751 for flat NER and 0.8884 for nested NER, ranking 10th and 7th, respectively, in the shared task.
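The architecture described in the abstract (contextual word vectors feeding a BiLSTM whose per-token outputs are projected to tag scores) can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the random vectors stand in for real ELMo embeddings, and all dimensions, weights, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell step; gates (input, forget, output, candidate)
    # are stacked row-wise in W, U, b.
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def lstm_run(xs, W, U, b, H):
    # Run the cell over a sequence, collecting hidden states.
    h, c = np.zeros(H), np.zeros(H)
    out = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        out.append(h)
    return out

D, H, T, n_tags = 8, 4, 5, 3  # embed dim, hidden size, seq length, tag count (toy values)

def init_params():
    return rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)), np.zeros(4*H)

Wf, Uf, bf = init_params()                 # forward-direction LSTM
Wb, Ub, bb = init_params()                 # backward-direction LSTM
Wtag = rng.normal(size=(n_tags, 2*H))      # per-token projection to tag scores

# Stand-in for character-based contextual (ELMo-style) embeddings.
elmo_like = [rng.normal(size=D) for _ in range(T)]

fwd = lstm_run(elmo_like, Wf, Uf, bf, H)               # left-to-right pass
bwd = lstm_run(elmo_like[::-1], Wb, Ub, bb, H)[::-1]   # right-to-left pass
tags = [int(np.argmax(Wtag @ np.concatenate([hf, hb])))
        for hf, hb in zip(fwd, bwd)]                   # one tag index per token
print(tags)
```

With untrained random weights the predicted tag indices are meaningless; the point is the data flow: each token's forward and backward hidden states are concatenated before the tag projection, so every prediction sees context from both directions.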
Original language: English
Title of host publication: Proceedings of the First Arabic Natural Language Processing Conference (ArabicNLP 2023)
Publisher: Association for Computational Linguistics (ACL)
Pages: 783-788
Number of pages: 6
ISBN (Electronic): 978-1-959429-27-2
Publication status: Published - 2023
MoE publication type: A4 Article in a conference publication
