Leveraging Open Large Language Models for Historical Named Entity Recognition
Abstract
The efficacy of large language models (LLMs) as few-shot learners has dominated the field of natural language processing, achieving state-of-the-art performance in most tasks, including named entity recognition (NER) for contemporary texts. However, the exploration of NER in historical collections (e.g., historical newspapers and classical commentaries) remains limited. It also presents a greater challenge, as historical texts are often noisy due to storage conditions, OCR extraction, and spelling variation. In this paper, we conduct an empirical evaluation comparing different Instruct variants of open-access and open-source LLMs, using prompt engineering in both deductive (with guidelines) and inductive (without guidelines) settings, against fully supervised benchmarks. In addition, we study how the interaction between the Instruct model and the user impacts entity prediction. We conduct reproducible experiments using an easy-to-implement mechanism on publicly available historical collections covering three languages (English, French, and German), with code-switching into Ancient Greek, and four open Instruct models. The results show that Instruct models encounter multiple difficulties in handling the noisy input documents and score lower than fine-tuned dedicated NER systems, yet their predictions provide entities that can be used in further tagging processes by human annotators.
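The abstract contrasts deductive prompting (annotation guidelines included in the prompt) with inductive prompting (task description and examples only). The sketch below illustrates that distinction in Python; the guideline text, tag set, few-shot example, and sentence are illustrative assumptions, not the authors' actual prompts or data.

```python
# Minimal sketch of the two prompting styles for historical NER:
# "deductive" prompts prepend annotation guidelines, "inductive" prompts
# rely only on a task description plus a few-shot example. All strings
# below are hypothetical placeholders.

GUIDELINES = (
    "Tag person (PER), location (LOC), and organisation (ORG) mentions. "
    "Keep OCR errors and historical spellings exactly as they appear."
)

FEW_SHOT = [
    ("Le maire de Lyon a reçu M. Dupont hier.",
     "[LOC: Lyon] [PER: M. Dupont]"),
]


def build_prompt(sentence: str, deductive: bool) -> str:
    """Assemble an NER prompt for an open Instruct model."""
    parts = ["Extract the named entities from the sentence below."]
    if deductive:
        # Deductive setting: the annotation guidelines are part of the prompt.
        parts.append("Guidelines: " + GUIDELINES)
    for text, labels in FEW_SHOT:
        parts.append(f"Sentence: {text}\nEntities: {labels}")
    parts.append(f"Sentence: {sentence}\nEntities:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    # A hypothetical OCR-noisy line, illustrating the kind of input discussed.
    noisy_ocr_line = "Tne Gazette de Par1s reported from Berlm on Tuesday."
    print(build_prompt(noisy_ocr_line, deductive=True))
    print("---")
    print(build_prompt(noisy_ocr_line, deductive=False))
```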