Enhancing Clinical Named Entity Recognition via Fine-Tuned BERT and Dictionary-Infused Retrieval-Augmented Generation
Abstract
Clinical notes often contain unstructured text filled with abbreviations, non-standard terminology, and inconsistent phrasing, which pose significant challenges for automated medical information extraction. Named Entity Recognition (NER) plays a crucial role in structuring this data by identifying and categorizing key clinical entities such as symptoms, medications, and diagnoses. However, traditional and even transformer-based NER models often struggle with ambiguity and fail to produce clinically interpretable outputs. In this study, we present a hybrid two-stage framework that enhances medical NER by integrating a fine-tuned BERT model for initial entity extraction with a Dictionary-Infused Retrieval-Augmented Generation (DiRAG) module for terminology normalization. Our approach addresses two critical limitations in current clinical NER systems: lack of contextual clarity and inconsistent standardization of medical terms. The DiRAG module combines semantic retrieval from a UMLS-based vector database with lexical matching and prompt-based generation using a large language model, ensuring precise and explainable normalization of ambiguous entities. The fine-tuned BERT model achieved an F1 score of 0.708 on the MACCROBAT dataset, outperforming several domain-specific baselines, including BioBERT and ClinicalBERT. The integration of the DiRAG module further improved the interpretability and clinical relevance of the extracted entities. Through qualitative case studies, we demonstrate that our framework not only enhances clarity but also mitigates common issues such as abbreviation ambiguity and terminology inconsistency.
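The dictionary-infused normalization step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the term dictionary below is a tiny hypothetical stand-in for UMLS, and `difflib` fuzzy matching stands in for the semantic retrieval over a vector database and the LLM prompt-based generation stage, both of which are omitted here.

```python
import difflib

# Hypothetical miniature terminology dictionary standing in for UMLS;
# keys are surface forms (including abbreviations), values are canonical concepts.
TERM_DICT = {
    "mi": "Myocardial infarction",
    "myocardial infarction": "Myocardial infarction",
    "htn": "Hypertension",
    "hypertension": "Hypertension",
    "sob": "Shortness of breath",
}

def normalize_entity(mention: str) -> str:
    """Sketch of two-step normalization: exact lexical lookup first,
    then fuzzy string matching as a crude stand-in for semantic retrieval.
    (DiRAG additionally queries a UMLS-based vector database and uses
    prompt-based LLM generation; those stages are not modeled here.)"""
    key = mention.strip().lower()
    if key in TERM_DICT:               # exact lexical match
        return TERM_DICT[key]
    close = difflib.get_close_matches(key, TERM_DICT.keys(), n=1, cutoff=0.6)
    if close:                          # approximate fallback for misspellings
        return TERM_DICT[close[0]]
    return mention                     # leave unresolved mentions unchanged

print(normalize_entity("MI"))            # abbreviation resolved by exact lookup
print(normalize_entity("hypertention"))  # misspelling caught by fuzzy matching
```

In the full framework, the fuzzy fallback would instead retrieve candidate concepts by embedding similarity, and an LLM would select and justify the normalization, which is what makes the output explainable.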