Named Entity Recognition

Abstract

Neural networks (NNs) have become the state of the art in many machine learning applications, such as image and sound processing (LeCun et al., 2015) and natural language processing (Young et al., 2017; Linggard et al., 2012). However, the success of NNs remains dependent on the availability of large labelled datasets, which are scarce in domains such as electronic health records (EHRs). With so little data, NNs are unlikely to extract the information hidden in these records with practical accuracy. In this study, we develop an approach that addresses this problem for named entity recognition, obtaining an F1 score of 94.6 in the I2B2 2009 Medical Extraction Challenge (Uzuner et al., 2010), 4.3 above the architecture that won the competition. To achieve this, we bootstrap our NN models through transfer learning: we pretrain word embeddings on a secondary task performed over a large pool of unannotated EHRs and use the resulting embeddings as the foundation of a range of NN architectures. Beyond the official I2B2 challenge, we further achieve an F1 score of 82.4 on extracting relationships between medical terms using attention-based seq2seq models bootstrapped in the same manner.
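The bootstrapping step described above — pretraining word embeddings on unannotated text and seeding a downstream model with them — can be sketched as follows. This is a minimal illustration with hypothetical names (`build_embedding_matrix`, the toy vocabulary and vectors are ours, not the paper's): words seen during pretraining reuse their pretrained vector, while out-of-vocabulary words fall back to a small random initialisation before fine-tuning.

```python
import numpy as np

def build_embedding_matrix(vocab, pretrained, dim, seed=0):
    """Seed an embedding matrix from pretrained word vectors.

    Rows for words found in `pretrained` copy the pretrained vector;
    rows for unseen words are drawn from a small Gaussian, to be
    refined during supervised training. (Illustrative sketch, not the
    authors' exact procedure.)
    """
    rng = np.random.default_rng(seed)
    matrix = np.empty((len(vocab), dim), dtype=np.float32)
    for idx, word in enumerate(vocab):
        vec = pretrained.get(word)
        matrix[idx] = vec if vec is not None else rng.normal(0.0, 0.1, dim)
    return matrix

# Toy example: two "pretrained" clinical-term vectors and one OOV token.
pretrained = {
    "fever": np.ones(4, dtype=np.float32),
    "aspirin": np.full(4, 2.0, dtype=np.float32),
}
vocab = ["fever", "aspirin", "<unk>"]
emb = build_embedding_matrix(vocab, pretrained, dim=4)
```

The resulting matrix would typically initialise the embedding layer of an NER tagger (e.g. a BiLSTM), so that supervised training starts from representations learned on the large unannotated EHR pool rather than from scratch.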

Keywords:
Neural Networks, NLP, Named entity recognition, Electronic health records, Transfer learning, LSTM

Article history:
Received 20 February 2019
Received in revised form 29 July 2019
Accepted 29 August 2019
Available online 6 September 2019

DOI: https://doi.org/10.1016/j.neunet.2019.08.032

Neural Networks, 2020. IF 5.8