UW-BHI at MEDIQA 2019: An Analysis of Representation Methods for Medical Natural Language Inference

Abstract

Recent advances in distributed language modeling have led to large performance increases on a variety of natural language processing (NLP) tasks. However, it is not well understood how these methods may be augmented by knowledge-based approaches. This paper compares the performance and internal representations of an Enhanced Sequential Inference Model (ESIM) across three experimental conditions that differ only in the representation method: Bidirectional Encoder Representations from Transformers (BERT), Embeddings of Semantic Predications (ESP), or Cui2Vec. The methods were evaluated on the Medical Natural Language Inference (MedNLI) subtask of the MEDIQA 2019 shared task. Because this task relies heavily on semantic understanding, it served as a suitable evaluation set for comparing these representation methods.
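The experimental design above can be sketched as follows: the downstream inference model stays fixed while only the token-embedding source varies between conditions. This is a minimal illustrative sketch, not the paper's implementation; the embedding tables and vectors are hypothetical stand-ins for the actual BERT, ESP, and Cui2Vec representations.

```python
import numpy as np

# Hypothetical pre-trained embedding tables keyed by representation method.
# (Toy 2-dimensional vectors; the real models produce much larger vectors.)
EMBEDDINGS = {
    "bert":    {"aspirin": np.array([0.1, 0.3]), "pain": np.array([0.2, 0.1])},
    "esp":     {"aspirin": np.array([0.5, 0.0]), "pain": np.array([0.4, 0.2])},
    "cui2vec": {"aspirin": np.array([0.0, 0.9]), "pain": np.array([0.3, 0.3])},
}

def embed(tokens, method):
    """Look up token vectors from the chosen representation method.

    Unknown tokens fall back to a zero vector, a common convention.
    """
    table = EMBEDDINGS[method]
    dim = len(next(iter(table.values())))
    return np.stack([table.get(t, np.zeros(dim)) for t in tokens])

# The inference model (ESIM in the paper) consumes the embedded sequence;
# only the embedding lookup changes between the three conditions.
for method in ("bert", "esp", "cui2vec"):
    premise = embed(["aspirin", "pain"], method)
    print(method, premise.shape)
```

Holding the ESIM architecture constant in this way isolates the contribution of each representation method to task performance.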

Publication
Proceedings of the 18th BioNLP Workshop and Shared Task
Jason A. Thomas