Natural language processing of learners’ evaluations of attendings to identify professionalism lapses

Evaluation & the Health Professions February 24, 2023

Overview

Unprofessional faculty behaviors negatively impact the well-being of trainees yet are infrequently reported through established reporting systems. Manual review of narrative faculty evaluations provides an additional avenue for identifying unprofessional behavior, but it is time- and resource-intensive and therefore of limited value for identifying and remediating faculty with professionalism concerns. Natural language processing (NLP) techniques may provide a mechanism for streamlining manual review processes to identify faculty professionalism lapses. In this retrospective cohort study of 15,432 narrative evaluations of medical faculty by medical trainees, we identified professionalism lapses using automated analysis of the text of faculty evaluations. We used multiple NLP approaches to develop and validate several classification models, which were evaluated primarily on positive predictive value (PPV) and secondarily on calibration. An NLP model combining sentiment analysis (quantifying the subjectivity of the text) with keywords (via an ensemble technique) had the best overall performance, with a PPV of 49% (CI 38%-59%). These findings highlight how NLP can be used to screen narrative evaluations of faculty to identify unprofessional faculty behaviors. Incorporating NLP into faculty review workflows enables a more focused manual review of comments, providing a supplemental mechanism to identify faculty professionalism lapses.
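To illustrate the general idea of an ensemble combining a keyword screen with a subjectivity score, and how PPV is computed for such a screen, here is a minimal sketch. This is not the study's actual model: the lexicons, threshold, and flagging rule below are invented for illustration, and the toy subjectivity score stands in for a real sentiment-analysis library.

```python
# Hypothetical sketch: flag a comment for manual review if it contains a
# keyword from a concern lexicon OR its subjectivity score exceeds a
# threshold. All lexicons and thresholds are illustrative only.

CONCERN_KEYWORDS = {"dismissive", "belittled", "unprofessional", "yelled"}
SUBJECTIVE_WORDS = {"terrible", "awful", "rude", "worst", "horrible"}

def subjectivity(text: str) -> float:
    """Toy subjectivity score: fraction of tokens found in a subjective-word lexicon."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in SUBJECTIVE_WORDS for t in tokens) / len(tokens)

def flag_for_review(text: str, threshold: float = 0.15) -> bool:
    """Ensemble rule: keyword match OR high subjectivity triggers review."""
    tokens = set(text.lower().split())
    has_keyword = bool(tokens & CONCERN_KEYWORDS)
    return has_keyword or subjectivity(text) > threshold

def positive_predictive_value(flags, labels) -> float:
    """PPV = true positives / all comments the model flagged."""
    tp = sum(f and l for f, l in zip(flags, labels))
    fp = sum(f and not l for f, l in zip(flags, labels))
    return tp / (tp + fp) if (tp + fp) else float("nan")
```

In a workflow like the one described above, only the flagged comments would go to human reviewers, and PPV measures what fraction of those flags are true professionalism lapses.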

Sponsors

University of Pennsylvania Department of General Internal Medicine Sam Martin Institutional Award

Authors

Janae K Heath, Caitlin B Clancy, William Pluta, Gary E Weissman, Ursula Anderson, Jennifer R Kogan, C Jessica Dine, Judy A Shea