Background
In varied educational settings, narrative evaluations have revealed systematic and deleterious differences in the language used to describe women and those underrepresented in their fields. In medicine, a limited number of qualitative studies have shown differences in narrative language by gender and underrepresented minority (URM) status.
Objective
To identify and enumerate text descriptors in a database of medical student evaluations using natural language processing, and to determine whether descriptions differ by gender and URM status.
Design
An observational study of core clerkship evaluations of third-year medical students, including data on student gender, URM status, clerkship grade, and specialty.
Participants
A total of 87,922 clerkship evaluations from core clinical rotations at two medical schools in different geographic areas.
Main measures
We employed natural language processing to identify differences in the text of evaluations for women compared to men and for URM compared to non-URM students.
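As a rough illustration only, and not the study's actual pipeline, the sketch below shows one common way such a comparison can be implemented: tally word frequencies per group and test each word's rate of use with a chi-square contingency test. The function names and the toy evaluation texts are hypothetical.

```python
# Minimal sketch of a per-word frequency comparison between two groups of
# narrative evaluations. Assumes each group is a list of free-text strings;
# the chi-square test here stands in for whatever statistical method the
# study actually used.
import re
from collections import Counter

from scipy.stats import chi2_contingency

def word_counts(evaluations):
    """Tokenize each evaluation into lowercase words and tally them."""
    counts = Counter()
    for text in evaluations:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

def compare_word(word, counts_a, total_a, counts_b, total_b):
    """Chi-square test of one word's frequency in group A vs. group B."""
    table = [
        [counts_a[word], total_a - counts_a[word]],
        [counts_b[word], total_b - counts_b[word]],
    ]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# Hypothetical usage with toy data:
women = ["She was a lovely and dependable student.", "Lovely presence on rounds."]
men = ["He took a scientific approach.", "Dependable and scientific in the workup."]

counts_w, counts_m = word_counts(women), word_counts(men)
total_w, total_m = sum(counts_w.values()), sum(counts_m.values())

for word in ["lovely", "scientific", "dependable"]:
    p = compare_word(word, counts_w, total_w, counts_m, total_m)
    print(f"{word}: p = {p:.3f}")
```

In practice, a study at this scale would also correct the per-word p-values for multiple comparisons, since thousands of words are tested simultaneously.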
Key results
We found that of the ten most common words, such as "energetic" and "dependable," none differed by gender or URM status. Of the 37 words that differed by gender, 62% represented personal attributes, such as "lovely" appearing more frequently in evaluations of women (p < 0.001), while 19% represented competency-related behaviors, such as "scientific" appearing more frequently in evaluations of men (p < 0.001). Of the 53 words that differed by URM status, 30% represented personal attributes, such as "pleasant" appearing more frequently in evaluations of URM students (p < 0.001), and 28% represented competency-related behaviors, such as "knowledgeable" appearing more frequently in evaluations of non-URM students (p < 0.001).
Conclusions
Many words and phrases reflected students' personal attributes rather than competency-related behaviors, suggesting a gap in implementing competency-based evaluation of students. We observed significant differences in narrative evaluations associated with gender and URM status, even among students receiving the same grade. These findings raise concern for implicit bias in narrative evaluation, consistent with prior studies, and suggest opportunities for improvement.