Volume 16, Issue 6, 2015
Table of Contents
Masthead
Educational Research and Practice
Emergency Medicine: On the Frontlines of Medical Education Transformation
Emergency medicine (EM) has always been on the frontlines of healthcare in the United States. I experienced this reality firsthand as a young general medical officer assigned to an emergency department (ED) in a small naval hospital in the 1980s. For decades the ED has been the only site where patients cannot legally be denied care. Despite increased insurance coverage for millions of Americans as a result of the Affordable Care Act, ED directors reported an increase in patient volumes in a recent survey.1 EDs care for patients from across the socioeconomic spectrum suffering from a wide range of clinical conditions. As a result, the ED remains one of the few components of the American healthcare system where social justice is enacted on a regular basis. Constant turbulence in the healthcare system, major changes in healthcare delivery, technological advances, and shifting demographic trends require that EM continually adapt and evolve as a discipline in this complex environment.
Education Scholarship and its Impact on Emergency Medicine Education
Emergency medicine (EM) education is becoming increasingly challenging as a result of changes to North American medical education and the growing complexity of EM practice. Education scholarship (ES), which includes both research and innovation, provides a process for developing solutions to these challenges. ES is informed by theory, principles, and best practices; is peer reviewed; and is disseminated and archived for others to use. Digital technologies have improved the discovery of work that informs ES, broadened the scope and timing of peer review, and provided new platforms for the dissemination and archiving of innovations. This editorial reviews key steps in raising an education innovation to the level of scholarship. It also discusses important areas for EM education scholars to address, including the delivery of competency-based medical education programs, the impact of social media on learning, and the redesign of continuing professional development.
Morbidity and Mortality Conference in Emergency Medicine Residencies and the Culture of Safety
Introduction: Morbidity and mortality conferences (M+M) are a traditional part of residency training and are mandated by the Accreditation Council for Graduate Medical Education. This study’s objective was to determine the goals, structure, and prevalence of practices that foster strong safety cultures in the M+Ms of U.S. emergency medicine (EM) residency programs.
Methods: The authors conducted a national survey of U.S. EM residency program directors. The survey instrument evaluated five domains of M+M (Organization and Infrastructure; Case Finding; Case Selection; Presentation; and Follow up) based on the validated Agency for Healthcare Research & Quality Safety Culture survey.
Results: There was an 80% (151/188) response rate. The primary objectives of M+M were discussing adverse outcomes (53/151, 35%), identifying systems errors (47/151, 31%) and identifying cognitive errors (26/151, 17%). Fifty-six percent (84/151) of institutions have anonymous case submission, with 10% (15/151) maintaining complete anonymity during the presentation and 21% (31/151) maintaining partial anonymity. Forty-seven percent (71/151) of programs report a formal process to follow up on systems issues identified at M+M. Forty-four percent (67/151) of programs report regular debriefing with residents who have had their cases presented.
Conclusion: The structure and goals of M+Ms in EM residencies vary widely. Many programs lack features of M+M that promote a non-punitive response to error, such as anonymity. Other programs lack features that support strong safety cultures, such as following up on systems issues or reporting back to residents on improvements. Further research is warranted to determine if M+M structure is related to patient safety culture in residency programs.
Are Live Ultrasound Models Replaceable? Traditional versus Simulated Education Module for FAST Exam
Introduction: The focused assessment with sonography for trauma (FAST) is a commonly used and life-saving tool in the initial assessment of trauma patients. The recommended emergency medicine (EM) curriculum includes ultrasound, and studies show the additional utility of ultrasound training for medical students. EM clerkships vary and often do not contain formal ultrasound instruction. Time constraints make facilitating lectures and hands-on ultrasound learning challenging. These limitations on didactics call for the development and inclusion of novel educational strategies, such as simulation. The objective of this study was to compare test scores, survey responses, and ultrasound performance between medical students trained on an ultrasound simulator and those trained via the traditional hands-on format with a live model.
Methods: This was a prospective, blinded, controlled educational study of EM clerkship medical students. After all students received a standardized lecture with pictorial demonstration of image acquisition, they were randomized into two groups: a control group trained via the traditional method of practice on a human model, and an intervention group trained via practice on an ultrasound simulator. Before and after the educational activity, participants were tested and surveyed on the indications for and interpretation of FAST, as well as their training in and confidence with image interpretation and acquisition. Evaluation of FAST skills was performed on a human model to emulate patient care, and practical skills were scored via an objective structured clinical examination (OSCE) with a critical action checklist.
Results: There was no significant difference between the control group (N=54) and intervention group (N=39) on pretest scores, prior ultrasound training/education, or ultrasound comfort level in general or on FAST. All students (N=93) showed significant improvement from pre- to post-test scores and significant improvement in comfort level using ultrasound in general and on FAST (p<0.001). There was no significant difference between groups on OSCE scores of FAST on a live model. Overall, no differences were demonstrated between groups trained on human models versus the simulator.
Discussion: There was no difference between groups in knowledge-based ultrasound test scores, survey of comfort levels with ultrasound, or students’ abilities to perform and interpret FAST on human models.
Conclusion: These findings suggest that an ultrasound simulator is a suitable alternative method for ultrasound education. Additional uses of ultrasound simulation should be explored in the future.
Teaching and Assessing ED Handoffs: A Qualitative Study Exploring Resident, Attending, and Nurse Perceptions
Introduction: The Accreditation Council for Graduate Medical Education requires that residency programs ensure resident competency in performing safe, effective handoffs. Understanding resident, attending, and nurse perceptions of the key elements of a safe and effective emergency department (ED) handoff is a crucial step to developing feasible, acceptable educational interventions to teach and assess this fundamental competency. The aim of our study was to identify the essential themes of ED-based handoffs and to explore the key cultural and interprofessional themes that may be barriers to developing and implementing successful ED-based educational handoff interventions.
Methods: Using a grounded theory approach and a constructivist/interpretivist research paradigm, we analyzed data from three primary focus groups (FGs) and one confirmatory FG at an urban, academic ED. FG protocols were developed using open-ended questions that sought to understand what participants felt were the crucial elements of ED handoffs. ED residents, attendings, a physician assistant, and nurses participated in the FGs. FGs were observed, documented by hand, audio-recorded, and subsequently transcribed. We analyzed the data using an iterative process of theme and subtheme identification. Saturation was reached during the third FG, and the fourth, confirmatory group reinforced the identified themes. Two team members analyzed the transcripts separately and identified the same major themes.
Results: ED providers identified that crucial elements of ED handoff include the following: 1) Culture (provider buy-in, openness to change, shared expectations of sign-out goals); 2) Time (brevity, interruptions, waiting); 3) Environment (physical location, ED factors); 4) Process (standardization, information order, tools).
Conclusion: Key participants in the ED handoff process perceive that the crucial elements of intershift handoffs involve the themes of culture, time, environment, and process. Attention to these themes may improve the feasibility and acceptance of educational interventions that aim to teach and assess handoff competency.
The Impact of Medical Student Participation in Emergency Medicine Patient Care on Departmental Press Ganey Scores
Introduction: Press Ganey (PG) scores are used by public entities to gauge the quality of patient care at medical facilities in the United States. Academic health centers (AHCs) are charged with educating the next generation of doctors but rely heavily on PG scores for their business operations. AHCs need to know what impact medical student involvement has on patient care and their PG scores. Purpose: We sought to identify the impact students have on emergency department (ED) PG scores related to the overall visit and the treating physician’s performance.
Methods: This was a retrospective, observational cohort study of discharged ED patients who completed PG satisfaction surveys at one academic, and one community-based ED. Outcomes were responses to questions about the overall visit assessment and doctor’s care, measured on a five-point scale. We compared the distribution of responses for each question through proportions with 95% confidence intervals (CIs) stratified by medical student participation. For each question, we constructed a multivariable ordinal logistic regression model including medical student involvement and other independent variables known to affect PG scores.
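A minimal sketch of the kind of multivariable ordinal logistic regression described above, using Python's statsmodels; the file name, column names, and covariates are hypothetical stand-ins, not the study's actual variables.

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    # Hypothetical data: one row per returned survey; overall_rating is the
    # five-point PG response and student_involved is a 0/1 indicator.
    df = pd.read_csv("pg_surveys.csv")
    df["overall_rating"] = pd.Categorical(
        df["overall_rating"], categories=[1, 2, 3, 4, 5], ordered=True)

    exog = df[["student_involved", "patient_age", "wait_time_min"]]
    fit = OrderedModel(df["overall_rating"], exog,
                       distr="logit").fit(method="bfgs", disp=False)

    k = exog.shape[1]
    print(np.exp(fit.params[:k]))      # odds ratios for the predictors
    print(np.exp(fit.conf_int()[:k]))  # 95% CIs on the odds-ratio scale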
Results: We analyzed 2,753 encounters, of which 259 (9.4%) had medical student involvement. For all questions, there were no appreciable differences in patient responses when stratifying by medical student involvement. In regression models, medical student involvement was not associated with PG score for any outcome, including overall rating of care (odds ratio [OR] 1.10, 95% CI [0.90-1.34]) or likelihood of recommending our EDs (OR 1.07, 95% CI [0.86-1.32]). Findings were similar when each ED was analyzed individually.
Conclusion: We found that medical student involvement in patient care did not adversely impact ED PG scores in discharged patients. Neither overall scores nor physician-specific scores were impacted. Results were similar at both the academic medical center and the community teaching hospital at our institution.
What is the Prevalence and Success of Remediation of Emergency Medicine Residents?
Introduction: The primary objective of this study was to determine the prevalence of remediation in emergency medicine (EM), the competency domains in which residents are remediated, and the length and success rates of remediation.
Methods: We developed the survey in SurveyMonkey™ with attention to content and response-process validity. EM program directors reported how many residents had been placed on remediation in the previous three years. Details regarding each remediation were collected, including indication, length, and outcome. We reported descriptive data and estimated a multinomial logistic regression model.
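As an illustration of the multinomial logistic regression mentioned here, the sketch below uses statsmodels' MNLogit; the outcome labels and predictor names are hypothetical.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data: one row per remediated resident, with outcome labels
    # "successful", "ongoing", or "unsuccessful".
    df = pd.read_csv("remediation.csv")

    X = sm.add_constant(df[["pgy_at_identification", "months_in_remediation",
                            "pbli_concern", "professionalism_concern"]])
    fit = sm.MNLogit(df["outcome"], X).fit(disp=False)
    print(fit.summary())   # one coefficient set per non-reference outcome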
Results: We obtained 126/158 responses (79.7%). Ninety percent of programs had at least one resident on remediation in the previous three years. The prevalence of remediation was 4.4%. Indications for remediation ranged from difficulty with one core competency to difficulty with all six (mean 1.9). The most common were medical knowledge (MK) (63.1% of residents), patient care (46.6%), and professionalism (31.5%). Mean length of remediation was eight months (range 1-36 months). Remediation was successful for 59.9% of remediated residents, and 31.3% were reported as still in ongoing remediation. In 8.7%, remediation was deemed “unsuccessful.” Training year at the time of identification for remediation (post-graduate year [PGY] 1), longer time spent in remediation, and concerns with practice-based learning and improvement (PBLI) and professionalism had statistically significant associations with unsuccessful remediation.
Conclusion: Remediation in EM residencies is common, with the most common areas being MK and patient care. The majority of residents are successfully remediated. PGY level, length of time spent in remediation, and the remediation of the competencies of PBLI and professionalism were associated with unsuccessful remediation.
- 1 supplemental PDF
Results from the First Year of Implementation of CONSULT: Consultation with Novel Methods and Simulation for UME Longitudinal Training
Introduction: An important area of communication in healthcare is the consultation. Existing literature suggests that formal training in consultation communication is lacking. We aimed to conduct a targeted needs assessment of third-year students on their experience calling consultations, and based on these results, develop, pilot, and evaluate the effectiveness of a consultation curriculum for different learner levels that can be implemented as a longitudinal curriculum.
Methods: Baseline needs assessment data were gathered using a survey completed by third-year students at the conclusion of the clinical clerkships. The survey assessed students’ knowledge of the standardized consultation, experience and comfort calling consultations, and previous instruction received on consultation communication. Implementation of the consultation curriculum began the following academic year. Second-year students were introduced to Kessler’s 5 Cs consultation model through a didactic session consisting of a lecture and the viewing of “trigger” videos illustrating standardized and informal consults, followed by reflection and discussion. Curriculum effectiveness was assessed through pre- and post-curriculum surveys that measured knowledge of and comfort with the consultation process. Fourth-year students participated in a consultation curriculum that provided instruction on the 5 Cs model and allowed for continued practice of consultation skills through simulation during the Emergency Medicine clerkship. Proficiency in consult communication in this cohort was assessed using two assessment tools, the Global Rating Scale and the 5 Cs Checklist.
Results: The targeted needs assessment of third-year students indicated that 93% of students had called a consultation during their clerkships, but only 24% had received feedback. Post-curriculum, second-year students identified more components of the 5 Cs model (4.04 vs. 4.81, p<0.001) and reported greater comfort with the consultation process (0% vs. 69%, p<0.001). Post-curriculum, fourth-year students scored higher in all criteria measuring consultation effectiveness (p<0.001 for all) and included more necessary items in simulated consultations (62% vs. 77%, p<0.001).
Conclusion: While third-year medical students reported calling consultations, few felt comfortable and formal training was lacking. A curriculum in consult communication for different levels of learners can improve knowledge and comfort prior to clinical clerkships and improve consultation skills prior to residency training.
- 4 supplemental files
Does the Concept of the “Flipped Classroom” Extend to the Emergency Medicine Clinical Clerkship?
Introduction: Linking educational objectives and clinical learning during clerkships can be difficult. Clinical shifts during emergency medicine (EM) clerkships provide a wide variety of experiences, some of which may not be relevant to recommended educational objectives. Students can be directed to standardize their clinical experiences, and this improves performance on examinations. We hypothesized that applying a “flipped classroom” model to the clinical clerkship would improve performance on multiple-choice testing when compared to standard learning.
Methods: Students at two institutions were randomized to complete two of four selected EM clerkship topics in a “flipped fashion,” and two others in a standard fashion. For flipped topics, students were directed to complete chief complaint-based asynchronous modules prior to a shift, during which they were directed to focus on the chief complaint. For the other two topics, modules were to be performed at the students’ discretion, and shifts would not have a theme. At the end of the four-week clerkship, a 40-question multiple-choice examination was administered with 10 questions per topic. We compared performance on flipped topics with those performed in standard fashion. Students were surveyed on perceived effectiveness, ability to follow the protocol, and willingness of preceptors to allow a chief-complaint focus.
Results: Sixty-nine students participated; examination scores for 56 were available for analysis. For the primary outcome, no difference was seen between the flipped and standard methods (p=0.494). A mixed-model approach showed no effect of flipped status, protocol adherence, or site of rotation on the primary outcome of exam scores. Students rated the concept of the flipped clerkship highly (3.48/5). Almost one third (31.1%) of students stated that they were unable to adhere to the protocol.
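The mixed-model analysis mentioned above could be approximated as follows with statsmodels; the long-format layout and column names are assumptions, not taken from the study.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per student-topic pair, with the
    # 10-question topic subscore, flipped status (0/1), self-reported
    # adherence, and rotation site.
    df = pd.read_csv("flipped_scores.csv")

    # A random intercept per student accounts for the four repeated topic
    # scores contributed by each student.
    fit = smf.mixedlm("topic_score ~ flipped + adhered + C(site)",
                      data=df, groups=df["student_id"]).fit()
    print(fit.summary())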
Conclusion: Preparation for a clinical shift with pre-assigned, web-based learning modules followed by an attempt at chief-complaint-focused learning during a shift did not result in improvements in performance on a multiple-choice assessment of knowledge; however, one third of participants did not adhere strictly to the protocol. Future investigations should ensure performance of pre-assigned learning as well as clinical experiences, and consider alternate measures of knowledge.
Coordinating a Team Response to Behavioral Emergencies in the Emergency Department: A Simulation-Enhanced Interprofessional Curriculum
Introduction: While treating potentially violent patients in the emergency department (ED), both patients and staff may be subject to unintentional injury. Emergency healthcare providers are at the greatest risk of experiencing physical and verbal assault from patients. Preliminary studies have shown that a team-based approach with targeted staff training has significant positive outcomes in mitigating violence in healthcare settings. Staff attitudes toward patient aggression have also been linked to workplace safety, but current literature suggests that providers experience fear and anxiety while caring for potentially violent patients. The objectives of the study were (1) to develop an interprofessional curriculum focusing on improving teamwork and staff attitudes toward patient violence using simulation-enhanced education for ED staff, and (2) to assess attitudes towards patient aggression both at pre- and post-curriculum implementation stages using a survey-based study design.
Methods: Formal roles and responsibilities for each member of the care team, including positioning during restraint placement, were predefined in conjunction with ED leadership. Emergency medicine residents, nurses and hospital police officers were assigned to interprofessional teams. The curriculum started with an introductory lecture discussing de-escalation techniques and restraint placement as well as core tenets of interprofessional collaboration. Next, we conducted two simulation scenarios using standardized participants (SPs) and structured debriefing. The study consisted of a survey-based design comparing pre- and post-intervention responses via a paired Student t-test to assess changes in staff attitudes. We used the validated Management of Aggression and Violence Attitude Scale (MAVAS) consisting of 30 Likert-scale questions grouped into four themed constructs.
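A minimal sketch of the paired pre/post comparison described here, assuming hypothetical column names for the four MAVAS construct scores.

    import pandas as pd
    from scipy import stats

    # Hypothetical data: one row per staff member with paired pre/post mean
    # scores for each MAVAS construct.
    df = pd.read_csv("mavas_paired.csv")

    for construct in ["internal", "external", "situational", "management"]:
        t, p = stats.ttest_rel(df[f"{construct}_pre"], df[f"{construct}_post"])
        print(f"{construct}: t={t:.2f}, p={p:.4f}")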
Results: One hundred sixty-two ED staff members completed the course, with >95% staff participation, generating a total of 106 paired surveys. Constructs for internal/biomedical factors, external/staff factors, and situational/interactional perspectives on patient aggression significantly improved (p<0.0001, p<0.002, and p<0.0001, respectively). Staff attitudes toward management of patient aggression did not significantly change (p=0.542). Multiple quality improvement initiatives were successfully implemented, including the creation of an interprofessional crisis management alert and response protocol. Staff members described appreciation for our simulation-based curriculum and welcomed the interaction with SPs during their training.
Conclusion: A structured simulation-enhanced interprofessional intervention was successful in improving multiple facets of ED staff attitudes toward behavioral emergency care.
- 1 supplemental PDF
Direct Observation Assessment of Milestones: Problems with Reliability
Introduction: Emergency medicine (EM) milestones are used to assess residents’ progress. While some milestone validity evidence exists, there is a lack of standardized tools available to reliably assess residents. Inherent to this is a concern that we may not be truly measuring what we intend to assess. The purpose of this study was to design a direct observation milestone assessment instrument supported by validity and reliability evidence. In addition, such a tool would further lend validity evidence to the EM milestones by demonstrating their accurate measurement.
Methods: This was a multi-center, prospective, observational validity study conducted at eight institutions. The Critical Care Direct Observation Tool (CDOT) was created to assess EM residents during resuscitations. The tool was designed using a modified Delphi method focused on content, response-process, and internal-structure validity. Paying special attention to content validity, an expert panel developed the CDOT while maintaining the EM milestone wording. We built response-process evidence and internal consistency by piloting and revising the instrument. Raters were faculty who routinely assess residents on the milestones; all completed a brief training video on use of the instrument. Raters used the CDOT to assess simulated videos of three residents at different stages of training in a critical care scenario. We measured reliability using Fleiss’ kappa and intraclass correlations.
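For reference, Fleiss' kappa for a set of raters can be computed as below with statsmodels; the input file is a hypothetical subject-by-rater matrix, not the study's data.

    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    # Hypothetical ratings: rows are rated items (e.g., CDOT items x videos),
    # columns are raters, and values are the assigned milestone levels.
    ratings = np.loadtxt("cdot_ratings.csv", delimiter=",", dtype=int)

    # Convert subject-by-rater data into subject-by-category counts, the
    # input format fleiss_kappa expects.
    table, _ = aggregate_raters(ratings)
    print("Fleiss' kappa:", fleiss_kappa(table))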
Results: Two versions of the CDOT were used: one used the milestone levels as global rating scales with anchors, and the second reflected a current trend of a checklist response system. Although the raters who used the CDOT routinely rate residents in their practice, they did not score the residents’ performances in the videos comparably, which led to poor reliability. The Fleiss’ kappa of each of the items measured on both versions of the CDOT was near zero.
Conclusion: The validity and reliability of the current EM milestone assessment tools have yet to be determined. This study is a rigorous attempt to collect validity evidence in the development of a direct observation assessment instrument. However, despite strict attention to validity evidence, inter-rater reliability was low. The potential sources of reducible variance include rater- and instrument-based error. Based on this study, there may be concerns for the reliability of other EM milestone assessment tools that are currently in use.
- 2 supplemental files
Ready for Discharge? A Survey of Discharge Transition-of-Care Education and Evaluation in Emergency Medicine Residency Programs
This study aimed to assess the current education and practices of emergency medicine (EM) residents, as perceived by EM program directors, to determine whether there are deficits in resident discharge handoff training. The survey study was guided by the Kern model for medical curriculum development. A six-member Council of Emergency Medicine Residency Directors (CORD) Transitions of Care task force of EM physicians performed these steps and constructed a survey. The survey was distributed to residency program directors via the CORD listserv and/or direct contact. There were 119 responses to the survey, which were collected using an online survey tool; over 71% of the 167 Accreditation Council for Graduate Medical Education (ACGME)-accredited EM residency programs were represented. Of those responding, 42.9% of programs reported formal training regarding discharges during initial orientation and 5.9% reported a structured curriculum outside of orientation. A majority (73.9%) of programs reported that EM residents were not routinely evaluated on their discharge proficiency. Despite ACGME requirements for a formal handoff curriculum and evaluation, many programs neither provide a formal curriculum on the discharge transition of care nor evaluate EM residents on their discharge proficiency.
Combined Versus Detailed Evaluation Components in Medical Student Global Rating Indexes
Introduction: The objectives were to determine whether there is any correlation between the 10 individual components of the global rating index on an emergency medicine (EM) student clerkship evaluation form and, if there is correlation, to determine whether a weighted average of highly correlated components loses predictive value for the final clerkship grade.
Methods: This study reviewed medical student evaluations collected over two years of a required fourth-year rotation in EM. Evaluation cards, comprising a detailed 10-part evaluation, were completed after each shift. We used a correlation matrix between evaluation category average scores, using Spearman’s rho, to determine whether grades on any of the 10 items on the evaluation form were correlated.
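The correlation-matrix step could look like the following in pandas, assuming a hypothetical table with one column per evaluation component.

    import pandas as pd

    # Hypothetical data: one row per student, one column per component with
    # that student's average score on the 10-part evaluation card.
    scores = pd.read_csv("eval_components.csv")

    rho = scores.corr(method="spearman")              # 10 x 10 matrix
    strong = (rho.abs() > 0.80) & (rho.abs() < 1.0)   # off-diagonal pairs
    print(rho.round(2))
    print(rho.where(strong).stack())                  # strongly correlated pairs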
Results: A total of 233 students completed the rotation over the two-year period of the study. There were strong correlations (>0.80) between the assessment components of medical knowledge, history taking, physical exam, and differential diagnosis. There were also strong correlations between the assessment components of team rapport, patient rapport, and motivation. When these highly correlated components were combined to produce a four-component model, linear regression demonstrated similar predictive power in terms of final clerkship grade (R2=0.71, CI95=0.65–0.77 and R2=0.69, CI95=0.63–0.76 for the full and reduced models, respectively).
Conclusion: This study revealed that several components of the evaluation card had a high degree of correlation. When the correlated items were combined, a reduced model containing four items (clinical skills, interpersonal skills, procedural skills, and documentation) was as predictive of the student’s clinical grade as the full 10-item evaluation. Clerkship directors should be aware of the performance of their individual global rating scales when assessing medical student performance, especially if attempting to measure more than four components.
Effect of Doximity Residency Rankings on Residency Applicants’ Program Choices
Introduction: Choosing a residency program is a stressful and important decision. Doximity released residency program rankings by specialty in September 2014. This study sought to investigate the impact of those rankings on residency application choices made by fourth-year medical students.
Methods: A 12-item survey was administered in October 2014 to fourth-year medical students at three schools. Students indicated their specialty, their awareness of and the perceived accuracy of the rankings, and the rankings’ impact on the programs to which they chose to apply. Descriptive statistics were reported for all students and for those applying to Emergency Medicine (EM).
Results: A total of 461 (75.8%) students responded, with 425 applying in one of the 20 Doximity ranked specialties. Of the 425, 247 (58%) were aware of the rankings and 177 looked at them. On a 1-100 scale (100=very accurate), students reported a mean ranking accuracy rating of 56.7 (SD 20.3). Forty-five percent of students who looked at the rankings modified the number of programs to which they applied. The majority added programs. Of the 47 students applying to EM, 18 looked at the rankings and 33% changed their application list with most adding programs.
Conclusion: The Doximity rankings had real effects on students applying to residencies, as almost half of the students who looked at the rankings modified their program list. Additionally, students found the rankings to be only moderately accurate. Graduating students might benefit from an emphasis on more objective characterizations of programs that they can assess in light of their own interests and personal/career goals.
Introducing Medical Students into the Emergency Department: The Impact upon Patient Satisfaction
Introduction: Performance on patient satisfaction surveys is becoming increasingly important for practicing emergency physicians and the introduction of learners into a new clinical environment may impact such scores. This study aimed to quantify the impact of introducing fourth-year medical students on patient satisfaction in two university-affiliated community emergency departments (EDs).
Methods: Two community-based EDs in the Indiana University Health (IUH) system began hosting medical students in March 2011 and October 2013, respectively. We analyzed responses from patient satisfaction surveys at each site for seven months before and after the introduction of students. Two components of the survey, “Would you recommend this ED to your friends and family?” and “How would you rate this facility overall?” were selected for analysis, as they represent the primary questions reviewed by the Centers for Medicare and Medicaid Services (CMS) as part of value-based purchasing. We evaluated the percentage of positive responses for adult, pediatric, and all patients combined.
Results: Analysis did not reveal a statistically significant difference in the percentage of positive responses to the “would you recommend” question at either clinical site for the adult subgroup, the pediatric subgroup, or all patients combined. At one of the sites, there was significant improvement in the percentage of positive responses to the “overall rating” question following the introduction of medical students when all patients were analyzed (60.3% to 68.2%, p=0.038). However, there was no statistically significant difference in the “overall rating” when the pediatric or adult subgroups were analyzed at this site, and no significant difference was observed in any group at the second site.
Conclusion: The introduction of medical students at two community-based EDs was not associated with a statistically significant difference in overall patient satisfaction, but was associated with a significant positive effect on the overall rating of the ED at one of the two clinical sites studied. Further study is needed to evaluate the effect of medical student learners upon patient satisfaction in settings outside of a single health system.
Teaching Emotional Intelligence: A Control Group Study of a Brief Educational Intervention for Emergency Medicine Residents
Introduction: Emotional intelligence (EI) is defined as the ability to perceive another’s emotional state combined with the ability to modify one’s own. Physicians with this ability are at a distinct advantage, both in fostering teams and in making sound decisions. Studies have shown that higher physician EI is associated with a lower incidence of burnout, longer careers, more positive patient-physician interactions, increased empathy, and improved communication skills. We explored the potential for EI to be learned as a skill (as opposed to being an innate ability) through a brief educational intervention with emergency medicine (EM) residents.
Methods: This study was conducted at a large urban EM residency program. Residents were randomized to either the EI intervention or control group. The intervention was a two-hour session focused on improving the skill of social perspective taking (SPT), a skill related to social awareness. Due to time limitations, we used a 10-item sample of the Hay 360 Emotional Competence Inventory to measure EI at three time points for the training group: before (pre) and after (post) training, and at six months post-training (follow-up); and at two time points for the control group: pre and follow-up. The preliminary analysis was a four-way analysis of variance with one repeated measure: Group x Gender x Program Year over Time. We also completed post-hoc tests.
Results: Thirty-three EM residents participated in the study (33 of 36, 92%), 19 in the EI intervention group and 14 in the control group. We found a significant interaction effect between Group and Time (p<0.05). Post-hoc tests revealed a significant increase in EI scores from Time 1 to 3 for the EI intervention group (62.6% to 74.2%), but no statistical change was observed for the controls (66.8% to 66.1%, p=0.77). We observed no main effects involving gender or level of training.
Conclusion: Our brief EI training showed a delayed but statistically significant positive impact on EM residents six months after the intervention involving SPT. One possible explanation for this finding is that residents required time to process and apply the EI skills training in order for us to detect measurable change. More rigorous measurement will be needed in future studies to aid in the interpretation of our findings.
Correlation of Simulation Examination to Written Test Scores for Advanced Cardiac Life Support Testing: Prospective Cohort Study
Introduction: Traditional Advanced Cardiac Life Support (ACLS) courses are evaluated using written multiple-choice tests. High-fidelity simulation is a widely used adjunct to didactic content and has been used in many specialties as a training resource as well as an evaluative tool. To our knowledge, there are no data comparing simulation examination scores with written test scores for ACLS courses. Objective: To compare and correlate a novel high-fidelity simulation-based evaluation with traditional written testing for senior medical students in an ACLS course.
Methods: We performed a prospective cohort study to determine the correlation between simulation-based evaluation and traditional written testing in a medical school simulation center. Students were tested on a standard acute coronary syndrome/ventricular fibrillation cardiac arrest scenario. Our primary outcome measure was correlation of exam results for 19 volunteer fourth-year medical students after a 32-hour ACLS-based Resuscitation Boot Camp course. Our secondary outcome was comparison of simulation-based vs. written outcome scores.
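A minimal sketch of both outcomes (correlation of the two scores and their paired comparison) using SciPy; the file and column names are hypothetical.

    import pandas as pd
    from scipy import stats

    # Hypothetical data: one row per student with the written exam percent
    # score and the simulation checklist percent score.
    df = pd.read_csv("acls_scores.csv")

    r, p = stats.pearsonr(df["written_pct"], df["sim_pct"])
    print(f"correlation: r={r:.2f}, p={p:.3f}")

    t, p = stats.ttest_rel(df["written_pct"], df["sim_pct"])
    print(f"paired written-vs-simulation difference: t={t:.2f}, p={p:.5f}")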
Results: The composite average score on the written evaluation was substantially higher (93.6%) than the simulation performance score (81.3%, absolute difference 12.3%, 95% CI [10.6-14.0%], p<0.00005). We found a statistically significant moderate correlation between simulation scenario test performance and traditional written testing (Pearson r=0.48, p=0.04), validating the new evaluation method.
Conclusion: Simulation-based ACLS evaluation methods correlate with traditional written testing and demonstrate resuscitation knowledge and skills. Simulation may be a more discriminating and challenging testing method, as students scored higher on written evaluation methods compared to simulation.
- 1 supplemental PDF
How Does Emergency Department Crowding Affect Medical Student Test Scores and Clerkship Evaluations?
Introduction: The effect of emergency department (ED) crowding has been recognized as a concern for more than 20 years; its effect on productivity, medical errors, and patient satisfaction has been studied extensively. Little research has reviewed the effect of ED crowding on medical education. Prior studies that have considered this effect have shown no correlation between ED crowding and resident perception of quality of medical education.
Objective: To determine whether ED crowding, as measured by the National ED Overcrowding Scale (NEDOCS) score, has a quantifiable effect on medical student objective and subjective experiences during emergency medicine (EM) clerkship rotations.
Methods: We collected end-of-rotation examinations and medical student evaluations for 21 EM rotation blocks between July 2010 and May 2012, with a total of 211 students. NEDOCS scores were calculated for each corresponding period. Weighted regression analyses examined the correlation between components of the medical student evaluation, student test scores, and the NEDOCS score for each period.
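One plausible form of the weighted regression described here, using statsmodels WLS with block size as the weight; the aggregation scheme and column names are assumptions.

    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical data: one row per rotation block, with that block's mean
    # exam score, mean NEDOCS score, and number of students.
    blocks = pd.read_csv("nedocs_blocks.csv")

    X = sm.add_constant(blocks[["mean_nedocs"]])
    fit = sm.WLS(blocks["mean_exam_score"], X,
                 weights=blocks["n_students"]).fit()
    print(fit.params["mean_nedocs"], fit.pvalues["mean_nedocs"])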
Results: When all 21 rotations were included in the analysis, NEDOCS scores showed a negative correlation with medical student test scores (regression coefficient = -0.16, p=0.04) and with three elements of the rotation evaluation (attending teaching, communication, and systems-based practice; p<0.05). We excluded an outlying NEDOCS score from the analysis and obtained similar results. When the data were controlled for the effect of month of the year, only student test score remained significantly correlated with the NEDOCS score (p=0.011). No part of the medical student rotation evaluation attained significant correlation with the NEDOCS score (p≥0.34 in all cases).
Conclusion: ED overcrowding demonstrates a small negative association with medical student performance on end-of-rotation examinations. Additional studies are recommended to further evaluate this effect.
Medical Student Performance on the National Board of Medical Examiners Emergency Medicine Advanced Clinical Examination and the National Emergency Medicine M4 Exams
Introduction: In April 2013, the National Board of Medical Examiners (NBME) released an Advanced Clinical Examination (ACE) in emergency medicine (EM). In addition to this new resource, CDEM (Clerkship Directors in EM) provides two online, high-quality, internally validated examinations. National usage statistics are available for all three examinations; however, it is currently unknown how students entering an EM residency perform compared to the entire national cohort. This information may help educators interpret the examination scores of both EM-bound and non-EM-bound students.
Objectives: The objective of this study was to compare EM clerkship examination performance between students who matched into an EM residency in 2014 and students who did not. Comparisons were made using the EM-ACE and both versions of the National fourth-year medical student (M4) EM examination.
Methods: In this retrospective multi-institutional cohort study, the EM-ACE and either Version 1 (V1) or Version 2 (V2) of the National EM M4 examination were given to students taking a fourth-year EM rotation at five institutions between April 2013 and February 2014. We collected examination performance (the scaled EM-ACE score and percent correct on the EM M4 exams) and 2014 National Resident Matching Program (NRMP) Match status. Student’s t-tests were performed comparing the examination averages of students who matched in EM with those who did not.
Results: A total of 606 students from five different institutions took both the EM-ACE and one of the EM M4 exams; 94 (15.5%) students matched in EM in the 2014 Match. The mean scores for EM-bound students on the EM-ACE, V1, and V2 of the EM M4 exams were 70.9 (n=47, SD=9.0), 84.4 (n=36, SD=5.2), and 83.3 (n=11, SD=6.9), respectively. Mean scores for non-EM-bound students were 68.0 (n=256, SD=9.7), 82.9 (n=243, SD=6.5), and 74.5 (n=13, SD=5.9). There was a significant difference in mean scores between EM-bound and non-EM-bound students for the EM-ACE (p=0.05) and V2 (p<0.01), but not V1 (p=0.18), of the National EM M4 examination.
Conclusion: Students who successfully matched in EM performed better on all three exams at the end of their EM clerkship.
Competency Assessment in Senior Emergency Medicine Residents for Core Ultrasound Skills
Introduction: Quality resident education in point-of-care ultrasound (POC US) is becoming increasingly important in emergency medicine (EM); however, the best methods to evaluate competency in graduating residents have not been established. We sought to design and implement a rigorous assessment of image acquisition and interpretation in POC US in a cohort of graduating residents at our institution.
Methods: We evaluated nine senior residents in both image acquisition and image interpretation for five core US skills (focused assessment with sonography for trauma [FAST], aorta, echocardiogram [ECHO], pelvic, and central line placement). Image acquisition was assessed using an observed clinical skills exam (OSCE) with directed assessment on a standardized patient model. Image interpretation was measured with a multiple-choice exam including normal and pathologic images.
Results: Residents performed well on image acquisition, with an average score of 85.7% for the core skills and 74% when advanced skills (ovaries, advanced ECHO, advanced aorta) were included. Residents scored well but slightly lower on image interpretation, with an average score of 76%.
Conclusion: Senior residents performed well on core POC US skills as evaluated with a rigorous assessment tool. This tool may be developed further for other EM programs to use for graduating resident evaluation.
Mentoring during Medical School and Match Outcome among Emergency Medicine Residents
Introduction: Few studies have documented the value of mentoring for medical students, and research has been limited to subjective outcomes (e.g., job satisfaction, perceived career preparation) rather than objective ones. This study examined whether having a mentor is associated with match outcome (where a student matched based on their rank order list [ROL]).
Methods: We sent a survey link to all emergency medicine (EM) program coordinators to distribute to their residents. EM residents were surveyed about whether they had a mentor during medical school. Match outcome was assessed by asking residents where they matched on their ROL (e.g., first choice, fifth choice). They were also asked about rank in medical school, type of degree (MD vs. DO), and performance on standardized tests. Residents who indicated having a mentor completed the Mentorship Effectiveness Scale (MES), which evaluates behavioral characteristics of the mentor and yields a total score. We assessed correlations among these variables using Pearson’s correlation coefficient. A post-hoc analysis using an independent-samples t-test compared MES scores between residents who matched to their first or second choice and those who matched to their third or higher choice.
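The correlation and post-hoc comparison described here might be coded as follows; the column names are hypothetical, and Cohen's d is computed with a simple pooled standard deviation.

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Hypothetical data: one row per resident; match_rank is the ROL position
    # matched (1 = first choice), mes is the MES total (mentored residents only).
    df = pd.read_csv("mentoring.csv")

    r, p = stats.pearsonr(df["had_mentor"], df["match_rank"])
    print(f"mentor vs. match outcome: r={r:.2f}, p={p:.2f}")

    mentored = df.dropna(subset=["mes"])
    top = mentored.loc[mentored["match_rank"] <= 2, "mes"]
    rest = mentored.loc[mentored["match_rank"] >= 3, "mes"]
    t, p = stats.ttest_ind(top, rest)
    d = (top.mean() - rest.mean()) / np.sqrt((top.var() + rest.var()) / 2)
    print(f"MES by match group: t={t:.2f}, p={p:.4f}, d={d:.2f}")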
Results: Participants were a convenience sample of 297 EM residents. Of those, 199 (67%) reported having a mentor during medical school. Contrary to our hypothesis, there was no significant correlation between having a mentor and match outcome (r=0.06, p=0.29). Match outcome was associated with class rank (r=0.13, p=0.03), satisfaction with match outcome (r= -0.37, p<0.001), and type of degree (r=0.12, p=0.04). Among those with mentors, a t-test revealed that the MES score was significantly higher among those who matched to their first or second choice (M=51.31, SD=10.13) compared to those who matched to their third or higher choice (M=43.59, SD=17.12), t(194)=3.65, p<0.001, d=0.55.
Conclusion: Simply having a mentor during medical school does not impact match outcome, but having an effective mentor is associated with a more favorable match outcome among medical students applying to EM programs.
Emergency Medicine Residents Consistently Rate Themselves Higher than Attending Assessments on ACGME Milestones
Introduction: In 2012 the Accreditation Council for Graduate Medical Education (ACGME) introduced the Next Accreditation System (NAS), which implemented milestones to assess the competency of residents and fellows. While attending evaluation and feedback are crucial for resident development, a resident’s self-assessment is perhaps equally important. If a resident does not accurately self-assess, clinical and professional progress may be compromised. The objective of our study was to compare emergency medicine (EM) resident milestone evaluation by EM faculty with the same residents’ self-assessments.
Methods: This was an observational, cross-sectional study performed at an academic, four-year EM residency program. Twenty-five randomly chosen residents completed a milestone self-assessment using eight ACGME sub-competencies deemed by residency leadership to be representative of core EM principles. These residents were also evaluated by 20 faculty members. Milestone levels were rated on a nine-point scale. We calculated the average difference between resident self-ratings and faculty ratings and used t-tests to determine the statistical significance of the difference in scores.
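One way to frame the self-versus-faculty comparison is a one-sample test on per-resident differences, sketched below with hypothetical long-format data.

    import pandas as pd
    from scipy import stats

    # Hypothetical data: one row per (resident, sub-competency), holding the
    # self-rating and the mean of that resident's faculty ratings (1-9 scale).
    df = pd.read_csv("milestone_ratings.csv")

    for sub, grp in df.groupby("subcompetency"):
        diff = grp["self_rating"] - grp["faculty_mean"]
        t, p = stats.ttest_1samp(diff, 0.0)   # mean difference vs. zero
        print(f"{sub}: mean diff={diff.mean():+.2f}, t={t:.2f}, p={p:.5f}")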
Results: Eighteen residents evaluated themselves. Each resident was assessed by an average of 16 attendings (min=10, max=20). Residents gave themselves statistically significant higher milestone ratings than attendings did for each sub-competency examined (p<0.0001).
Conclusion: Residents overestimated their abilities in every sub-competency assessed. This underscores the importance of feedback and assessment transparency. More attention needs to be paid to methods by which residency leadership can make residents’ self-perception of their clinical ability more congruent with that of their teachers and evaluators. The major limitation of our study is the small sample size of both residents and attendings.
Ultrasound Training in the Emergency Medicine Clerkship
Introduction: The curriculum in most emergency medicine (EM) clerkships includes very little formalized training in point-of-care ultrasound. Medical schools have begun to implement ultrasound training in the pre-clinical curriculum, and the EM clerkship is an appropriate place to build upon this training. The objectives were (1) to evaluate the effectiveness of implementing a focused ultrasound curriculum within an established EM clerkship and (2) to obtain feedback from medical students regarding the program.
Methods: We conducted a prospective cohort study of medical students during an EM clerkship year from July 1, 2011, to June 30, 2012. Participants included fourth-year medical students (n=45) enrolled in the EM clerkship at our institution. The students underwent a structured program focused on the focused assessment with sonography for trauma (FAST) exam and ultrasound-guided vascular access. At the conclusion of the rotation, they took a 10-item multiple-choice test assessing knowledge and image interpretation skills. A cohort of EM residents (n=20) also took the multiple-choice test but did not participate in the training with the students. We used an independent-samples t-test to examine differences in test scores between the groups.
Results: The medical students in the ultrasound training program scored significantly higher on the multiple-choice test than the EM residents, t(63)=2.3, p<0.05. The feedback from the students indicated that 82.8% were using ultrasound on their current rotations and the majority (55.2%) felt that the one-on-one scanning shift was the most valuable aspect of the curriculum.
Discussion: Our study demonstrates support for an ultrasound training program for medical students in the EM clerkship. After completing the training, students were able to perform similarly to EM residents on a knowledge-based exam.
Assessing EM Patient Safety and Quality Improvement Milestones Using a Novel Debate Format
Graduate medical education is increasingly focused on patient safety and quality improvement, and training programs must adapt their curricula to address these changes. We propose a novel curriculum for emergency medicine (EM) residency training programs specifically addressing patient safety, systems-based management, and practice-based performance improvement, called “EM Debates.” Following implementation of this educational curriculum, we performed a cross-sectional study to evaluate the curriculum through resident self-assessment, along with a cross-sectional study of the ED clinical competency committee’s (CCC) ability to assess residents on specific competencies. Overall, residents were very positive toward the implementation of the debates: of those participating in a debate, 71% felt that it improved their individual performance on a specific topic, and 100% of those who led a debate felt that they could propose an evidence-based approach to a specific topic. The CCC found it easier to assess milestones in patient safety, systems-based management, and practice-based performance improvement (sub-competencies 16, 17, and 19) than before the implementation of the debates. The debates have been a helpful venue for teaching EM residents about patient safety concepts, identifying medical errors, and process improvement.
Model for Developing Educational Research Productivity: The Medical Education Research Group
Introduction: Education research and scholarship are essential for the promotion of faculty as well as the dissemination of new educational practices. Educational faculty frequently spend the majority of their time on administrative and educational commitments, and as a result often fall behind on scholarship and research. The objective of this educational advance was to promote scholarly productivity and to provide a template for others to follow.
Methods: We formed the Medical Education Research Group (MERG) of education leaders from our emergency medicine residency, fellowship, and clerkship programs, as well as residents with a focus on education. First, we incorporated scholarship into the required activities of our education missions by evaluating the impact of programmatic changes and then submitting the curricula or process as peer-reviewed work. Second, we worked as a team, sharing projects that led to improved motivation, accountability, and work completion. Third, our monthly meetings served as brainstorming sessions for new projects, research skill building, and tracking work completion. Lastly, we incorporated a work-study graduate student to assist with basic but time-consuming tasks of completing manuscripts.
Results: The MERG group has been highly productive, achieving the following scholarship over a three-year period: 102 abstract presentations, 46 journal article publications, 13 MedEd Portal publications, 35 national didactic presentations and five faculty promotions to the next academic level.
Conclusion: An intentional focus on scholarship has led to a collaborative group of educators successfully improving their scholarship through team productivity, which ultimately leads to faculty promotions and dissemination of innovations in education.
Implementation of an Education Value Unit (EVU) System to Recognize Faculty Contributions
Introduction: Faculty educational contributions are hard to quantify, but in an era of limited resources it is essential to link funding with effort. The purpose of this study was to determine the feasibility of an educational value unit (EVU) system in an academic emergency department and to examine its effect on faculty behavior, particularly on conference attendance and completion of trainee evaluations.
Methods: A taskforce representing education, research, and clinical missions was convened to develop a method of incentivizing productivity for an academic emergency medicine faculty. Domains of educational contributions were defined and assigned a value based on time expended. A 30-hour EVU threshold for achievement was aligned with departmental goals. Targets included educational presentations, completion of trainee evaluations and attendance at didactic conferences. We analyzed comparisons of performance during the year preceding and after implementation.
Results: Faculty (N=50) attended significantly more didactic conference hours (22.7 vs. 34.5 hours, p<0.005) and completed more months of trainee evaluations (5.9 vs. 8.8 months, p<0.005). During the pre-implementation year, 84% (42/50) of faculty met the 30-hour threshold, compared with 94% (47/50) post-implementation (p=0.11). Mean total EVUs increased significantly (94.4 vs. 109.8 hours, p=0.04), a result of increased conference attendance and evaluation completion without a change in other categories.
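As a check on the threshold comparison reported above, a two-sample test of proportions on the stated counts (42/50 vs. 47/50) reproduces the non-significant result; note it treats the two years as independent samples, which is an approximation for the same faculty measured twice.

    from statsmodels.stats.proportion import proportions_ztest

    # Counts reported above: 42/50 faculty met the 30-hour threshold
    # pre-implementation, 47/50 post-implementation.
    z, p = proportions_ztest(count=[42, 47], nobs=[50, 50])
    print(f"z={z:.2f}, p={p:.2f}")   # p is approximately 0.11, as reported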
Conclusion: In a busy academic department there are many work allocation pressures. An EVU system integrated with an incentive structure to recognize faculty contributions increases the importance of educational responsibilities. We propose an EVU model that could be implemented and adjusted for differing departmental priorities at other academic departments.
Correlation of the National Board of Medical Examiners Emergency Medicine Advanced Clinical Examination Given in July to Intern American Board of Emergency Medicine in-training Examination Scores: A Predictor of Performance?
Introduction: There is great variation in the knowledge base of Emergency Medicine (EM) interns in July. The first objective knowledge assessment during residency does not occur until eight months later, in February, when the American Board of EM (ABEM) administers the in-training examination (ITE). In 2013, the National Board of Medical Examiners (NBME) released the EM Advanced Clinical Examination (EM-ACE), an assessment intended for fourth-year medical students. Administration of the EM-ACE to interns at the start of residency may provide an earlier opportunity to assess the new EM residents’ knowledge base. The primary objective of this study was to determine the correlation of the NBME EM-ACE, given early in residency, with the EM ITE. Secondary objectives included determination of the correlation of the United States Medical Licensing Examination (USMLE) Step 1 or 2 scores with early intern EM-ACE and ITE scores and the effect, if any, of clinical EM experience on examination correlation.
Methods: This was a multi-institutional, observational study. Entering EM interns at six residencies took the EM-ACE in July 2013 and the ABEM ITE in February 2014. We collected scores for the EM-ACE and ITE, age, gender, weeks of clinical EM experience in residency prior to the ITE, and USMLE Step 1 and 2 scores. Pearson’s correlation and linear regression were performed.
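The correlation and multivariable regression described here could be set up as follows; the data file and column names are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    # Hypothetical data: one row per intern.
    df = pd.read_csv("interns.csv")

    r, p = stats.pearsonr(df["em_ace"], df["ite"])
    print(f"EM-ACE vs. ITE: r={r:.2f}, p={p:.4f}")

    fit = smf.ols("ite ~ em_ace + C(gender) + age + em_weeks"
                  " + usmle1 + usmle2", data=df).fit()
    print(fit.summary())   # collinear exam scores will inflate standard errors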
Results: Sixty-two interns took the EM-ACE and the ITE. The Pearson’s correlation coefficient between the ITE and the EM-ACE was 0.62, with an R-squared of 0.5 (adjusted 0.4). The regression coefficient was 0.41 (95% CI [0.3-0.8]): for every increase of one in the scaled EM-ACE score, we observed a 0.4% increase in the EM in-training score. In a linear regression model using all available variables (EM-ACE, gender, age, clinical exposure to EM, and USMLE Step 1 and Step 2 scores), only the EM-ACE score was significantly associated with the ITE (p<0.05). We observed significant collinearity among the EM-ACE, ITE, and USMLE scores. Gender, age, and number of weeks of EM prior to the ITE had no effect on the relationship between the EM-ACE and the ITE.
Conclusion: Given early during intern year, the EM-ACE score showed positive correlation with ITE. Clinical EM experience prior to the in-training exam did not affect the correlation.
Effect of a Novel Engagement Strategy Using Twitter on Test Performance
Introduction: In recent years, medical educators have been using social media to better reach technologically savvy learners. The utility of using Twitter for curriculum content delivery has not been studied. We sought to determine whether participation in a social media-based educational supplement would improve student performance on a test of clinical images at the end of the semester.
Methods: One hundred sixteen second-year medical students were enrolled in a lecture-based clinical medicine course in which images of common clinical exam findings were presented. An additional, optional assessment was offered on Twitter. Each week, a clinical presentation and physical exam image (not covered in course lectures) were distributed via Twitter, and students were invited to guess the exam finding or diagnosis. After the completion of the course, students were asked to participate in a slideshow “quiz” with 24 clinical images, half from lecture and half from Twitter.
Results: We conducted a one-way analysis of variance to determine the effect Twitter participation had on total, Twitter-only, and lecture-only scores. Twitter participation data were collected from the end-of-course survey, and participation was defined as submitting answers to the Twitter-only questions “all or most of the time,” “about half of the time,” or “little or none of the time.” We found a significant difference in overall scores (p<0.001) and in Twitter-only scores (p<0.001). There was not enough evidence to conclude a significant difference in lecture-only scores (p=0.124). Students who submitted answers to Twitter “all or most of the time” or “about half the time” had significantly higher overall scores and Twitter-only scores (p<0.001 and p<0.001, respectively) than those students who only submitted answers “little or none of the time.”
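The one-way ANOVA described above is straightforward with SciPy; the grouping column and score names are hypothetical.

    import pandas as pd
    from scipy import stats

    # Hypothetical data: one row per student, with quiz scores and the
    # self-reported Twitter participation level (three categories).
    df = pd.read_csv("twitter_quiz.csv")

    groups = [g["overall_score"].to_numpy()
              for _, g in df.groupby("participation_level")]
    f, p = stats.f_oneway(*groups)
    print(f"overall score by participation: F={f:.2f}, p={p:.4f}")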
Conclusion: While students retained less information from Twitter than from traditional classroom lecture, some retention was noted. Future research on social media in medical education would benefit from clear control and experimental groups in settings where quantitative use of social media could be measured. Ultimately, it is unlikely for social media to replace lecture in medical curriculum; however, there is a reasonable role for social media as an adjunct to traditional medical education.