Community feedback helps docs discuss errors with patients

In a study, resident physicians’ communication skills improved with a smartphone-based test in which laypeople provided scoring and feedback.

Media Contact: Brian Donohue - 206-543-7856, bdonohue@uw.edu


In the United States, medical errors cause an estimated 250,000 deaths a year and affect about 10% of all hospital patients. Unfortunately, most patients don’t get the information or emotional support they need after a harmful error — in part because few physicians receive training in this communication skill. 

Last year, the organization that accredits U.S. medical residency programs mandated that resident physicians demonstrate competency in discussing errors with patients and family members.

Dr. Andrew White, the study's lead author. (Photo: UW Medicine)

“The standard requires that residents not only receive instruction, but that they actually acquire skills to do it. So we need to create a credible measurement,” said Dr. Andrew White, a general internist and professor at the University of Washington School of Medicine. 

White and colleagues this month published findings showing that a smartphone-based test model improved residents’ ability to discuss medical mistakes and harms. The model involved actors simulating difficult clinical scenarios, an app that displays video and records audio, and, perhaps most intriguingly, crowdsourced laypeople in the role of evaluators.

“This work is exciting because the measurement is coming from laypeople around the country who all likely have been patients getting medical care at some point. There’s something empowering about taking the evaluation away from faculty supervisors, and asking members of the public to determine how well a doctor is communicating crucial information,” White said.

The findings were published Aug. 7 in JAMA Network Open.

The study involved seven U.S. residencies spanning several states. The 146 participants were second-year residents in internal medicine and family medicine. All received an initial lecture about how to talk with patients about clinical errors. 

“We wanted residents to have a playbook to anticipate patients’ reactions to hearing about an error,” White said. “A large part of success in these conversations is related to identifying the emotions that each individual patient has. Feelings like sadness, anger, disbelief, despair. The language we use in these situations needs to align with those emotional responses and communicate understanding, acknowledgment and comfort. There’s some preferred phrasing, but doctors need to be able to think on their feet.”

Using a smartphone app, participating residents viewed brief video vignettes of actors portraying patients in hospital beds and clinic rooms. Looking directly at the camera, the patients asked difficult questions about serious medical errors: a misdiagnosis and a medication overdose.

The questions conveyed anxiety, irritation, distrust and grief. The application also offered participants brief written context about the errors and resulting medical problems.

After each vignette played, residents were prompted to record audio of the response they would give that patient. Participants could take a minute to compose their thoughts — a benefit afforded to give them the best chance to respond effectively, White said. Such a pause would be inappropriate in a real-time patient exchange, he acknowledged.

The video-based assessment interface included: A) the case text and video prompt, available for review; B) the overall rating from the panel of crowdsourced raters (orange) and the peer average (blue); C) buttons that play the resident’s recorded response to the vignette and an exemplary response from a peer; and D) learning points derived from crowdsourced advice about what raters would like the physician to say in the situation. (Reprinted with permission from the National Board of Medical Examiners)

The vignettes and the residents’ recorded responses were sent to panels of crowdsourced laypeople, who numerically rated the responses on how well they conveyed accountability, honesty, apology, empathy and caring.

All 146 participants were invited to repeat the exercise a month later with a second series of faux patient vignettes, and 103 did so. Of those 103, 53 had been randomized to receive their crowdsourced scores and feedback from the first video series.

“Access to feedback was associated with a statistically significant improvement in skills,” White said. “All the residents were exposed to the same patient cases, and the cohort of residents who got the feedback on the first round got higher ratings on their second round of responses.”

An important secondary finding, White said, related to the residents who dropped out before the second round of assessment. “They tended to have lower first-round scores. Some folks who need this training the most are the least likely to stick with it.”

It’s a warning sign for medical educators, he added.

“It is ripe for the ACGME (Accreditation Council for Graduate Medical Education) to require this. And it is incumbent on programs and program faculty to work with the residents who are struggling with this.”

White nevertheless sounded upbeat about the potential to use this assessment model on a bigger scale.

“This demonstrates that we actually improved skills in a way that matters to patients, and also in a way that is scalable: Because it's an electronic tool, we can administer it almost as easily to a hundred people as to 10.”

The study received funding from the National Board of Medical Examiners.

 

For details about UW Medicine, please visit http://uwmedicine.org/about.


Tags: medical errors, community, patient-physician communication
