Standardized patients are a beneficial component of modern healthcare education and training, but few studies have explored cognitive factors potentially impacting clinical skills assessment during standardized patient encounters. This study examined the impact of a periodic (vs. traditional postencounter) evaluation approach, and of the timing of critical verbal and nonverbal behaviors throughout a standardized patient encounter, on scoring accuracy in a video-based scenario.

Methods
Forty-nine standardized patients scored a healthcare provider's verbal and nonverbal clinical performance during a videotaped standardized patient encounter, either periodically or at only 1 point in time (postencounter). The healthcare provider in the video was actually a standardized patient delivering carefully scripted verbal and nonverbal behaviors in a portrayal of a physician. The encounter was subdivided into 3 distinct segments to support periodic evaluation, with the expectation that, because of working memory decay, verbal and nonverbal cues occurring in the middle segment would be more difficult for participants in the postencounter evaluation group to report accurately.

Results
Periodic evaluators correctly identified significantly more critical verbal cues midscenario than postencounter evaluators (P < 0.01) and significantly more critical nonverbal cues than their postencounter counterparts across all 3 scenario segments (P < 0.001). Further, postencounter evaluators exhibited a decrement in correct midscenario identifications that periodic evaluators did not (P < 0.01). Periodic evaluators also produced fewer verbal cue false positives during the first segment of the scenario than postencounter evaluators (P < 0.001), but this effect did not extend to other segments for either cue type (ie, verbal or nonverbal).

Discussion
Pausing lengthier standardized patient encounters periodically to allow for more frequent scoring may improve reporting accuracy for certain clinical behavioral cues. This could enable educators to provide more specific formative feedback to individual learners at the session's conclusion. The most effective encounter design will ultimately depend on the specific goals and training objectives of the exercise itself.