Adapting Simulation Responses From Judgment-Based to Analytic-Based Scores: A Process Model, Case Study, and Empirical Evaluation Among a Sample of Managers
Abstract

Workplace simulations, often used to assess or train employees, have historically relied on human raters who use judgment to evaluate and score the behavior they observe (judgment-based scoring). Such judgments are often complex and holistic, raising concerns about their reliability and susceptibility to bias. Human raters are also resource-intensive; thus, organizations are interested in strategies for reducing the role of human judgment in simulations. For example, using a checklist of discrete, clearly observable behaviors with predefined point values (analytic scoring) might be expected to simplify the rating process and produce more consistent scores. With good text- or voice-recognition software, such a checklist might even be amenable to automation, eliminating the need for human raters altogether. Although such potential benefits may appeal to organizations, it is unclear how changing the scoring method in this way may affect the meaning of scores. Drawing on the automated scoring and qualitative content analysis literatures, we developed a framework for converting judgment-based scores to analytic scores and applied it to the original constructed responses of 84 managers in a workplace simulation. The responses were adapted into discrete behaviors and scored analytically. Results indicated that responses could be adequately summarized using a reasonable number of discrete behaviors, and that analytic scores converged significantly but not strongly with the original judgment-based scores from human raters. We discuss implications for future research and provide recommendations for practitioners considering automated scores in workplace simulations.
