In some simulation applications, it is important that the physiologic responses to a given intervention are substantially repeatable. However, there is no agreed approach to evaluating the repeatability of simulators. We therefore aimed to develop such an approach.

Methods
In repeated simulations, we evaluated the physiologic responses to 7 simple clinical interventions generated by a METI (Medical Education Technologies Incorporated, Sarasota, FL) HPS (Human Patient Simulator) simulator in connected and disconnected states and the screen-based Anesoft Anesthesia Simulator. For a selection of variables, we calculated 3 objective measures of similarity (root mean squared error, median performance error, and median absolute performance error). We also calculated divergence over time and compared 3 preprocessing techniques to reduce the effect of clinically irrelevant phase and frequency differences (simple phase shift, complex phase shift, and dynamic time warping).

Results
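As an illustration, the three point-wise similarity measures named above can be sketched in Python as follows. The function name is ours, and the percentage-error formulation follows the usual Varvel-style performance-error definitions; this is a sketch, not the study's actual analysis code:

```python
import math
from statistics import median

def similarity_metrics(test, reference):
    """Compare a repeated-simulation time series against a reference run.

    Percentage performance error PE_i = 100 * (test_i - ref_i) / ref_i;
    the median PE (MDPE) indicates bias, and the median |PE| (MDAPE)
    indicates inaccuracy. RMSE is reported in the variable's own units.
    Assumes the two series are equal length and sampled at the same times.
    """
    pe = [100.0 * (t - r) / r for t, r in zip(test, reference)]
    rmse = math.sqrt(
        sum((t - r) ** 2 for t, r in zip(test, reference)) / len(reference)
    )
    return {"RMSE": rmse, "MDPE": median(pe), "MDAPE": median(abs(e) for e in pe)}
```

Because MDPE and MDAPE are medians of relative errors, they are robust to brief transient deviations, which may explain why MDAPE was the most useful single indicator of between-simulation variation.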
We collected data from more than 85 hours of simulation time across 255 simulations. The Anesoft physiologic responses were reproduced exactly in each simulation for all variables and interventions. Minor divergence was present between the time series generated by the METI HPS in the connected state but not in the disconnected state. The METI HPS showed some variation between simulations in the raw data. This variation was most usefully quantified using the median absolute performance error and was substantially reduced by preprocessing, particularly with dynamic time warping.

Conclusions
The repeatability of the physiologic response of model-controlled simulators to simple standardized interventions can be evaluated by considering divergence over time and the median absolute performance error of individual or pooled variables, but data should be preprocessed to eliminate irrelevant phase and frequency offsets in some variables. Dynamic time warping is an effective method for this purpose.
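As a sketch of the preprocessing step favored here, classic dynamic-programming DTW can be written as follows; the function name and the absolute-difference local cost are illustrative assumptions rather than the study's implementation:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two numeric time series.

    Warps the time axis so that phase and frequency offsets that are
    clinically irrelevant do not inflate point-wise error metrics.
    Returns the cumulative cost of the optimal alignment, using the
    absolute difference as the local cost.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(
                cost[i - 1][j],      # insertion
                cost[i][j - 1],      # deletion
                cost[i - 1][j - 1],  # match
            )
    return cost[n][m]
```

Note that a series aligned against a stretched copy of itself (e.g. each sample repeated) yields zero DTW cost, whereas a point-wise metric such as RMSE would report a spurious error; this is exactly the phase- and frequency-offset artifact the preprocessing is meant to eliminate.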