This study measured the quality of data extracted from a clinical information system widely used for critical care quality improvement and research. We abstracted data from 30 fields in a random sample of 207 patients admitted to nine adult medical-surgical intensive care units. We assessed concordance between data collected (1) manually from the bedside system (eCritical MetaVision) by trained auditors, and (2) electronically from the system data warehouse (eCritical TRACER). Agreement was assessed using Cohen's kappa for categorical variables and the intraclass correlation coefficient (ICC) for continuous variables.

Concordance between the data sets was excellent. There was perfect agreement for 11 of 30 variables (37%). The median kappa score for the 16 categorical variables was 0.99 (IQR 0.92–1.00). APACHE II had an ICC of 0.936 (0.898–0.960). The lowest concordance was observed for the renal and respiratory components of the SOFA score (ICC 0.804 and 0.846, respectively). Score translation errors by the manual auditors were the most common source of data discrepancies.

Manual validation of electronic data is complex compared with validation of traditional clinical documentation. This study presents a straightforward approach to validating data repositories, supporting reliable and efficient secondary use of high-quality data.

Highlights:
- EMR data validation processes for secondary data use are complex.
- Translation errors by manual auditors are a common source of dataset discrepancies.
- EMR system repositories provide high-quality secondary use data for QI and research.
- Independent quality checks should be a fundamental part of any multipurpose EMR system.
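The kappa statistic used for the categorical comparisons above can be illustrated with a minimal sketch. The example below computes Cohen's kappa between a manually abstracted field and its electronically extracted counterpart; the field name and ratings are invented for illustration and are not drawn from the study data.

```python
from collections import Counter

def cohens_kappa(manual, electronic):
    """Chance-corrected agreement between two categorical raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance, estimated from each rater's marginal category counts.
    """
    if len(manual) != len(electronic):
        raise ValueError("rating lists must be the same length")
    n = len(manual)
    p_o = sum(m == e for m, e in zip(manual, electronic)) / n
    m_counts, e_counts = Counter(manual), Counter(electronic)
    p_e = sum(m_counts[c] * e_counts.get(c, 0) for c in m_counts) / (n * n)
    if p_e == 1.0:  # both raters used one identical category throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical "ventilated on admission" field audited in 10 charts
manual     = ["yes", "yes", "no", "no", "yes", "no", "no",  "yes", "no", "yes"]
electronic = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(manual, electronic), 3))  # → 0.8
```

Observed agreement here is 9/10, but because roughly half that agreement is expected by chance given the yes/no marginals, kappa (0.8) is lower than the raw agreement rate, which is exactly the correction the statistic is designed to make.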