Objective Informatics programs need assurance that their curricula prepare students for their intended roles and that students have mastered the appropriate competencies. The objective of this study is to describe a method for using assessment data to identify areas for improvement in curriculum, student selection, and assessment.
Materials and Methods A multiple-choice examination covering the content in the Commission for Health Accreditation of Informatics and Information Management Education curricular facets/elements was developed and administered to 2 cohorts of entering students prior to the beginning of the program and to the first cohort after completion of the first year’s courses. The reliability of the examination was assessed using Cronbach’s alpha. Content validity was assessed by having 2 raters judge the match of the items to the Commission for Health Accreditation of Informatics and Information Management Education requirements. Construct validity was assessed by comparing the exam performance of instructed vs uninstructed students. Criterion-related validity was assessed by examining the relationship of background characteristics to exam performance and by comparing examination performance to graduate Grade Point Average (GPA).
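For readers unfamiliar with the reliability statistic named above, Cronbach's alpha for an examinee-by-item score matrix can be computed as a sketch like the following (the function name and NumPy-based implementation are illustrative, not taken from the study):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_examinees, n_items) item-score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinee totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

As a sanity check, a matrix whose items are perfectly consistent (identical columns) yields alpha = 1, while weakly related items drive alpha toward the lower values reported for the Cohort 2 pretest.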
Results Reliability of the examination was 0.91 and 0.82 (Cohort 1 pre/post-tests) and 0.43 (Cohort 2 pretest). Both raters judged 76% of the test items as appropriate. There were statistically significant differences between the instructed (Cohort 1 post-test) and uninstructed (Cohort 2 pretest) students (t = 2.95, P < .01), as well as between the Cohort 1 pre/post-tests (t = 6.52, P < .001). Neither the background variables nor graduate GPA was significantly correlated with the examination scores.
Conclusion We found that the examination had generally good psychometric properties, and that the exceptions could be used to identify areas for curriculum and assessment improvement.