Development and Initial Validation of the Rutgers Instrument for Evaluation of Students in Psychology

Abstract

A rating scale was developed to measure the progress of graduate students in psychology programs toward competence. Content for the scale was based on the American Psychological Association’s (APA) Standards of Accreditation for Health Service Psychology. Content validity evidence was collected on a pilot version of the Rutgers Instrument for Evaluation of Students in Psychology (RIESP) based on feedback from 11 experts in educational and psychological measurement, indicating that the items were usable and fit the appropriate domains and areas. A field study was conducted on the next iteration of the RIESP to collect evidence of internal consistency, validity based on internal structure, and validity based on relations with other variables. Participants included 36 supervisors and 43 students associated with a doctoral-level school psychology program. Scores from the RIESP primarily had internal consistency in the excellent range, and its subscale scores correlated in the very large to nearly perfect ranges. The RIESP total score shared correlations in the medium range with the Praxis School Psychologist test, in the very large range with the Graduate Record Examination (GRE) Psychology test, and in the large range with the verbal reasoning task of the GRE General test. These findings are discussed within a classical test theory framework, providing evidence for the RIESP and similar measures of graduate student competence in psychology.