Objective Assessment of Fellows: Doable and Worth Doing
The ultimate goal of graduate medical education in pediatric subspecialties is to develop fully capable physicians who can provide high-quality health care to children. Increasingly, programs are being expected to evaluate their learning activities and assess their trainees using evidence-based educational practices. The Accreditation Council for Graduate Medical Education competencies provide domains for trainee development that reflect the breadth of knowledge, skills, and attitudes that physicians should demonstrate. Recognition that development occurs across a continuum of proficiency and at a pace specific to each trainee has led to the milestones assessment framework, which aims to gauge progress and define readiness for advancement and independence (1). It is simplistic, but not altogether inappropriate, to indict the past practice of dubbing trainees proficient if they spent enough time under appropriate supervision and received a stamp of approval from those supervisors. Applying standards and reliable assessments of progress and proficiency improves the quality of education and fosters professional and public trust in healthcare providers. Historically, demonstration of proficiency included written examinations, which assess knowledge, and “oral” examinations, which aimed to assess clinical reasoning as well, but these do not address other competencies we value, such as psychomotor clinical skills, professionalism, and communication with patients and families. Miller's pyramid of assessment stratifies levels of proficiency as “knows,” “knows how,” “shows how,” and “does” (2). Traditional examinations can inform the 2 lower levels, but assessing progress to the higher levels in a meaningful, reliable, and fair way has been a challenge, and considerable scholarship has been devoted to generating tools to do so.
Simple direct observation of trainees in the workplace is typically haphazard and variable, owing to random exposure to types of patients and a lack of training of faculty as observers (3). Objective Structured Clinical Examinations (OSCEs) represent one tool that allows trainees to “show how” they perform in professional situations (4,5).
OSCEs were introduced by Harden et al in 1975 (6), and have been in use in undergraduate medical education in various iterations since then. In an OSCE, trainees rotate through stations with simulated clinical encounters, and their performance is scored by trained assessors using standardized rubrics. The key benefits of an OSCE are the balance of validity and standardization afforded by simulated environments and the objectivity applied to assessing professional performance (7). They were an answer to the unstructured and highly variable long- and short-case oral examinations in which trainees were asked to assess real patients and respond to examiners’ questions (4). OSCEs have been included in medical school curricula and are now a graduation requirement in many medical schools. The USMLE Step 2 CS is an OSCE that all recent US residents and fellows have taken. However well studied and widely adopted they are at the medical school level, OSCEs have not yet been incorporated into pediatric subspecialty fellowship training.
In this issue, Solomon et al describe their experience piloting an OSCE program among 6 pediatric gastroenterology fellowship programs in the New York City area. The authors describe the development of the program, including the selection of cases, the logistical organization of faculty observers and standardized patients at a single site, and the assessments of both the fellows and the program. They developed 4 case scenarios encountered in pediatric gastroenterology practice, chosen from content areas proposed by the NASPGHAN training committee (8). The authors provide promising data regarding acceptability and validity, and underscore the challenges associated with the endeavor.
A number of aspects of the present study merit highlighting in the context of the literature on OSCEs.