The Clinical Dementia Rating (CDR) is a common rating system used in clinical trials and longitudinal research projects to rate the presence and severity of cognitive problems in Alzheimer disease and related disorders. The interview process requires training and can be time-consuming. Here, we describe the validity, reliability, and discriminative ability of a computer-generated CDR administered in a personal digital assistant format. This project used clinical data from 138 archival and live evaluations (patient and informant interviews) collected for research purposes at Washington University to develop and test a software-based system for the administration and automatic scoring of the CDR. The system was programmed for use on a hand-held computer via the Palm Operating System. We developed domain-specific algorithms to quantify and translate clinical scoring decisions for the 3 cognitive (Memory, Orientation, Judgment and Problem Solving) and the 3 functional (Community Affairs, Home and Hobbies, Personal Care) domains of the CDR. An acceptable set of algorithms was developed using data from 104 research cases, reflecting a range of impairment levels (CDR 0 to 3) and expert scoring decisions. These algorithms were then tested for accuracy in a validation sample of 34 cases. The computer-generated CDR showed excellent internal consistency (Cronbach's α ranging from 0.94 to 0.98) and interrater reliability (intraclass correlation coefficient ranging from 0.88 to 0.96). It also showed excellent discrimination between demented and nondemented cases (area under the curve = 0.95; 95% confidence interval, 0.84-1.1). The computer-generated CDR using the Palm Operating System is easy to use, valid, and reliable. The level of agreement compares favorably to published interrater reliability data for the CDR.
Software-based administration and automatic scoring of the CDR is a viable alternative to paper-based methods and may be useful in research and clinical settings, especially where electronic data management and reliability in scoring are critical.
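The internal-consistency statistic reported above, Cronbach's α, can be computed directly from a subjects-by-domains matrix of box scores. The sketch below is illustrative only: the `cronbach_alpha` function and the sample ratings are hypothetical and are not taken from the study data; they simply show the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals) applied to six CDR-style domain scores.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of row totals)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (here, 6 CDR domains)
    item_vars = scores.var(axis=0, ddof=1)       # per-domain sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of each subject's summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical box scores (0-3 scale) for 5 subjects across the 6 CDR domains
ratings = [
    [0,   0,   0,   0, 0,   0],
    [0.5, 0.5, 0.5, 0, 0.5, 0],
    [1,   1,   0.5, 1, 1,   0.5],
    [2,   2,   2,   1, 2,   1],
    [3,   3,   3,   2, 3,   3],
]
print(round(cronbach_alpha(ratings), 2))  # -> 0.99
```

Values approaching 1, as in this toy data set, indicate that the domain scores vary together across subjects, which is the pattern the abstract reports for the computer-generated CDR (α of 0.94 to 0.98).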