This paper reviews the rationale for correcting for chance agreement between two raters. It is suggested that under certain conditions correction for chance agreement is both unnecessary and inappropriate. When a chance correction is indicated, the chosen measure should be one that can be shown to be logically consistent with the two judges' method of determining each case's disposition. Two agreement measures, kappa (k) and Maxwell's Random Error Coefficient of Agreement (RE), are described in common terms and are then compared in terms of their assumptions about the judgment process. The plausibility of k's assumptions is challenged, and k's use is discouraged on the grounds that kappa treats judgment-table data in a way incompatible with the process by which the table was constructed during the judgment itself. Use of Maxwell's RE is recommended when the proportions in the disagreement cells are equal.
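The contrast between the two measures can be sketched numerically. The following is a minimal illustration (not from the paper itself) using the standard formulas: Cohen's kappa estimates chance agreement from the two raters' marginal proportions, whereas Maxwell's RE fixes chance agreement at one half for a dichotomous judgment, giving RE = 2p_o - 1. The example table is hypothetical.

```python
def kappa_and_re(table):
    """Compute Cohen's kappa and Maxwell's RE from a 2x2 agreement table.

    table: [[a, b], [c, d]] of counts; a and d are the agreement cells,
    b and c the disagreement cells.
    """
    n = sum(sum(row) for row in table)
    p_o = (table[0][0] + table[1][1]) / n          # observed agreement
    # Kappa: chance agreement from the product of marginal proportions.
    row = [sum(r) / n for r in table]
    col = [sum(table[i][j] for i in range(2)) / n for j in range(2)]
    p_e = sum(row[i] * col[i] for i in range(2))
    kappa = (p_o - p_e) / (1 - p_e)
    # Maxwell's RE: chance agreement fixed at 1/2 for a dichotomy.
    re = 2 * p_o - 1
    return kappa, re

# Hypothetical table with equal disagreement cells would make the
# marginals symmetric, in which case p_e = 0.5 and kappa equals RE.
k, re = kappa_and_re([[40, 10], [10, 40]])
```

When the disagreement cells are unequal, kappa's marginal-based chance term diverges from one half and the two coefficients differ, which is why the recommendation for RE above is restricted to the equal-disagreement case.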