Rating scales are a standard measurement tool in psychological research. However, research has suggested that the cognitive burden involved in maintaining the criteria used to parcel subjective evidence into ratings introduces decision noise and affects estimates of performance on the underlying task. There has been debate over whether such decision noise is evident in recognition, with some authors arguing that it is substantial and others arguing that it is trivial or nonexistent. Here we directly assess the presence of decision noise by evaluating whether the length of the rating scale on which recognition judgments are provided is inversely related to performance on the recognition task. That prediction was confirmed: Rating scales with more options led to lower estimates of recognition performance than did scales with fewer options. This result supports the claim that decision noise contributes to recognition judgments and additionally suggests that caution is warranted when using rating scales more generally.
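The mechanism at issue can be illustrated with a minimal signal-detection sketch. The code below is not the authors' model; it is a hypothetical simulation assuming an equal-variance Gaussian evidence model in which each response criterion is independently jittered on every trial (the `criterion_sd` parameter stands in for the cost of maintaining the criteria). Performance is summarized as the area under the ROC implied by the ratings; noisier criteria yield a lower estimate even though the underlying evidence distributions are unchanged.

```python
import random

def simulate_auc(n_trials=5000, d_prime=1.5, n_criteria=6,
                 criterion_sd=0.0, seed=0):
    """Estimate recognition performance (rating-based AUC) when ratings
    come from comparing evidence to possibly noisy decision criteria."""
    rng = random.Random(seed)
    # Evenly spaced base criteria spanning the two evidence distributions.
    lo, hi = -1.0, d_prime + 1.0
    base = [lo + (i + 1) * (hi - lo) / (n_criteria + 1)
            for i in range(n_criteria)]

    def rate(evidence):
        # Each criterion is perturbed independently on every trial,
        # modeling imperfect maintenance of the response boundaries.
        return sum(evidence > c + rng.gauss(0.0, criterion_sd)
                   for c in base) + 1

    old = [rate(rng.gauss(d_prime, 1.0)) for _ in range(n_trials)]
    new = [rate(rng.gauss(0.0, 1.0)) for _ in range(n_trials)]

    # AUC = P(old rating > new rating) + 0.5 * P(tie),
    # computed from the rating-count distributions.
    k = n_criteria + 1
    old_counts = [old.count(r) for r in range(1, k + 1)]
    new_counts = [new.count(r) for r in range(1, k + 1)]
    wins = ties = cum_below = 0
    for r in range(k):
        ties += old_counts[r] * new_counts[r]
        wins += old_counts[r] * cum_below  # new ratings strictly lower
        cum_below += new_counts[r]
    return (wins + 0.5 * ties) / (n_trials * n_trials)

stable = simulate_auc(criterion_sd=0.0)   # perfectly maintained criteria
noisy = simulate_auc(criterion_sd=1.5)    # heavily jittered criteria
print(round(stable, 3), round(noisy, 3))
```

Under these assumptions the jittered-criterion run produces a visibly lower AUC than the stable-criterion run, which is the qualitative pattern the abstract describes: decision noise depresses the measured level of recognition without any change in the memory signal itself.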