This study experimentally manipulated rating strategies to test their effects on the interdimensional variance, reliability, and validity of interview ratings. Undergraduate research participants (N = 180) rated 60 interview transcripts under 3 experimental conditions representing different rating strategies. Interdimensional variance was higher, internal consistency reliability was lower, and interrater reliability was higher in the condition that gave raters the least opportunity to form and use general impressions of interviewees. Rating strategy had no discernible effect on validity. The authors interpreted these results as indicating that the rating strategies differed in the degree to which they allowed raters to form idiosyncratic general impressions of interviewees. Such idiosyncratic halo effects explain why rating strategies that raise internal consistency reliability simultaneously lower interrater reliability.
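The trade-off stated in the final sentence can be illustrated with a minimal simulation (an illustrative sketch under assumed parameter values, not the authors' design or data). Each rater's score for an interviewee on a dimension is modeled as the interviewee's true dimension score plus a rater-specific "general impression" term shared across dimensions (the idiosyncratic halo) plus random error. Because the halo term is common to all dimensions within a rater, it inflates internal consistency (Cronbach's alpha across dimensions); because it is idiosyncratic to each rater, it depresses interrater agreement.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(halo_sd, n_interviewees=500, n_dims=6, n_raters=2, noise_sd=1.0):
    """Return a list of (interviewees x dimensions) rating matrices, one per rater."""
    # True scores: a general performance factor plus dimension-specific components
    general = rng.normal(0, 1, size=(n_interviewees, 1))
    specific = rng.normal(0, 1, size=(n_interviewees, n_dims))
    true_scores = general + specific
    ratings = []
    for _ in range(n_raters):
        # Idiosyncratic halo: one value per rater-interviewee pair,
        # constant across all dimensions that rater scores for that interviewee
        halo = rng.normal(0, halo_sd, size=(n_interviewees, 1))
        noise = rng.normal(0, noise_sd, size=(n_interviewees, n_dims))
        ratings.append(true_scores + halo + noise)
    return ratings

def cronbach_alpha(x):
    """Internal consistency across dimensions (dimensions play the role of items)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def interrater_r(a, b):
    """Correlation of two raters' mean ratings across interviewees."""
    return np.corrcoef(a.mean(axis=1), b.mean(axis=1))[0, 1]

for halo_sd in (0.0, 2.0):
    r1, r2 = simulate(halo_sd)
    print(f"halo_sd={halo_sd}: alpha={cronbach_alpha(r1):.2f}, "
          f"interrater r={interrater_r(r1, r2):.2f}")
```

Under these assumed variances, increasing the halo component raises within-rater alpha while lowering between-rater correlation, reproducing the pattern the abstract describes: strategies that permit stronger general impressions look more internally consistent but agree less across raters.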