The study presents a critical analysis of written expression curriculum-based measurement (WE-CBM) metrics derived from 3- and 10-min test lengths. Criterion validity and classification accuracy were examined for Total Words Written (TWW), Correct Writing Sequences (CWS), Percent Correct Writing Sequences (%CWS), and Correct Minus Incorrect Writing Sequences (CMIWS). Fourth-grade students (n = 109) from 6 schools participated in the study. To assess the criterion validity of each metric, total scores from the writing tasks were correlated with the composition subtest of the state achievement test. Each index was moderately correlated with the subtest. Correlations increased with the longer sampling period, although the increases were not statistically significant. The accuracy of each index in distinguishing between proficient and non-proficient writers on the state assessment was analyzed using discriminant function analysis and receiver operating characteristic (ROC) curves. CWS and CMIWS, indices that capture both production and accuracy, were most accurate in predicting proficiency. Classification accuracy improved with the increased sampling time. When cut scores were set to hold sensitivity above .90, specificity for each metric increased with the longer probes; when a 25th-percentile cut score was used, both sensitivity and specificity increased for all metrics with the longer probes. Visual analysis of the ROC curves reveals where the classification improvements occurred: the 10-min sample for CWS more accurately identified at-risk students in the center of the distribution. Without measurement to guide decisions, writers in the middle of the distribution are more difficult to classify than those who clearly excel or struggle. The findings have implications for screening with WE-CBM.
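The cut-score logic described in the abstract can be sketched in code. The snippet below is an illustrative example only, with hypothetical CWS scores, hypothetical proficiency labels, and invented function names; it is not the authors' analysis. It shows how, for a given metric, one can scan candidate cut scores and select the lowest cut that holds sensitivity at or above .90, which (because sensitivity rises and specificity falls as the cut increases) also maximizes specificity under that constraint.

```python
def sensitivity_specificity(scores, proficient, cut):
    """Flag students scoring below `cut` as at risk; return (sensitivity, specificity).

    Sensitivity: proportion of non-proficient writers correctly flagged.
    Specificity: proportion of proficient writers correctly not flagged.
    """
    true_pos = sum(1 for s, p in zip(scores, proficient) if not p and s < cut)
    false_neg = sum(1 for s, p in zip(scores, proficient) if not p and s >= cut)
    true_neg = sum(1 for s, p in zip(scores, proficient) if p and s >= cut)
    false_pos = sum(1 for s, p in zip(scores, proficient) if p and s < cut)
    return (true_pos / (true_pos + false_neg),
            true_neg / (true_neg + false_pos))

def lowest_cut_holding_sensitivity(scores, proficient, target=0.90):
    """Scan candidate cuts from low to high and return the first cut whose
    sensitivity meets `target`, along with its sensitivity and specificity.
    The lowest qualifying cut yields the highest specificity."""
    for cut in sorted(set(scores)) + [max(scores) + 1]:
        sens, spec = sensitivity_specificity(scores, proficient, cut)
        if sens >= target:
            return cut, sens, spec
    return None

# Hypothetical CWS scores and state-test proficiency labels (True = proficient).
cws = [12, 18, 22, 25, 28, 30, 33, 35, 40, 45, 50, 55]
prof = [False, False, False, False, False, True,
        False, True, True, True, True, True]

cut, sens, spec = lowest_cut_holding_sensitivity(cws, prof)
```

With these made-up data, the selected cut flags every non-proficient writer (sensitivity = 1.0) while misclassifying one proficient writer, mirroring the abstract's trade-off between holding sensitivity high and preserving specificity.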