Accurate classification of the emotional state of others is of vital importance to human social functioning and relies heavily on the extraction and processing of specific visual cues from faces. Although successful identification of a given facial expression has been shown to depend on the processing of specific visual features, it remains largely unknown how fixed the processing of these cues actually is. In a series of experiments we tested whether observers make use of different visual information from expressive faces depending on the nature of the categorization task. To this end, we determined the facial features crucial for categorizing three key facial expressions of emotion (fear, disgust, and anger) during “expressive or neutral” and “which expression” categorization tasks. For fear categorizations, we observed that the same high spatial-frequency features were used consistently irrespective of task, whereas low spatial-frequency features mattered only in tasks with a limited set of comparison categories (one or two alternatives). Moreover, information use from the low spatial-frequency bands was not fixed but varied with the comparison categories in the task, as participants exploited the most salient visual information available to perform the task at hand. These results provide novel evidence of flexible information use in categorizing expressive faces, and they highlight the crucial role of the categorization task in determining which spatial-frequency features are attended to and encoded during the categorization of facial expressions of emotion.