Many weather forecast providers believe that expressing forecast uncertainty as the worst-case scenario would be useful for general public end users. We tested this suggestion in 4 studies using realistic weather-related decision tasks involving high winds and low temperatures. College undergraduates given the statistical equivalent of the worst-case scenario (1 boundary of the 80% predictive interval) demonstrated a biased understanding of future weather conditions compared with those given both bounds or no uncertainty information. We argue that this was due to an anchoring effect: numeric estimates were closer to the worst-case scenario than was warranted, a bias that increased linearly as the anchor became more extreme. In many of the situations tested here, anchoring in numeric estimates also extended to subsequent binary decisions, leading participants given the worst-case scenario to take action more often than other participants did. These results suggest that worst-case scenario forecasts can mislead users, convincing them that wind speeds will be higher and temperatures lower than the forecast indicates. In addition, participants systematically “corrected” the forecast they were given. This effect was most prominent in the condition in which no uncertainty information was provided, suggesting that people feel compelled to take uncertainty into account even when the forecast does not acknowledge it. Both the anchoring and correction biases were least evident when both bounds were provided, suggesting that balanced uncertainty information leads to the best understanding of future weather conditions.