In a lexicographic semiorders model for preference, cues are searched in a subjective order, and an alternative is preferred if its value on a cue exceeds those of other alternatives by a threshold Δ, akin to a just noticeable difference in perception. We generalized this model from preference to inference and refer to it as Δ-inference. Unlike with preference, where accuracy is difficult to define, the problem a mind faces when making an inference is to select a Δ that can lead to accurate judgments. To find a solution to this problem, we applied Clyde Coombs’s theory of single-peaked preference functions. We show that the accuracy of Δ-inference can be understood as an approach–avoidance conflict between the decreasing usefulness of the first cue and the increasing usefulness of subsequent cues as Δ grows larger, resulting in a single-peaked function between accuracy and Δ. The peak of this function varies with the properties of the task environment: The more redundant the cues and the larger the differences in their information quality, the smaller the Δ. An analysis of 39 real-world task environments led to the surprising result that the best inferences are made when Δ is 0, which implies relying almost exclusively on the best cue and ignoring the rest. This finding provides a new perspective on the take-the-best heuristic. Overall, our study demonstrates the potential of integrating and extending established concepts, models, and theories from perception and preference to improve our understanding of how the mind makes inferences.
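The decision rule described above, lexicographic cue search with a stopping threshold Δ, can be sketched for the paired-comparison case. This is a minimal illustration, not the authors' implementation; the dict-based cue format, the function name, and returning `'guess'` when no cue discriminates are assumptions made here for clarity.

```python
def delta_inference(a, b, cue_order, delta):
    """Infer which of two alternatives, a or b, scores higher on the criterion.

    a, b      : dicts mapping cue names to numeric cue values (assumed format)
    cue_order : cue names in subjective order, most valid cue first
    delta     : threshold a cue difference must exceed to stop the search

    Returns 'a', 'b', or 'guess' if no cue discriminates beyond delta.
    """
    for cue in cue_order:
        diff = a[cue] - b[cue]
        # Search stops at the first cue whose difference exceeds delta,
        # akin to a just noticeable difference in perception.
        if abs(diff) > delta:
            return 'a' if diff > 0 else 'b'
    return 'guess'  # all cues within delta: no discrimination
```

With Δ = 0 and the cues ordered by validity, the rule stops at the first cue on which the alternatives differ at all, which is how the abstract's finding connects to the take-the-best heuristic: larger Δ values pass the decision on to later cues more often, while Δ = 0 lets the best cue settle almost every comparison.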