Cummins (1995) had subjects generate explanations of failures of naive causal conditional inferences, and showed that a heuristic "tallying" of the inference defeaters thus generated inversely predicts other subjects' confidence judgments in those inferences. This is the basis for a probability-free computational model of simple heuristic naive causal judgment. Judgment and decision-making research has treated intensional reasoning only as an approximation to extensional reasoning, for lack of a suitable intensional logic, as we illustrate with the Linda task (Tversky & Kahneman, 1983). To explain the retrieval, validation, and combination of defeater-cues, we integrate the tallying heuristic with an intensional logic: logic programming (LP). A distinctive feature of LP is the abnormality clause of its conditionals, p ∧ ¬ab → q. Inference from p to q proceeds unless there is evidence of abnormality; such evidence blocks the inference, leaving the conditional still true but the current case an exception. This feature is the foundation of a neural model of defeater retrieval. This logic, well supported in discourse processing and in reasoning (van Lambalgen & Hamm, 2004; Stenning & van Lambalgen, 2008), thus provides a mental process account of how models of judgment situations can be constructed, cues found, and their validities estimated, all within tractable intensional reasoning. Intensional LP and extensional probability work together when LP computes the causal and temporal foundations for Bayes nets, revealing a range of systems for modeling mental processes of judgment, from LP, through extensional conditional frequency reasoning, to full probability. This sharpens empirical questions about the relation of probability to cognition.
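The abnormality mechanism can be sketched in a few lines; this is a minimal illustrative reading of the LP conditional p ∧ ¬ab → q, not the authors' implementation, and the rule encoding, predicate names, and the `derivable` helper are all assumptions introduced here for exposition:

```python
def derivable(atom, facts, rules):
    """Forward inference: an atom holds if it is a fact, or some rule for it
    has all positive body atoms derivable and no negated body atom derivable.
    Negated literals use negation as failure: absence of evidence of
    abnormality counts as ¬ab (closed-world assumption)."""
    if atom in facts:
        return True
    for head, pos, neg in rules:
        if head == atom:
            if all(derivable(p, facts, rules) for p in pos) and \
               not any(derivable(n, facts, rules) for n in neg):
                return True
    return False

# Each rule is (head, positive body, negated body).
# "If p and nothing abnormal, then q"; a retrieved defeater-cue makes ab derivable.
rules = [("q", ["p"], ["ab"]),
         ("ab", ["defeater"], [])]

print(derivable("q", {"p"}, rules))              # no defeater: inference goes through -> True
print(derivable("q", {"p", "defeater"}, rules))  # defeater present: inference blocked -> False
```

Note that in the second case the conditional itself is not falsified; the current case is simply marked as an exception, which is the behavior the abstract attributes to the abnormality clause.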