Theories of decision-making often posit optimal or heuristic strategies for performing a task. Optimal strategies integrate information over time to achieve ideal outcomes; heuristic strategies apply a shortcut or simplification to make decisions faster or easier. In this article, we use a computational framework to study the evolution of both types of decision strategies in artificial agents. The fitness of these agents is assessed by their performance on a sequential decision task in which they must correctly identify the source of as many incoming information signals as they can within a finite time span. To examine which decision strategies evolve as a function of task characteristics, we manipulate the quality of decision information (difficulty) and the magnitude of punishments for incorrect answers. We find that trivial (but optimal) strategies evolve when punishment magnitude is lower than the reward magnitude for correct answers, and that optimal information-integrating strategies evolve when either punishment magnitude is low or information quality is high. However, the computational demands of the task grow substantially as information quality decreases and punishment magnitude increases. In these cases, agents use heuristics to maintain decision accuracy despite their limited cognitive resources. These results suggest that heuristics are an evolved response to environments that place high demands on cognitive resources, where optimal strategies are particularly difficult to achieve.
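The optimal information-integrating strategy for a source-identification task of this kind corresponds to sequential evidence accumulation, in which incoming signals update a running log-likelihood ratio until it crosses a decision bound. The sketch below is an illustration of that general idea, not the article's actual agent model; the source means `mu_a`, `mu_b`, the noise level `sigma`, and the bound `threshold` are all hypothetical parameters chosen for the example.

```python
import random

def identify_source(signals, threshold=3.0, mu_a=1.0, mu_b=-1.0, sigma=1.0):
    """Accumulate the log-likelihood ratio of source A over source B for
    each incoming signal; commit to a decision once the evidence crosses
    +threshold (choose A) or -threshold (choose B).
    Returns the chosen source and the number of signals consumed."""
    evidence = 0.0
    n = 0
    for n, x in enumerate(signals, start=1):
        # Log-likelihood ratio of a Gaussian sample under the two sources.
        log_lr = (-(x - mu_a) ** 2 + (x - mu_b) ** 2) / (2 * sigma ** 2)
        evidence += log_lr
        if evidence >= threshold:
            return "A", n
        if evidence <= -threshold:
            return "B", n
    # Time ran out: fall back on the sign of the accumulated evidence.
    return ("A" if evidence >= 0 else "B"), n

if __name__ == "__main__":
    random.seed(0)
    # Simulate trials where the true source is A (mean +1.0).
    trials = 1000
    correct = sum(
        identify_source(random.gauss(1.0, 1.0) for _ in range(50))[0] == "A"
        for _ in range(trials)
    )
    print(f"accuracy: {correct / trials:.2f}")
```

Raising `threshold` trades speed for accuracy, which is the tension the task's punishment and difficulty manipulations probe: higher punishments favor more cautious bounds, while lower information quality (smaller separation between `mu_a` and `mu_b` relative to `sigma`) forces longer integration to reach the same accuracy.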