Errors in detecting randomness are often explained in terms of biases and misconceptions. We propose and provide evidence for an account that characterizes the contribution of the inherent statistical difficulty of the task. Our account is based on a Bayesian statistical analysis, focusing on the fact that a random process is a special case of a systematic process, meaning that the hypothesis of randomness is nested within the hypothesis of systematicity. This analysis shows that randomly generated outcomes are still reasonably likely to have come from a systematic process and are thus only weakly diagnostic of a random process. We tested this account in three experiments. Experiments 1 and 2 showed that the low accuracy in judging whether a sequence of coin flips is random (or biased toward heads or tails) is attributable to the weak evidence provided by random sequences. Although randomness judgments were less accurate than judgments involving non-nested hypotheses in the same task domain, this difference disappeared once the strength of the available evidence was equated. Experiment 3 extended this finding to judging whether a sequence was random or exhibited sequential dependence, showing that the distribution of statistical evidence has an effect that complements known misconceptions.
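The nested-hypothesis argument can be illustrated with a minimal Bayes-factor calculation. The sketch below (a simplified illustration, not the paper's exact model) compares a "random" hypothesis in which a coin is fair against a "systematic" hypothesis in which the coin's bias is uniformly distributed; because the fair coin is a special case of the biased coin, even a perfectly representative random sequence yields only weak evidence for randomness, whereas an extreme sequence yields strong evidence against it.

```python
import math

def bayes_factor_random_vs_biased(heads: int, tails: int) -> float:
    """Bayes factor for H_random (theta = 0.5) over H_biased
    (theta ~ Uniform(0, 1)), for a specific sequence with the
    given numbers of heads and tails.

    Values near 1 mean the sequence is weakly diagnostic;
    values far below 1 favor the biased (systematic) hypothesis.
    """
    n = heads + tails
    # Likelihood of the exact sequence under a fair coin.
    p_random = 0.5 ** n
    # Marginal likelihood under the biased hypothesis:
    # integral over theta of theta^h * (1 - theta)^t,
    # which equals the Beta function B(h + 1, t + 1).
    log_beta = (math.lgamma(heads + 1) + math.lgamma(tails + 1)
                - math.lgamma(n + 2))
    return p_random / math.exp(log_beta)

# A maximally "representative" random-looking sequence of 8 flips:
print(bayes_factor_random_vs_biased(4, 4))  # -> 2.4609375 (weak evidence for randomness)
# All heads: strong evidence against randomness (about 28:1 for bias).
print(bayes_factor_random_vs_biased(8, 0))  # -> 0.03515625
```

The asymmetry is the point of the analysis: systematic sequences can strongly rule out randomness, but random sequences can never strongly rule out systematicity, so judgments of randomness are inherently harder.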