The Tipping Point of Perceived Change: Asymmetric Thresholds in Diagnosing Improvement Versus Decline

Change often emerges from a series of small doses. For example, a person may conclude that a happy relationship has eroded not from one obvious fight but from smaller unhappy signs that at some point “add up.” Everyday fluctuations therefore create ambiguity about when they reflect substantive shifts versus mere noise. Ten studies reveal an asymmetry in this tipping point at which people conclude “official” change: people demand less evidence to diagnose lasting decline than lasting improvement, despite equivalent evidential quality. This effect was pervasive and replicated across many domains and parameters. For example, a handful of poor grades, bad games, and gained pounds led participants to diagnose intellect, athleticism, and health as “officially” changed, yet corresponding positive signs were dismissed as fickle flukes (Studies 1a, 1b, and 1c). The asymmetry also manifested in real-time reactions: participants interpreted the same graphs of change in the economy and public health as more meaningful when framed as depicting decline rather than improvement (Study 2), and were more likely to gamble actual money on continued bad luck than on continued good luck (Study 3). Why? Effects held across self/other change, added/subtracted change, and intended/unintended change (Studies 4a, 4b, and 4c), suggesting a generalized negativity bias. Teasing this apart, we highlight a novel “entropy” component beyond standard accounts such as risk aversion: good things seem more truly capable of losing their positive qualities than bad things seem capable of gaining them, rendering signs of decline more immediately diagnostic (Studies 5 and 6). An asymmetric tipping point carries theoretical and practical implications for how people may react inequitably to smaller signs of change.
