The representation of spatial information related to an event can influence behavior even when location is task-irrelevant, as in the case of Stimulus–Response (S-R) compatibility effects in the Simon task. However, unlike the single-modality situations often used to study the Simon effect, in real-life scenarios various sensory modalities provide spatial information coded in different coordinate systems. Here, we address the expression of S-R compatibility effects in mixed-modality contexts, where events can occur in one of several sensory modalities (i.e., vision, touch, or audition). The results confirm that, in single-modality conditions, Simon effects in vision are expressed in an external spatial frame of reference, whereas tactile information is coded anatomically. Remarkably, when visual and tactile trials were mixed unpredictably, the Simon effect disappeared in vision, whereas tactile Simon effects remained expressed in their own (anatomical) frame of reference. Mixing visual and auditory stimuli did not obliterate the visual Simon effect, and S-R compatibility effects in an external reference frame were evident for both modalities. The extinction of visual Simon effects as a result of mixing visual and tactile modalities can be interpreted as a consequence of the dynamic reorganization of the weights associated with the different sources of spatial information at play.