Visuo-tactile integration and body ownership during self-generated action

Abstract

Although there is increasing knowledge about how visual and tactile cues from the hands are integrated, little is known about how self-generated hand movements affect such multisensory integration. Visuo-tactile integration often occurs under highly dynamic conditions requiring sensorimotor updating. Here, we quantified visuo-tactile integration by measuring cross-modal congruency effects (CCEs) in different bimanual hand movement conditions using a robotic platform. We found that classical CCEs also occurred during bimanual self-generated hand movements, and that such movements lowered the magnitude of visuo-tactile CCEs compared with static conditions. Adding a temporal visuo-motor delay between hand movements and visual feedback decreased visuo-tactile integration, body ownership, and the sense of agency. These data show that visual stimuli interfere less with the perception of tactile stimuli during movement than during static conditions, especially when visual feedback is decoupled from predictive motor information. The results suggest that current models of visuo-tactile integration need to be extended to account for multisensory integration in dynamic conditions.
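For context, the CCE is conventionally computed as the difference in tactile discrimination performance (response time and/or error rate) between trials with spatially incongruent versus congruent visual distractors. The Python sketch below illustrates this standard computation on invented reaction times; the variable names and data are illustrative and do not reflect this study's actual analysis pipeline.

    import numpy as np

    # Illustrative reaction times (ms) for tactile target discrimination,
    # split by whether the concurrent visual distractor appeared at a
    # congruent or incongruent location. Values are invented for this sketch.
    rt_congruent = np.array([512.0, 534.0, 498.0, 545.0, 521.0])
    rt_incongruent = np.array([578.0, 596.0, 561.0, 602.0, 584.0])

    # The cross-modal congruency effect (CCE) is the mean incongruent RT
    # minus the mean congruent RT; a larger CCE indicates stronger
    # interference of the visual distractor with tactile perception.
    cce = rt_incongruent.mean() - rt_congruent.mean()
    print(f"CCE = {cce:.1f} ms")

Under this definition, the finding that movement lowers the CCE corresponds to a smaller incongruent-minus-congruent difference during self-generated movement than in static conditions.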

Significance Statement

We used a new robotic and virtual reality platform to investigate visuo-tactile integration (as assessed by the cross-modal congruency effect) in static and dynamic conditions. We report that movement reduces visual interference with touch, and that a visuo-motor mismatch (delay) modulates the spatial representation of visuo-tactile cues as well as the sense of ownership. Current models of visuo-tactile integration need to be extended to account for multisensory integration in dynamic conditions.
