Automatic and robust noise suppression in EEG and MEG: The SOUND algorithm
Electroencephalography (EEG) and magnetoencephalography (MEG) often suffer from noise- and artifact-contaminated channels and trials. Conventionally, EEG and MEG data are inspected visually and cleaned accordingly, e.g., by identifying and rejecting so-called "bad" channels. This approach has several shortcomings: visual inspection is laborious, the rejection criteria are subjective, and the process does not fully utilize all the information in the collected data.
Here, we present noise-cleaning methods based on modeling the multi-sensor and multi-trial data. These methods offer objective, automatic, and robust removal of noise and disturbances by taking into account the sensor- or trial-specific signal-to-noise ratios.
We introduce a method called the source-estimate-utilizing noise-discarding algorithm (the SOUND algorithm). SOUND employs anatomical information about the head to cross-validate the data across sensors, allowing us to identify and suppress noise and artifacts in EEG and MEG. Furthermore, we discuss the theoretical background of SOUND and show that it is a special case of the well-known Wiener estimators. We explain how a completely data-driven Wiener estimator (DDWiener) can be used when no anatomical information is available. DDWiener is easily applicable to any linear multivariate problem; as a demonstrative example, we show how DDWiener can be utilized when estimating event-related EEG/MEG responses.
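To make the cross-validation idea concrete, the following Python sketch implements a simplified leave-one-out, Wiener-style channel estimator: each channel is predicted from the remaining channels, the prediction residual serves as that channel's noise estimate, and the channel is then shrunk toward its prediction in proportion to its estimated signal-to-noise ratio. The function name and the exact shrinkage form are our own illustrative simplification, not the authors' DDWiener or SOUND formulation (which, in SOUND's case, additionally uses an anatomical forward model).

```python
import numpy as np

def ddwiener_clean(data):
    """Illustrative leave-one-out Wiener-style cleaning.

    data : ndarray, shape (n_channels, n_samples)
    Returns an array of the same shape in which each channel is a
    weighted mix of its recorded signal and its least-squares
    prediction from all other channels.
    """
    n_chan = data.shape[0]
    cleaned = np.empty_like(data)
    for i in range(n_chan):
        others = np.delete(data, i, axis=0)
        # Predict channel i from the remaining channels (ordinary
        # least squares); the residual variance is the noise estimate.
        coefs, *_ = np.linalg.lstsq(others.T, data[i], rcond=None)
        pred = coefs @ others
        noise_var = np.var(data[i] - pred)
        signal_var = max(np.var(data[i]) - noise_var, 0.0)
        # Wiener-style gain: noisy channels lean on the prediction,
        # clean channels are left nearly untouched.
        w = signal_var / (signal_var + noise_var + 1e-12)
        cleaned[i] = w * data[i] + (1.0 - w) * pred
    return cleaned
```

In this toy form, a channel dominated by sensor noise receives a small gain `w` and is largely replaced by its cross-channel prediction, which is the intuition behind using the sensor-specific signal-to-noise ratios described above.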
We validated the performance of SOUND with simulations and by applying SOUND to multiple EEG and MEG datasets. SOUND considerably improved the data quality, exceeding the performance of the widely used channel-rejection and interpolation scheme. SOUND also helped localize the underlying neural activity by preventing noise from contaminating the source estimates. SOUND can be used to detect and reject noise in functional brain data, enabling improved identification of active brain areas.