This example demonstrates how to perform representational similarity analysis (RSA) on
MEEG data containing magnetometers, gradiometers and EEG channels. In this scenario
there are two important things we need to keep in mind:

- Different sensor types see the underlying sources from different perspectives, hence
  spatial searchlight patches based on the sensor positions are a bad idea. We will
  perform a searchlight over time only, pooling data from all sensors at each time point.
- The sensor types have different units of measurement, hence their numeric data are of
  different orders of magnitude. If we don't compensate for this, only the sensors with
  data in the highest order of magnitude will matter when computing RDMs. To compensate,
  we will compute a noise covariance matrix and use it to whiten the data.
The dataset will be the MNE-sample dataset: a collection of 288 epochs in which the
participant was presented with an auditory beep in either the left or right ear, or a
visual stimulus in either the left or right visual field.
# sphinx_gallery_thumbnail_number=2

# Import required packages
import operator

import mne
import numpy as np

import mne_rsa

mne.set_log_level(False)  # Be less verbose
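For orientation, the four stimulus conditions in the sample dataset are typically
encoded with an `event_id` mapping like the following. This is a sketch: the exact
condition names, and the loading steps in the comments, are our assumptions based on
the MNE-Python tutorials, not part of this example.

```python
# Hypothetical event_id mapping for the four stimulus conditions
# (auditory/visual x left/right) in the MNE sample dataset; the trigger
# codes 1-4 follow the MNE-Python tutorials, but treat this as an assumption.
event_id = {
    "auditory/left": 1,
    "auditory/right": 2,
    "visual/left": 3,
    "visual/right": 4,
}

# Loading would then proceed roughly along these lines (downloads the
# dataset on first use, so it is left as a comment here):
#   data_path = mne.datasets.sample.data_path()
#   raw = mne.io.read_raw_fif(
#       data_path / "MEG" / "sample" / "sample_audvis_filt-0-40_raw.fif"
#   )
#   events = mne.find_events(raw)
#   epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5)
```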
To estimate the differences in signal amplitude between the different sensor types, we
compute the (co-)variance during a period of relative rest in the signal: the baseline
period (-200 to 0 milliseconds). See MNE-Python’s covariance tutorial for
details.
Now we compute a reference RDM (simply encoding visual vs. auditory condition) and RSA
it against the sensor data in a sliding window across time.
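Such a reference RDM can be built directly from the condition labels. A minimal
sketch, assuming an even 144/144 auditory/visual split for illustration (the real
example derives the labels from `epochs.events`):

```python
import numpy as np

# Hypothetical modality label per epoch: 0 = auditory, 1 = visual.
# The even split across the 288 epochs is assumed for illustration.
modality = np.repeat([0, 1], 144)

# Reference RDM: dissimilarity 1 between modalities, 0 within a modality.
model_rdm = (modality[:, None] != modality[None, :]).astype(float)

# The temporal searchlight would then look something like the call below
# (argument names follow mne_rsa.rsa_epochs; the exact values are placeholders,
# and spatial_radius=None pools all sensors, as discussed above):
#   ev_rsa = mne_rsa.rsa_epochs(
#       epochs, model_rdm, noise_cov=noise_cov,
#       spatial_radius=None, temporal_radius=0.1,
#   )
```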