Sensor-level RSA using a searchlight

This example demonstrates how to perform representational similarity analysis (RSA) on EEG data, using a searchlight approach.

In the searchlight approach, representational similarity is computed between the model and searchlight “patches”. A patch is defined by a seed point (e.g. sensor Pz) and everything within a given radius (e.g. all sensors within 4 cm of Pz). Patches are created for all possible seed points (e.g. all sensors), so you can think of it as a “searchlight” that moves from seed point to seed point, and everything that falls within the spotlight is used in the computation.

The radius of a searchlight can be defined in space, in time, or both. In this example, our searchlight will have a spatial radius of 4.5 cm and a temporal radius of 50 ms.
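To make the notion of a patch concrete, here is a minimal sketch that collects all sensors within 4.5 cm of Pz. For illustration it uses the sensor positions of MNE-Python's standard 10-20 montage; the actual analysis below uses the positions stored with the kiloword data.

import numpy as np
import mne

# Sensor positions (in meters) from the standard 10-20 montage
montage = mne.channels.make_standard_montage('standard_1020')
ch_pos = montage.get_positions()['ch_pos']  # dict: channel name -> (x, y, z)

# The searchlight patch seeded at Pz: every sensor within 4.5 cm of the seed
seed = ch_pos['Pz']
patch = [name for name, pos in ch_pos.items()
         if np.linalg.norm(pos - seed) <= 0.045]
print(patch)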

The dataset will be the kiloword dataset [1]: approximately 1,000 words were presented to 75 participants in a go/no-go lexical decision task while event-related potentials (ERPs) were recorded.

# sphinx_gallery_thumbnail_number=2

# Import required packages
import mne
import mne_rsa

MNE-Python contains a built-in data loader for the kiloword dataset. We use it here to read the data as 960 epochs. Each epoch represents the brain response to a single word, averaged across all the participants. For this example, we speed up the computation, at a cost of temporal precision, by downsampling the data from the original 250 Hz to 100 Hz.

data_path = mne.datasets.kiloword.data_path(verbose=True)
epochs = mne.read_epochs(data_path / 'kword_metadata-epo.fif')
epochs = epochs.resample(100)
Reading C:\Users\wmvan\mne_data\MNE-kiloword-data\kword_metadata-epo.fif ...
Isotrak not found
    Found the data of interest:
        t =    -100.00 ...     920.00 ms
        0 CTF compensation matrices available
Adding metadata with 8 columns
960 matching events found
No baseline correction applied
0 projection items activated

The kiloword data was erroneously stored with sensor locations given in centimeters instead of meters. We will fix that now. For your own data, the sensor locations are likely properly stored in meters, so you can skip this step.

# Convert the sensor positions from centimeters to meters
for ch in epochs.info['chs']:
    ch['loc'] /= 100

The epochs object contains a .metadata field with information about the 960 words that were used in the experiment. Let’s have a look at the metadata for 10 random words.
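Since the metadata is stored as a pandas DataFrame, one way to draw such a random sample is:

epochs.metadata.sample(10)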

         WORD  Concreteness  WordFrequency  OrthographicDistance  NumberOfLetters  BigramFrequency  ConsonantVowelProportion  VisualComplexity
312     honey          5.65       2.570543                  1.75              5.0       670.200000                  0.400000         70.445483
 95     snout          5.90       1.672098                  1.70              5.0       202.600000                  0.600000         62.797616
718  fracture          5.30       1.579784                  2.70              8.0       472.500000                  0.625000         57.818087
345    health          3.25       3.373280                  1.85              6.0       450.500000                  0.666667         64.356975
687  exchange          3.95       2.888741                  2.75              8.0       495.000000                  0.625000         73.889893
 43      drip          5.00       1.447158                  1.70              4.0       429.250000                  0.750000         63.003788
543     phase          3.10       2.745075                  1.85              5.0       472.000000                  0.600000         76.458061
117    estate          5.45       2.820201                  1.85              6.0       415.333333                  0.500000         66.833060
626   feeling          1.85       3.402261                  1.75              7.0       805.714286                  0.571429         63.803300
 41      dune          5.30       1.397940                  1.25              4.0       349.000000                  0.500000         72.852817


Let’s pick something obvious for this example and build a dissimilarity matrix (DSM) based on the number of letters in each word.

dsm_vis = mne_rsa.compute_dsm(epochs.metadata[['NumberOfLetters']],
                              metric='euclidean')
mne_rsa.plot_dsms(dsm_vis)
[Figure: the model DSM, based on the number of letters in each word]
<Figure size 200x200 with 2 Axes>

The above DSM will serve as our “model” DSM. In this example RSA analysis, we are going to compare the model DSM against DSMs created from the EEG data. The EEG DSMs will be created using a “searchlight” pattern. We are using squared Euclidean distance for our DSM metric, since we only have a few data points in each searchlight patch. Feel free to play around with other metrics.

rsa_result = mne_rsa.rsa_epochs(
    epochs,                           # The EEG data
    dsm_vis,                          # The model DSM
    epochs_dsm_metric='sqeuclidean',  # Metric to compute the EEG DSMs
    rsa_metric='kendall-tau-a',       # Metric to compare model and EEG DSMs
    spatial_radius=0.045,             # Spatial radius of the searchlight patch in meters.
    temporal_radius=0.05,             # Temporal radius of the searchlight patch in seconds.
    tmin=0.15, tmax=0.25,             # To save time, only analyze this time interval
    n_jobs=1,                         # Only use one CPU core. Increase this for more speed.
    n_folds=None,                     # Don't use any cross-validation
    verbose=False)                    # Set to True to display a progress bar
Performing RSA between Epochs and 1 model DSM(s)
    Spatial radius: 0.045 meters
    Using 29 sensors
    Temporal radius: 5 samples
    Time interval: 0.15-0.25 seconds
Automatic determination of folds: 1 (no cross-validation)
Creating spatio-temporal searchlight patches

The result is packed inside an MNE-Python mne.Evoked object. This object defines many plotting functions, for example mne.Evoked.plot_topomap() to look at the spatial distribution of the RSA values. By default, the signal is assumed to be in microvolts, so we need to explicitly inform the plotting function that we are plotting RSA values and tweak the range of the colormap.

rsa_result.plot_topomap(rsa_result.times, units=dict(eeg='kendall-tau-a'),
                        scalings=dict(eeg=1), cbar_fmt='%.4f', vmin=0, nrows=2,
                        sphere=1)
[Figure: topomaps of the RSA values (kendall-tau-a) from 0.150 s to 0.250 s, in steps of 0.010 s]
<MNEFigure size 1050x415 with 12 Axes>
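Other mne.Evoked plotting functions work as well. For instance, a butterfly plot of the RSA time courses across all sensors could be drawn like this, again telling MNE-Python that the values are not microvolts:

rsa_result.plot(units=dict(eeg='kendall-tau-a'), scalings=dict(eeg=1),
                spatial_colors=True)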

Unsurprisingly, we get the highest correspondence between the number of letters and the EEG signal at the sensors over the visual word form area.
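To pin down where and when this correspondence is strongest, one could for instance query the peak of the RSA values:

# Find the sensor and time point with the highest RSA value
peak_ch, peak_time = rsa_result.get_peak()
print(f'Peak RSA value at sensor {peak_ch}, {peak_time:.3f} s')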
