Construct a model RDM
This example shows how to create RDMs from arbitrary data. A common use case is to construct a "model" RDM to compare against brain data using RSA. In this example, we will create an RDM based on the length of the words shown during an EEG experiment.
# Import required packages
import mne
import mne_rsa
MNE-Python contains a built-in data loader for the kiloword dataset, which is used here as an example dataset. Since we only need the words shown during the experiment, which are stored in the metadata, we can pass preload=False to prevent MNE-Python from loading the EEG data, which is a nice speed gain.
data_path = mne.datasets.kiloword.data_path(verbose=True)
epochs = mne.read_epochs(data_path / "kword_metadata-epo.fif", preload=False)
# Show the metadata of 10 random epochs
print(epochs.metadata.sample(10))
Using default location ~/mne_data for kiloword...
Downloading file 'MNE-kiloword-data.tar.gz' from 'https://osf.io/qkvf9/download?version=1' to '/home/vanvlm1/mne_data'.
Untarring contents of '/home/vanvlm1/mne_data/MNE-kiloword-data.tar.gz' to '/home/vanvlm1/mne_data'
Download complete in 03s (24.3 MB)
Reading /home/vanvlm1/mne_data/MNE-kiloword-data/kword_metadata-epo.fif ...
Isotrak not found
Found the data of interest:
t = -100.00 ... 920.00 ms
0 CTF compensation matrices available
Adding metadata with 8 columns
960 matching events found
No baseline correction applied
0 projection items activated
         WORD  Concreteness  WordFrequency  OrthographicDistance  NumberOfLetters  BigramFrequency  ConsonantVowelProportion  VisualComplexity
547     trust          2.45       2.828015                  1.60              5.0       510.000000                  0.800000         53.993651
324     tonic          5.30       1.845098                  1.70              5.0       702.600000                  0.600000         56.868425
335     flail          4.65       0.903090                  1.85              5.0       472.200000                  0.600000         48.824559
419   charter          4.40       2.155336                  1.60              7.0       919.000000                  0.714286         60.155132
754      chin          5.85       2.660865                  1.45              4.0       637.750000                  0.750000         59.599664
741      tree          6.70       3.111934                  1.65              4.0       741.250000                  0.500000         62.240529
806     sheen          4.00       1.662758                  1.80              5.0       592.000000                  0.600000         72.735896
78      bench          5.75       2.539076                  1.75              5.0       633.800000                  0.800000         72.166025
593    appeal          2.90       2.915927                  2.40              6.0       340.333333                  0.500000         73.575297
435  material          5.35       3.235781                  2.45              8.0       987.750000                  0.500000         62.126647
Now we are ready to create the “model” RDM, which will encode the difference in length between the words shown during the experiment.
rdm = mne_rsa.compute_rdm(epochs.metadata.NumberOfLetters, metric="euclidean")
# Plot the RDM
fig = mne_rsa.plot_rdms(rdm, title="Word length RDM")
fig.set_size_inches(3, 3) # Make figure a little bigger to show axis properly
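To build intuition for what compute_rdm does here: it computes pairwise distances between the items, and for a one-dimensional feature such as word length, the Euclidean distance between two words reduces to the absolute difference in their letter counts. A minimal sketch using SciPy directly (the word lengths below are made up for illustration, not taken from the kiloword dataset):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Hypothetical word lengths standing in for epochs.metadata.NumberOfLetters
lengths = np.array([5.0, 7.0, 4.0, 6.0])

# pdist expects a 2-D array of observations, so add a feature axis.
# For a single feature, Euclidean distance is just |length_i - length_j|.
rdm = pdist(lengths[:, None], metric="euclidean")

# pdist returns the condensed upper triangle; squareform expands it
# into the full symmetric matrix with zeros on the diagonal.
full = squareform(rdm)
print(full)
```

For example, the entry pairing the 5-letter and 7-letter words is 2.0. This should match the RDM that compute_rdm produces with metric="euclidean" on the same values.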

Total running time of the script: (0 minutes 4.109 seconds)