.. rst-class:: sphx-glr-example-title
.. _sphx_glr_auto_examples_plot_simulation.py:
DICS for power mapping and connectivity analysis
================================================
In this tutorial, we're going to simulate two signals originating from two
locations on the cortex. These signals will be sine waves, so we'll be looking
at oscillatory activity (as opposed to evoked activity). These signals will
also be coherent_, which means they are correlated in the frequency domain.
Coherence is thought to be indicative of communication between brain areas and
can be used as a measure of functional connectivity [1]_. So we are effectively
simulating two active brain regions that communicate with each other.

We'll be using dynamic imaging of coherent sources (DICS) [2]_ to map out
all-to-all functional connectivity between brain regions. Let's see if we can
find our two simulated coherent sources.

This tutorial aims to give you an introduction to functional connectivity
analysis on a toy dataset. Analyzing a real dataset entails many methodological
considerations, which are discussed in [3]_.
.. _coherent: https://en.wikipedia.org/wiki/Coherence_(signal_processing)
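
Before we use coherence below, here is a minimal, self-contained sketch of the
idea (not part of the tutorial script; the 10 Hz sine, noise level, and seed
are arbitrary choices made for illustration): two signals that share a common
oscillation plus independent noise are coherent at that frequency and largely
incoherent elsewhere.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 50.
t = np.arange(10 * int(fs)) / fs

# Both signals contain the same 10 Hz oscillation, but independent noise.
common = np.sin(2 * np.pi * 10 * t)
x = common + 0.5 * rng.standard_normal(len(t))
y = common + 0.5 * rng.standard_normal(len(t))

# Magnitude-squared coherence peaks at the shared frequency.
f, coh = coherence(x, y, fs=fs, nperseg=128)
peak = f[np.argmax(coh)]
```

The peak of ``coh`` lands near 10 Hz, mirroring what we will see for the
simulated cortical signals later in this tutorial.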
.. code-block:: default

    # Author: Marijn van Vliet
    #
    # License: BSD (3-clause)
Setup
-----
We first import the required packages to run this tutorial and define a list
of filenames for various things we'll be using.
.. code-block:: default

    import os.path as op

    import numpy as np
    from scipy.signal import welch, coherence
    from mayavi import mlab
    from matplotlib import pyplot as plt

    # The conpy package implements DICS connectivity
    from conpy import (dics_connectivity, all_to_all_connectivity_pairs,
                       one_to_all_connectivity_pairs, forward_to_tangential)

    # MNE implements all other MEG/EEG analysis tools
    import mne
    from mne.datasets import sample, testing
    from mne.minimum_norm import make_inverse_operator, apply_inverse
    from mne.time_frequency import csd_morlet
    from mne.beamformer import make_dics, apply_dics_csd

    # Suppress irrelevant output
    mne.set_log_level('ERROR')

    # We use the MEG and MRI setup from the MNE-sample dataset
    data_path = sample.data_path(download=False)
    subjects_dir = op.join(data_path, 'subjects')
    mri_path = op.join(subjects_dir, 'sample')

    # Filenames for various files we'll be using
    meg_path = op.join(data_path, 'MEG', 'sample')
    trans_fname = op.join(meg_path, 'sample_audvis_raw-trans.fif')
    fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')

    # Later on, we'll use the forward model defined in the MNE-testing dataset
    testing_path = op.join(testing.data_path(download=False), 'MEG', 'sample')
    fwd_lim_fname = op.join(testing_path,
                            'sample_audvis_trunc-meg-eeg-oct-4-fwd.fif')

    # Seed for the random number generator
    np.random.seed(42)
Data simulation
---------------
The following function generates a timeseries that contains an oscillator,
whose frequency fluctuates a little over time, but stays close to 10 Hz.
We'll use this function to generate our two signals.
.. code-block:: default

    sfreq = 50  # Sampling frequency of the generated signal
    times = np.arange(10. * sfreq) / sfreq  # 10 seconds of signal


    def coh_signal_gen():
        """Generate an oscillating signal.

        Returns
        -------
        signal : ndarray
            The generated signal.
        """
        t_rand = 0.001  # Variation in the instantaneous frequency of the signal
        std = 0.1  # Std-dev of the random fluctuations added to the signal
        base_freq = 10.  # Base frequency of the oscillators in Hertz
        n_times = len(times)

        # Generate an oscillator with varying frequency and phase lag.
        iflaw = base_freq / sfreq + t_rand * np.random.randn(n_times)
        signal = np.exp(1j * 2.0 * np.pi * np.cumsum(iflaw))
        signal *= np.conj(signal[0])
        signal = signal.real

        # Add some random fluctuations to the signal.
        signal += std * np.random.randn(n_times)
        signal *= 1e-7

        return signal
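
As a quick sanity check (not part of the original script), we can re-create the
generator's core steps in a self-contained form and confirm that its spectral
peak lands near the 10 Hz base frequency. The ``rng`` seed and the omission of
the final ``1e-7`` amplitude scaling (which does not affect the peak location)
are choices made here for the check:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(42)
sfreq = 50
times = np.arange(10. * sfreq) / sfreq

# Instantaneous-frequency law: ~0.2 cycles/sample = 10 Hz at fs = 50 Hz,
# with small random fluctuations.
iflaw = 10. / sfreq + 0.001 * rng.standard_normal(len(times))
signal = np.exp(1j * 2.0 * np.pi * np.cumsum(iflaw))
signal = (signal * np.conj(signal[0])).real
signal += 0.1 * rng.standard_normal(len(times))

# The power spectrum should peak close to the 10 Hz base frequency.
f, p = welch(signal, fs=sfreq, nperseg=128, nfft=256)
peak_freq = f[np.argmax(p)]
```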
Let's simulate two timeseries and plot some basic information about them.
.. code-block:: default

    signal1 = coh_signal_gen()
    signal2 = coh_signal_gen()

    plt.figure(figsize=(8, 4))

    # Plot the timeseries
    plt.subplot(221)
    plt.plot(times, signal1)
    plt.xlabel('Time (s)')
    plt.title('Signal 1')

    plt.subplot(222)
    plt.plot(times, signal2)
    plt.xlabel('Time (s)')
    plt.title('Signal 2')

    # Power spectrum of the first timeseries
    f, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256)
    plt.subplot(223)
    plt.plot(f[:100], p[:100])  # Only plot frequencies below ~20 Hz
    plt.xlabel('Frequency (Hz)')
    plt.ylabel('Power')
    plt.title('Power spectrum of signal 1')

    # Compute the coherence between the two timeseries
    f, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64)
    plt.subplot(224)
    plt.plot(f[:50], coh[:50])
    plt.xlabel('Frequency (Hz)')
    plt.ylabel('Coherence')
    plt.title('Coherence between the timeseries')

    plt.tight_layout()
.. image:: /auto_examples/images/sphx_glr_plot_simulation_001.png
   :alt: Signal 1, Signal 2, Power spectrum of signal 1, Coherence between the timeseries
   :class: sphx-glr-single-img
Now we put the signals at two locations on the cortex. We construct a
:class:`mne.SourceEstimate` object to store them in.
.. code-block:: default

    # The locations on the cortex where the signal will originate from. These
    # locations are indicated as vertex numbers.
    source_vert1 = 146374
    source_vert2 = 33830

    # Construct a SourceEstimate object that describes the signal at the cortical
    # level.
    stc = mne.SourceEstimate(
        np.vstack((signal1, signal2)),  # The two signals
        vertices=[[source_vert1], [source_vert2]],  # Their locations
        tmin=0,
        tstep=1. / sfreq,
        subject='sample',  # We use the brain model of the MNE-sample dataset
    )
Before we simulate the sensor-level data, let's define a signal-to-noise
ratio. You are encouraged to play with this parameter and see the effect of
noise on our results.
.. code-block:: default

    SNR = 5  # Signal-to-noise ratio. Decrease to add more noise.
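
A note on what this parameter does: in the mixing scheme used below, the noise
is first scaled to have the same norm as the signal, and the two are then
combined as ``SNR * signal + noise``, so the resulting amplitude ratio is
exactly ``SNR`` (a power ratio of ``SNR ** 2``). A small self-contained sketch
of that arithmetic, using toy data rather than the MEG simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
SNR = 5
signal = rng.standard_normal(1000)
noise = rng.standard_normal(1000)

# Scale the noise to the same norm as the signal, as done below.
noise *= np.linalg.norm(signal) / np.linalg.norm(noise)

# After mixing as SNR * signal + noise, the amplitude ratio equals SNR.
mixed = SNR * signal + noise
amp_ratio = np.linalg.norm(SNR * signal) / np.linalg.norm(noise)
```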
Now we run the signal through the forward model to obtain simulated sensor
data. To save computation time, we'll only simulate gradiometer data. You can
try simulating other types of sensors as well.
.. code-block:: default

    # Load the forward model
    fwd = mne.read_forward_solution(fwd_fname)

    # Use only gradiometers
    fwd = mne.pick_types_forward(fwd, meg='grad', eeg=False, exclude='bads')

    # Create an info object that holds information about the sensors (their
    # location, etc.).
    info = mne.create_info(fwd['info']['ch_names'], sfreq, ch_types='grad')
    info.update(fwd['info'])

    # To simulate the data, we need a version of the forward solution where each
    # source has a "fixed" orientation, i.e. pointing orthogonally to the surface
    # of the cortex.
    fwd_fixed = mne.convert_forward_solution(fwd, force_fixed=True)

    # Now we can run our simulated signal through the forward model, obtaining
    # simulated sensor data.
    sensor_data = mne.apply_forward_raw(fwd_fixed, stc, info).get_data()

    # We're going to add some noise to the sensor data
    noise = np.random.randn(*sensor_data.shape)

    # Scale the noise to be in the ballpark of MEG data
    noise_scaling = np.linalg.norm(sensor_data) / np.linalg.norm(noise)
    noise *= noise_scaling

    # Mix noise and signal with the given signal-to-noise ratio.
    sensor_data = SNR * sensor_data + noise
We create an :class:`mne.EpochsArray` object containing two trials: one with
just noise and one with both noise and signal. The techniques we'll be
using in this tutorial depend on being able to contrast data that contains
the signal of interest versus data that does not.
.. code-block:: default

    epochs = mne.EpochsArray(
        data=np.concatenate(
            (noise[np.newaxis, :, :],
             sensor_data[np.newaxis, :, :]),
            axis=0),
        info=info,
        events=np.array([[0, 0, 1], [10, 0, 2]]),
        event_id=dict(noise=1, signal=2),
    )

    # Plot the simulated data
    epochs.plot()
.. image:: /auto_examples/images/sphx_glr_plot_simulation_002.png
   :alt: Ch.
   :class: sphx-glr-single-img
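
Why does contrasting a noise-only condition against a signal-plus-noise
condition work? A toy sketch (independent of the MEG data above; the band,
amplitudes, and seed are illustrative choices) shows that the signal condition
carries excess power in the band of interest, which is what beamformer
contrasts exploit:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 50
t = np.arange(10 * fs) / fs

# One "trial" of pure noise, one of noise plus a 10 Hz oscillation.
noise_trial = rng.standard_normal(len(t))
signal_trial = rng.standard_normal(len(t)) + 5 * np.sin(2 * np.pi * 10 * t)

# Compare power in the 9-11 Hz band between the two conditions.
f, p_noise = welch(noise_trial, fs=fs, nperseg=128)
f, p_signal = welch(signal_trial, fs=fs, nperseg=128)
band = (f >= 9) & (f <= 11)
ratio = p_signal[band].mean() / p_noise[band].mean()
```

The band-power ratio between the two conditions is large, so a contrast
between them isolates the oscillatory source from the background.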
.. rst-class:: sphx-glr-script-out
Out:
.. code-block:: none