Visual explanation of the three components of the post-hoc modification framework

In the post-hoc modification framework, a linear model is no longer described by a single matrix (the weight matrix), but by three subcomponents:

The pattern matrix

The data covariance

The normalizer

We will now take a look at an example linear regression model and see how the post-hoc modification framework subdivides the regression problem into three subproblems, each solved by one of the subcomponents.
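To make the three-way split concrete before diving into the figure, here is a minimal NumPy sketch (the two-sensor simulation, sample size, and noise covariance are illustrative choices of mine, not the figure's exact values) showing that ordinary least-squares weights factor exactly into the three subcomponents:

```python
import numpy as np

# Toy setup: a 1-D signal measured at two sensors, plus correlated Gaussian
# noise. The data is (approximately) zero-mean, so covariances reduce to
# simple matrix products.
rng = np.random.default_rng(0)
n = 500
signal = rng.standard_normal(n)                      # true signal strength
noise = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, -0.8], [-0.8, 1.0]], size=n)
X = np.outer(signal, [1.0, 1.0]) + noise             # (n, 2) measurements

# Ordinary least-squares weights: w = (X^T X)^{-1} X^T y
w = np.linalg.solve(X.T @ X, X.T @ signal)

# The same weights, rebuilt from the three subcomponents:
cov = (X.T @ X) / n              # 1) data covariance
pattern = (X.T @ signal) / n     # 2) pattern: covariance of data with signal
normalizer = 1.0                 # 3) normalizer: trivial here, because this
                                 #    pattern carries the signal's full scale
w_rebuilt = normalizer * np.linalg.solve(cov, pattern)

print(np.allclose(w, w_rebuilt))  # -> True
```

The normalizer only becomes non-trivial once the pattern or covariance are modified (or rescaled), which is exactly the situation the post-hoc modification framework is designed for.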

Data simulation and linear regression model

This is a simulation of a signal that is being observed through two sensors.
Dots represent observations of the signal and the color of the dots indicates the true signal strength during each observation.
Linear regression is used to decode the true signal strength from the observed data.
In visual terms, the task of the model is to decode the color of a dot, based on its location in the graph.

A: The simulated data consists of two components.
The first component (large dots) dictates how the signal is measured by the sensors (i.e. the encoding model).
In this simulation, there is a one-to-one relationship between the true signal strength and the measurements at both sensors.
The second component (small dots) is simulated using random numbers drawn from a two-dimensional Gaussian distribution and simulates noise that is unrelated to the strength of the signal.
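The two components of panel A can be sketched as follows (sample size, encoding vector, and noise covariance are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Signal component: one-to-one encoding of the true signal at both sensors.
signal = rng.standard_normal(n)            # true signal strength (dot color)
encoding = np.array([1.0, 1.0])            # the encoding model
signal_part = np.outer(signal, encoding)   # (n, 2) large dots

# Noise component: 2-D Gaussian draws, unrelated to the signal.
noise_cov = np.array([[1.0, -0.8], [-0.8, 1.0]])
noise_part = rng.multivariate_normal([0.0, 0.0], noise_cov, size=n)

# What the sensors actually record (panel B) is the sum of both components.
X = signal_part + noise_part
```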

B: The data recorded by the sensors (large dots) is the sum of the signal and noise components.
A linear regression model was trained on these observations, with the true signal strength as target.
It represents the optimal linear transformation to map the measured data to signal strength.
In this two-dimensional example, the model's weights can be visualized as a line (orange).
We see that the direction of the regression line is dictated by both the noise and the signal components, which is why the weight matrix is so hard to interpret.

C: Applying the linear regression to the data is equivalent to projecting the measured data onto the regression line.
By projecting the data orthogonally to the noise, a near-perfect reconstruction of the signal strength can be obtained.
As a performance metric, the Pearson correlation (rho) between the signal strength and the position along the regression line is provided.
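Panels B and C can be reproduced in a few lines: fit least-squares weights, project the data onto the resulting line, and score with Pearson correlation (the simulation values below are illustrative, not the figure's exact numbers):

```python
import numpy as np

# Simulate the two-sensor data as before.
rng = np.random.default_rng(0)
n = 500
signal = rng.standard_normal(n)
noise = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, -0.8], [-0.8, 1.0]], size=n)
X = np.outer(signal, [1.0, 1.0]) + noise

# Least-squares weights; in 2-D these define the orange regression line.
w = np.linalg.solve(X.T @ X, X.T @ signal)

# Applying the model == projecting each observation onto that line.
decoded = X @ w

# Performance metric: Pearson correlation between truth and decoded output.
rho = np.corrcoef(signal, decoded)[0, 1]
print(rho)
```

Because the weights tilt away from the noise direction, the correlation comes out close to 1 despite the substantial noise component.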

The subcomponents defined in the post-hoc modification framework

D: The pattern matrix represents the signal of interest and, like the weight matrix, can be visualized as a line (green).
This line should approximate the direction of the actual signal (see panel A).
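One way to obtain the pattern from an already-fitted model is to multiply the weights by the data covariance (a sketch of the Haufe transformation, applied to the same toy simulation as above); the result should point roughly along the true encoding direction [1, 1] from panel A:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
signal = rng.standard_normal(n)
noise = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, -0.8], [-0.8, 1.0]], size=n)
X = np.outer(signal, [1.0, 1.0]) + noise

# Fitted decoder weights (orange line).
w = np.linalg.solve(X.T @ X, X.T @ signal)

# Pattern (green line): data covariance times weights. For OLS this equals
# the covariance between data and target, i.e. the encoding direction.
cov = (X.T @ X) / n
pattern = cov @ w

print(pattern)  # roughly proportional to [1, 1]
```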

E: The data covariance matrix is used to construct a whitening operator, which "disentangles" the data.
The data is projected such that the variance in all directions is 1 and no cross-correlations remain.
This transformation is then also applied to the pattern matrix (green line). Performing linear regression is now equivalent to projecting the whitened data onto the whitened pattern line.
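A sketch of that whitening step, continuing the same toy simulation: build Sigma^(-1/2) from an eigendecomposition of the data covariance, apply it to both the data and the pattern, and check that projecting the whitened data onto the whitened pattern reproduces the regression output:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
signal = rng.standard_normal(n)
noise = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, -0.8], [-0.8, 1.0]], size=n)
X = np.outer(signal, [1.0, 1.0]) + noise

# Whitening operator: Sigma^{-1/2} via eigendecomposition of the covariance.
cov = (X.T @ X) / n
evals, evecs = np.linalg.eigh(cov)
whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Whitened data: unit variance in all directions, no cross-correlations.
X_white = X @ whitener
print(np.round((X_white.T @ X_white) / n, 6))   # ~ identity matrix

# Whiten the pattern too, then project the whitened data onto it.
pattern = (X.T @ signal) / n
pattern_white = whitener @ pattern
projected = X_white @ pattern_white

# This equals applying the regression weights directly.
w = np.linalg.solve(X.T @ X, X.T @ signal)
print(np.allclose(projected, X @ w))            # -> True
```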

F: Finally, the normalizer (orange) scales the result such that the position along the projection line maps to the true signal strength.
As a performance metric, the Pearson correlation (rho) between the signal strength and the position along the whitened pattern line is provided.
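The role of the normalizer becomes visible when the pattern is kept as a pure direction (a unit vector, as a line in the figure would suggest): a single scale factor then maps position along the whitened pattern line back to the units of the true signal strength. A sketch, continuing the same toy simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
signal = rng.standard_normal(n)
noise = rng.multivariate_normal([0.0, 0.0],
                                [[1.0, -0.8], [-0.8, 1.0]], size=n)
X = np.outer(signal, [1.0, 1.0]) + noise

cov = (X.T @ X) / n
evals, evecs = np.linalg.eigh(cov)
whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Keep only the pattern's direction (the green line), discarding its scale.
pattern = (X.T @ signal) / n
pattern_dir = pattern / np.linalg.norm(pattern)

# Unscaled decoder output: whitened data projected onto the whitened pattern.
projected = X @ whitener @ whitener @ pattern_dir

# Normalizer: least-squares scale mapping the projection onto the target.
normalizer = (projected @ signal) / (projected @ projected)

# With the normalizer in place, the three subcomponents together reproduce
# the ordinary least-squares prediction exactly.
w = np.linalg.solve(X.T @ X, X.T @ signal)
print(np.allclose(normalizer * projected, X @ w))  # -> True
```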