[An updated version of the work in this post appears in this preprint]
Cluster-based statistics are a family of methods that aim to correct for multiple comparisons while preserving power when effects are distributed over time, space and other dimensions (Maris & Oostenveld, 2007; Smith & Nichols, 2009; Pernet et al., 2015; Rosenblatt et al., 2018; Fields & Kuperberg, 2020; Frossard & Renaud, 2022). These methods capitalise on the correlation of effects across measurements to control the family-wise error rate (FWER, the probability of at least one false positive): neighbouring effects that pass a threshold are lumped together into clusters, and each cluster is compared to a reference distribution, typically a null distribution established using a permutation or bootstrap approach (Pernet et al., 2015). These cheap multivariate techniques (I say cheap because, even though they take into account the distribution of effects over at least one dimension, they require no explicit modelling) are particularly popular in fMRI, EEG and MEG research (for instance, according to Google Scholar, Smith & Nichols (2009) has been cited 4945 times, Maris & Oostenveld (2007) 7053 times). Cluster-based methods have the potential to be applied to a wide array of problems, including simpler ones involving only a few comparisons, as I’ve argued in another post (see for instance the application to TMS latencies in Dugué et al., 2011). I’ve also used cluster-based statistics extensively in my EEG and behavioural research (Rousselet et al., 2014; Bieniek et al., 2016; Jaworska et al., 2020).
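To make the thresholding-and-lumping step concrete, here is a minimal sketch of cluster formation and cluster-sum inference. The function name and interface are my own, and Python is used purely for illustration (real analyses rely on dedicated toolboxes):

```python
import numpy as np

def cluster_sum_pvals(stats, threshold, null_max_sums):
    """Lump contiguous supra-threshold statistics into clusters and
    compare each cluster-sum to a null distribution of maximum
    cluster-sums (e.g. obtained by permutation)."""
    above = np.append(stats > threshold, False)  # sentinel closes last cluster
    clusters, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i                        # a cluster opens
        elif not a and start is not None:
            csum = stats[start:i].sum()      # cluster-sum statistic
            pval = np.mean(null_max_sums >= csum)
            clusters.append((start, i, csum, pval))
            start = None
    return clusters
```

Note that each p value applies to a whole cluster, not to its individual testing points – this is the crux of the interpretation problem discussed next.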
However, cluster-based methods are not without issues. Most notably, Sassenhagen & Draschkow (2019) reminded the community about problems with the interpretation of cluster-based inferences and suggested guidelines on how to report such results. This useful reminder has been heard, as demonstrated by its citation count: 196 times according to the publisher’s page and 320 times according to Google Scholar. Essentially, the main problem with cluster-based methods is that they provide p values at the cluster level only, or even for collections of clusters for hierarchical/meta-analysis methods such as TFCE (Smith & Nichols, 2009). There are no p values at individual testing points, and no error control for onsets and offsets. Consequently, cluster-based methods do not provide direct inferences about the localisation of effects. But does that mean we cannot use cluster-based methods to localise, for instance to estimate the onsets of effects?
Sassenhagen & Draschkow (2019) go on to report results from a simulation, suggesting that onset estimation using cluster-based inferences is very noisy. They used a truncated Gaussian as a template, defining a real onset to be estimated. Then they compared 50 trials of noise only to 50 trials of noise + template, using cluster-sum F statistics – see a cluster-sum explanation here. These cluster-sums were compared to a permutation distribution to determine statistical significance. For each simulation iteration, the first statistically significant point was declared the onset of the effect. Here is their figure of results, showing the distribution of estimated onsets across 10,000 simulation iterations:

The distribution of onsets reveals two important patterns:
- Estimated onsets appear positively biased, with the mode of the distribution later than the real onset.
- The distribution is asymmetric, with a thicker left tail than right tail.
Sassenhagen & Draschkow (2019) reported that “on >20% of runs, the effect onset was estimated too early; divergence of 40 ms or more were found at >10% of runs.”
No bias estimate was reported.
So, how bad are the results? We don’t know because no other method was used to estimate onsets, so we cannot put the results in perspective. Imagine a study that aimed to test the prediction of larger variance in EEG signals in population X relative to population Y, but only tested participants from population X. Having observed large absolute variance in X, what can we conclude without measurements in Y?
Less importantly, Sassenhagen & Draschkow (2019) used EEG data from one human participant as noise to add to their template in the simulations. The data illustrated in supplementary information look heavily contaminated by alpha noise – I could be wrong, but having recorded from hundreds of participants, it looks like the data are from a very sleepy participant. That’s a problem, because the claim is that there is a general issue with cluster-based inferences, not one specific to a particular participant.
Let’s address the comparison problem by contrasting cluster-based inferences with inferences from two other popular methods, the false discovery rate correction for multiple comparisons (FDR, Benjamini & Hochberg, 1995; Nichols & Hayasaka, 2003) and the maximum statistics correction for multiple comparisons (MAX, Holmes et al., 1996; Nichols & Holmes, 2002). FDR methods provide weak control of the FWER by controlling the expected proportion of false positives among the positive tests; when there is no effect, an FDR correction is equivalent to controlling the FWER; when effects are present, it is more powerful than controlling the FWER (Benjamini & Hochberg, 1995; Fields & Kuperberg, 2020). The MAX correction achieves strong control of the FWER by comparing each statistic, say a t or F value, to a distribution of maximum statistics obtained under the null. These methods, like cluster-based methods, do not explicitly estimate onsets (they were not designed for that purpose) and do not provide error control for onsets, but are routinely used to localise effects.
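As a minimal sketch of these two corrections (Python, my own bare-bones implementations of the standard definitions, not the code used in the simulations below):

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up: find the largest k such that
    p_(k) <= k*q/m, and reject the k smallest p values."""
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= np.arange(1, m + 1) * q / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()   # last sorted index passing
        reject[order[:k + 1]] = True
    return reject

def max_threshold(null_stats, alpha=0.05):
    """MAX correction: the threshold is the (1-alpha) quantile of the
    permutation distribution of the maximum statistic across testing
    points; null_stats has shape (n_permutations, n_timepoints)."""
    return np.quantile(null_stats.max(axis=1), 1 - alpha)
```

Because MAX calibrates against the single largest statistic in the whole search space, it controls the FWER strongly but pays for it in power, a trade-off visible in the simulation results below.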
Other methods have been proposed, for instance defining the onset as the time at which a certain proportion of the peak amplitude has been reached (Liesefeld, 2018; Fahrenfort, 2020). However, such methods depend on potentially subjective peak identification (especially when applied to individual participants), and by definition they are guaranteed to produce positively biased onset estimates. These methods are not considered here, but maybe they have some merit.
Another recent method, the cluster depth test, retains cluster-based inference while delivering p values at every testing point, thus solving the main issue of cluster-based inferences as implemented so far (Frossard & Renaud, 2022). But this approach, like previous cluster-based methods and alternative corrections for multiple comparisons such as FDR or MAX, does not solve a more fundamental issue that has been ignored so far: whether a p value is available at each testing point is irrelevant, because none of the methods routinely used to localise MEG and EEG effects explicitly assesses onsets. That’s because onsets are second-order properties of the data: estimating them requires establishing the absence of effects in some areas, the presence of effects in others, and a test of the interaction. However, none of the cluster-based methods, or any other standard method used for localisation in EEG/MEG/fMRI and other fields, explicitly tests that interaction; the inference is performed pretty much by eyeballing maps of thresholded statistics. So localisation attempts typically rely on an interaction fallacy, a problem well documented in neuroscience (Nieuwenhuis et al., 2011) and other fields (Gelman & Stern, 2006).
More explicit estimation of onsets could be achieved by using methods that go beyond mass-univariate tests and consider time-courses. For instance, there is a vast family of algorithms aimed at detecting change points in time-series that could be used to estimate onsets. Many packages in the R programming language (R Core Team, 2021) offer such algorithms, for instance RBeast (Zhao et al., 2019), mcp (Lindeløv, 2020) and changepoint (Killick & Eckley, 2014). There is a very useful survey of such packages here. The Beast algorithm (Zhao et al., 2019) is also available in Python and Matlab, and I’m sure there are many others. Matlab offers several native solutions, including the findchangepts function. For comparison, here I used a basic change point detection method that relies on differences in mean and variance, as implemented in the changepoint package in R, and the built-in findchangepts function in Matlab.
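For illustration only, here is a deliberately naive ‘change in mean’ detector in Python; the changepoint package and findchangepts rely on far more efficient search algorithms (e.g. PELT) and can include variance terms in the cost, so treat this as a sketch of the cost idea, not of those implementations:

```python
import numpy as np

def two_changepoints(x):
    """Brute-force search for two change points minimising the total
    within-segment sum of squared deviations from segment means
    (a basic 'change in mean' cost)."""
    n = len(x)
    def cost(seg):
        return ((seg - seg.mean()) ** 2).sum() if len(seg) else 0.0
    best, best_cps = np.inf, (None, None)
    for i in range(1, n - 1):          # first change point
        for j in range(i + 1, n):      # second change point
            c = cost(x[:i]) + cost(x[i:j]) + cost(x[j:])
            if c < best:
                best, best_cps = c, (i, j)
    return best_cps
```

Applied to a time-course of squared t values, the first detected change point serves as the onset estimate.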
So, let’s revisit the simulation results from Sassenhagen & Draschkow (2019) by performing our own simulations. Following their approach, I compared a noise only condition to a noise + signal condition. The signal was a truncated Gaussian defining an objective onset at 160 ms.

This approach is very similar to that of Sassenhagen & Draschkow (2019), except that I simulated only one electrode, with cluster inferences in the time domain only. There is no need to simulate extra dimensions (electrodes, time-frequencies), because the argument against cluster-based inferences is not specific to any particular dimension. Also, I used synthetic noise instead of data from a single participant. The noise was generated by summing sinusoids at different frequencies (Yeung et al., 2004). Using 1/f noise instead gave very similar results.
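A sketch of that kind of noise generator (Python; the 1/f amplitude fall-off is my simplifying assumption here – Yeung et al. (2004) match amplitudes to the human EEG power spectrum):

```python
import numpy as np

def synth_eeg_noise(n_samples, srate=500, max_freq=40, rng=None):
    """Sum sinusoids at 1..max_freq Hz with random phases; amplitudes
    fall off as 1/f (a rough stand-in for the EEG spectrum)."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(n_samples) / srate
    noise = np.zeros(n_samples)
    for f in range(1, max_freq + 1):
        phase = rng.uniform(0.0, 2.0 * np.pi)
        noise += np.sin(2.0 * np.pi * f * t + phase) / f
    return noise
```

Each simulated trial gets an independent draw, so trials share the spectral profile but not the phase structure.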
The simulations involved 50 trials per condition and 10,000 iterations per simulation, and the null distributions were estimated using a permutation approach with 2,000 random permutations.
The simulations were performed twice, in R and in Matlab, with the code available on GitHub.
Here is an example of noise trials:

And 50 trials of signal + noise (grey), with the mean superimposed (black):

And an example of averages for the noise only and noise + signal conditions:

The vertical black line marks the 160 ms true onset. The two conditions were compared using two-sample t-tests with unequal variances. Here is the time-course of squared t values matching the previous figure.
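The squared t values come from Welch’s t statistic computed independently at every time point; as a rough sketch (in Python here, though the simulations themselves were run in R and Matlab):

```python
import numpy as np

def welch_t_squared(a, b):
    """Squared two-sample t statistic with unequal variances (Welch),
    computed at every time point; a and b are (trials, timepoints)."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    se2 = va / a.shape[0] + vb / b.shape[0]
    return (ma - mb) ** 2 / se2
```

Squaring makes the statistic sign-free, which is convenient for forming clusters and change point inputs regardless of effect direction.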

We consider four methods to correct for multiple comparisons:
- [FDR] The standard 1995 FDR method.
- [MAX] The maximum statistics correction.
- [CS] Cluster-based + permutation inference, using a cluster-sum statistic of squared t values.
- [CP] Change point detection based on differences in mean and variance, applied to the time-course of squared t values. The number of change points was set to 2.
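For FDR, MAX and CS, the onset is then simply the first testing point declared significant; a trivial helper (Python, my own naming) makes this explicit:

```python
import numpy as np

def onset_from_mask(sig_mask, times):
    """Return the time of the first significant testing point,
    or None if nothing survives the correction."""
    idx = np.flatnonzero(sig_mask)
    return times[idx[0]] if idx.size else None
```

For CP, the onset is instead the location of the first detected change point.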
Here are the onsets estimated by each method, using the data from the previous example:

In grey is the permutation distribution of squared t values. The solid vertical line indicates the true onset. The dotted vertical line marks the estimated onset. The horizontal dashed lines are permutation thresholds.
After 10,000 iterations, we get these distributions of onsets for our four methods, with the vertical line marking the true onset:

Each distribution has a shape similar to that reported by Sassenhagen & Draschkow (2019): overall a positive bias and some amount of under-estimation. Let’s quantify the results.
[1] Bias was quantified by subtracting the true onset from the median of each distribution. Because the distributions are asymmetric, the median provides a better reflection of the typical onset than the mean (Rousselet & Wilcox, 2020).
Bias:
- FDR = 30 ms
- MAX = 40 ms
- CS = 30 ms
- CP = 10 ms
Confirming our visual inspection of the distributions, all methods are biased, the least biased being CP, the most biased being MAX. This is not surprising, as MAX is by definition conservative, the price to pay for strong FWER control.
[2] The mean absolute error (MAE) indicates how close to the true onset each method gets.
MAE:
- FDR = 42 ms
- MAX = 44 ms
- CS = 30 ms
- CP = 19 ms
Again, CS does not perform worse than the other methods: CP was best, MAX was worst.
[3] As Sassenhagen & Draschkow (2019) did, let’s look at the proportion of onsets earlier than the true onset.
% early onsets:
- FDR = 17.5%
- MAX = 1.6%
- CS = 0.5%
- CP = 13.4%
[4] And the proportion of onsets at least 40 ms earlier than the true onset.
% onsets at least 40 ms too early:
- FDR = 14.7%
- MAX = 1.3%
- CS = 0.1%
- CP = 3.7%
Again, there is nothing wrong with the cluster-sum approach at all: the methods that do provide p values at individual testing points (FDR and MAX) performed worse.
Using 1/f noise gave similar results, but the absolute results change substantially depending on whether white, pink or red noise is used – see the full simulations on GitHub.
So far we have considered results from one simulated participant. A typical experiment involves more than one participant. What happens if we try to estimate onsets from 20 participants instead? To find out, let’s assume that each participant has a random onset of 150, 160 or 170 ms (uniform distribution), so that the population onset remains 160 ms. There are 50 trials per condition. In each simulation iteration, after estimating the onset in each of the 20 participants, we use the median onset as our group estimate. Here are the results:

Again, there is nothing wrong with the cluster-sum results: they are very similar to the FDR results, better than MAX, worse than change point.
In numbers:
Bias:
- FDR = 30 ms
- MAX = 40 ms
- CS = 30 ms
- CP = 10 ms
Mean absolute error:
- FDR = 27 ms
- MAX = 43 ms
- CS = 29 ms
- CP = 14 ms
Proportion too early:
- FDR = 0.04%
- MAX = 0%
- CS = 0%
- CP = 0.08%
Note the very low proportion of under-estimation once results are combined across participants.
Of course, another way to reduce bias is to increase sample size. Here are the results from one participant and sample sizes ranging from 20 to 150 trials.

Bias can be reduced considerably by using larger sample sizes, but the MAX method remains overly conservative. Here is what happens for the mean absolute error:

And the proportion of early onsets:

Overall, the cluster-sum method appears to perform well, and the change point approach looks very promising.
Discussion
Onsets are inherently noisy because they correspond to the testing points with the lowest signal-to-noise ratio. But estimation variability can be reduced by increasing the number of trials and pooling estimates across participants. And based on the present simulation results, there is no reason to doubt localisation results obtained using cluster-based inferences any more than those obtained with other methods. Certainly, I’m not going to ditch the estimates of face ERP onsets I’ve published so far. They are estimates after all, obtained in a large number of participants, and with promising test-retest reliability across independent EEG sessions (Bieniek et al., 2016).
From the simulations, it seems that one should be more worried about onsets estimated using FDR and MAX corrections. And the change point approach seems very promising. Certainly here I’ve used a relatively simple algorithm and it remains to be determined what can be achieved using modern methods, for instance Bayesian approaches that allow hierarchical pooling of estimates (Lindeløv, 2020).
Currently, no method routinely used in MEG/EEG explicitly estimates onsets, whether or not a p value is available at every testing point. Whether correction for multiple comparisons is achieved using cluster-based statistics, FDR, MAX or any related method, even the new cluster depth tests (Frossard & Renaud, 2022), onset estimation relies on a statistical fallacy, because the implied interaction is never explicitly tested. Does that mean the results are nonsensical? The simulations presented here clearly indicate that meaningful results can be obtained, and that cluster-based inferences can provide more accurate results than alternative methods.
Obviously, one experiment provides no guarantee, and that’s why it is essential to consider sampling distributions, like the ones presented here, to get an idea of how much variability to expect in the long run (Rousselet, 2019). But do not fall victim to statistical orthodoxy: plenty of methods are used to estimate aspects of the data that they were not developed to handle. When in doubt, simulations are essential to understand how good a job any of your tools can do – never trust the labels.
In addition to the choice of onset estimation method, other methodological factors can affect onsets. The most important is probably filtering, with high-pass filters introducing potentially severe signal distortions (Rousselet, 2012; Widmann & Schröger, 2012; Widmann et al., 2015; van Driel et al., 2021). If the goal is to estimate onsets, I recommend using causal filters to remove low frequencies, to avoid smearing effects backwards in time. On the stimulus front, screen luminance can strongly influence onsets (Bieniek et al., 2013).
In conclusion, can we use cluster-based permutation tests to estimate MEG/EEG onsets? Absolutely, yes, but:
- Instead of estimating onsets at the group level, measure onsets in each participant and report group distributions.
- Assess the test-retest reliability of the onsets, either using the same data (cross-validation) or a different dataset.
- Report results from more than one method, to assess computational consistency.
More advanced steps:
- Use large datasets to compare onset estimation methods, including techniques, so far ignored, that explicitly consider time-series.
- Use large datasets to provide benchmarks in your field.
- Use simulations to compare methods. To go beyond the simple approach presented here, one could use algorithms that generate more realistic time-series (Barzegaran et al., 2019).
References
Barzegaran, E., Bosse, S., Kohler, P.J., & Norcia, A.M. (2019) EEGSourceSim: A framework for realistic simulation of EEG scalp data using MRI-based forward models and biologically plausible signals and noise. J. Neurosci. Methods, 328, 108377.
Benjamini, Y. & Hochberg, Y. (1995) Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J. R. Stat. Soc. Ser. B Methodol., 57, 289–300.
Bieniek, M., Frei, L., & Rousselet, G. (2013) Early ERPs to faces: aging, luminance, and individual differences. Front. Psychol., 4.
Bieniek, M.M., Bennett, P.J., Sekuler, A.B., & Rousselet, G.A. (2016) A robust and representative lower bound on object processing speed in humans. Eur. J. Neurosci., 44, 1804–1814.
Dugué, L., Marque, P., & VanRullen, R. (2011) Transcranial Magnetic Stimulation Reveals Attentional Feedback to Area V1 during Serial Visual Search. PLOS ONE, 6, e19712.
Fahrenfort, J.J. (2020) Multivariate Methods to Track the Spatiotemporal Profile of Feature-Based Attentional Selection Using EEG. In Pollmann, S. (ed), Spatial Learning and Attention Guidance, Neuromethods. Springer US, New York, NY, pp. 129–156.
Fields, E.C. & Kuperberg, G.R. (2020) Having your cake and eating it too: Flexibility and power with mass univariate statistics for ERP data. Psychophysiology, 57, e13468.
Frossard, J. & Renaud, O. (2022) The cluster depth tests: Toward point-wise strong control of the family-wise error rate in massively univariate tests with application to M/EEG. NeuroImage, 247, 118824.
Gelman, A. & Stern, H. (2006) The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant. Am. Stat., 60, 328–331.
Holmes, A.P., Blair, R.C., Watson, J.D., & Ford, I. (1996) Nonparametric analysis of statistic images from functional mapping experiments. J. Cereb. Blood Flow Metab. Off. J. Int. Soc. Cereb. Blood Flow Metab., 16, 7–22.
Jaworska, K., Yi, F., Ince, R.A.A., van Rijsbergen, N.J., Schyns, P.G., & Rousselet, G.A. (2020) Healthy aging delays the neural processing of face features relevant for behavior by 40 ms. Hum. Brain Mapp., 41, 1212–1225.
Killick, R. & Eckley, I.A. (2014) changepoint: An R Package for Changepoint Analysis. J. Stat. Softw., 58, 1–19.
Liesefeld, H.R. (2018) Estimating the Timing of Cognitive Operations With MEG/EEG Latency Measures: A Primer, a Brief Tutorial, and an Implementation of Various Methods. Front. Neurosci., 12.
Lindeløv, J.K. (2020) mcp: An R Package for Regression With Multiple Change Points. OSF Preprints.
Maris, E. & Oostenveld, R. (2007) Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods, 164, 177–190.
Nichols, T.E. & Holmes, A.P. (2002) Nonparametric permutation tests for functional neuroimaging: A primer with examples. Hum. Brain Mapp., 15, 1–25.
Nichols, T. & Hayasaka, S. (2003) Controlling the familywise error rate in functional neuroimaging: a comparative review. Stat Methods Med Res, 12, 419–446.
Nieuwenhuis, S., Forstmann, B.U., & Wagenmakers, E.-J. (2011) Erroneous analyses of interactions in neuroscience: a problem of significance. Nat. Neurosci., 14, 1105–1107.
Pernet, C.R., Latinus, M., Nichols, T.E., & Rousselet, G.A. (2015) Cluster-based computational methods for mass univariate analyses of event-related brain potentials/fields: A simulation study. J. Neurosci. Methods, Cutting-edge EEG Methods, 250, 85–93.
R Core Team (2021) R: A Language and Environment for Statistical Computing.
Rosenblatt, J.D., Finos, L., Weeda, W.D., Solari, A., & Goeman, J.J. (2018) All-Resolutions Inference for brain imaging. NeuroImage, 181, 786–796.
Rousselet, G. (2012) Does Filtering Preclude Us from Studying ERP Time-Courses? Front. Psychol., 3.
Rousselet, G. (2019) Using simulations to explore sampling distributions: an antidote to hasty and extravagant inferences.
Rousselet, G.A., Ince, R.A.A., van Rijsbergen, N.J., & Schyns, P.G. (2014) Eye coding mechanisms in early human face event-related potentials. J. Vis., 14, 7.
Rousselet, G.A. & Wilcox, R.R. (2020) Reaction Times and other Skewed Distributions: Problems with the Mean and the Median. Meta-Psychol., 4.
Sassenhagen, J. & Draschkow, D. (2019) Cluster-based permutation tests of MEG/EEG data do not establish significance of effect latency or location. Psychophysiology, 56, e13335.
Smith, S.M. & Nichols, T.E. (2009) Threshold-free cluster enhancement: Addressing problems of smoothing, threshold dependence and localisation in cluster inference. NeuroImage, 44, 83–98.
Stevens, M.H.H. (2009) A Primer of Ecology with R, 2nd Printing. Springer, New York.
van Driel, J., Olivers, C.N.L., & Fahrenfort, J.J. (2021) High-pass filtering artifacts in multivariate classification of neural time series data. J. Neurosci. Methods, 352, 109080.
Widmann, A. & Schröger, E. (2012) Filter Effects and Filter Artifacts in the Analysis of Electrophysiological Data. Front. Psychol., 3.
Widmann, A., Schröger, E., & Maess, B. (2015) Digital filter design for electrophysiological data – a practical approach. J. Neurosci. Methods, Cutting-edge EEG Methods, 250, 34–46.
Yeung, N., Bogacz, R., Holroyd, C.B., & Cohen, J.D. (2004) Detection of synchronized oscillations in the electroencephalogram: An evaluation of methods. Psychophysiology, 41, 822–832.
Zhao, K., Valle, D., Popescu, S., Zhang, X., & Mallick, B. (2013) Hyperspectral remote sensing of plant biochemistry using Bayesian model averaging with variable and band selection. Remote Sens. Environ., 132, 102–119.
Zhao, K., Wulder, M.A., Hu, T., Bright, R., Wu, Q., Qin, H., Li, Y., Toman, E., Mallick, B., Zhang, X., & Brown, M. (2019) Detecting change-point, trend, and seasonality in satellite time series data to track abrupt changes and nonlinear dynamics: A Bayesian ensemble algorithm. Remote Sens. Environ., 232, 111181.