[pymvpa] Permutation testing and FWE

Urcheon theurcheon at gmail.com
Thu Sep 13 14:18:51 UTC 2012


Dear all,
I have been following the excellent replies on this mailing list with
interest, and now I need some of your wisdom. I have run a searchlight
variant in a large brain region (leave-one-run-out cross-validation
with a binary SVM) and computed 10,000 permutations, in which the
labels of both the training and the validation data are shuffled; the
procedure is otherwise identical to the analysis with the true labels.
I am only interested in whether there is significant information in
the ROI as a whole, at the single-subject level.
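
In numpy terms, one permutation looks roughly like the sketch below
(illustrative only, not PyMVPA API; I shuffle labels within runs so
that the fold structure of the leave-one-run-out scheme is preserved,
which is an assumption about exchangeability on my part):

    import numpy as np

    rng = np.random.default_rng()

    def permute_labels(labels, chunks):
        # Shuffle labels independently within each run, so the run
        # structure (and hence the leave-one-run-out folds) is kept.
        labels = np.asarray(labels).copy()
        chunks = np.asarray(chunks)
        for run in np.unique(chunks):
            idx = np.flatnonzero(chunks == run)
            labels[idx] = rng.permutation(labels[idx])
        return labels

Each of the 10,000 permutations calls this once and then reruns the
full searchlight on the relabeled data.
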
Unfortunately, my p-values do not survive FDR correction in all
subjects (although every subject has plenty of spheres with
uncorrected p < 0.001). I then read Nichols and Hayasaka's pointers
on FWE control using the maximum statistic
(www.fil.ion.ucl.ac.uk/spm/doc/papers/NicholsHayasaka.pdf), so I
extracted the peak decoding accuracy across all search volumes, both
from the true-label run and from each of the permutations. I now get
a beautiful null-distribution histogram for each subject and, voila,
in every subject the corresponding p-value is below 0.05, computed as
(1 + the number of permuted maxima exceeding the true maximum) /
(1 + the total number of permutations). I was hoping this would be a
super-rigorous approach to multiple-comparisons control. What do you
think?
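
Concretely, the corrected p-value is computed as in the sketch below;
the accuracies are simulated stand-ins for the real searchlight
output (the shapes and the 0.5 chance level are illustrative), and I
count ties with >= as the conservative convention:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated stand-ins: one accuracy per sphere under the true
    # labels, and one full accuracy map per permutation.
    n_perms, n_spheres = 10000, 1000
    perm_acc = rng.normal(0.5, 0.05, size=(n_perms, n_spheres))
    true_acc = rng.normal(0.52, 0.05, size=n_spheres)

    # Null distribution of the maximum accuracy over the whole ROI.
    perm_max = perm_acc.max(axis=1)
    true_max = true_acc.max()

    # FWE-corrected p-value for the peak; the +1 terms count the
    # observed labeling as one permutation and keep p above zero.
    p_fwe = (1 + np.sum(perm_max >= true_max)) / (1 + n_perms)

A nice side effect is that the same perm_max vector also yields an
FWE-corrected p-value for every individual sphere, not just the peak:

    p_sphere = ((1 + (perm_max[:, None] >= true_acc).sum(axis=0))
                / (1 + n_perms))
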
Best regards,
M


