[pymvpa] effect of signal on null distributions

J.A. Etzel jetzel at artsci.wustl.edu
Sun Feb 10 18:03:38 UTC 2013


I've been running some simulations to look at the effect of permuting 
the labels of the training set only, the testing set only, or both 
(together), under different amounts of signal and different numbers of 
examples and cross-validation folds.
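To make the three schemes concrete, here is a minimal sketch in plain 
numpy. It is not the code behind my simulations or the posted figures; 
the nearest-centroid classifier, the within-run shuffling, and all 
parameter values and helper names are illustrative assumptions.

import numpy as np

rng = np.random.RandomState(0)
n_runs, n_per_run, n_vox, bias = 4, 10, 50, 0.5   # illustrative values only

# two balanced classes per run; class 1's mean is shifted by `bias` in every voxel
labels = np.tile(np.repeat([0, 1], n_per_run // 2), n_runs)
runs = np.repeat(np.arange(n_runs), n_per_run)
data = rng.randn(len(labels), n_vox) + bias * labels[:, None]

def within_run_shuffle(y):
    """Permute labels separately inside each run, keeping every run balanced."""
    out = y.copy()
    for r in range(n_runs):
        m = runs == r
        out[m] = rng.permutation(out[m])
    return out

def nearest_centroid(train_x, train_y, test_x):
    """Classify each test row by the nearer training-class centroid."""
    cents = np.array([train_x[train_y == c].mean(axis=0) for c in (0, 1)])
    dists = ((test_x[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def cv_accuracy(train_labels_of, test_labels_of):
    """Leave-one-run-out CV; the arguments map a fold's mask to its labels."""
    accs = []
    for fold in range(n_runs):
        tr, te = runs != fold, runs == fold
        pred = nearest_centroid(data[tr], train_labels_of(tr), data[te])
        accs.append(np.mean(pred == test_labels_of(te)))
    return np.mean(accs)

shuf = within_run_shuffle(labels)   # one relabeling for this permutation
acc_train_only = cv_accuracy(lambda m: shuf[m], lambda m: labels[m])
acc_test_only  = cv_accuracy(lambda m: labels[m], lambda m: shuf[m])
acc_both       = cv_accuracy(lambda m: shuf[m], lambda m: shuf[m])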

When the training labels are permuted, I do not see the null 
distribution widen as the amount of signal increases, the way it does 
in some of the example figures 
(http://lists.alioth.debian.org/pipermail/pkg-exppsy-pymvpa/attachments/20130204/a36533de/attachment-0001.png).

I posted my version of this comparison at: 
http://mvpa.blogspot.com/2013/02/comparing-null-distributions-changing.html

Some translation might be needed: my plots show accuracy, so larger 
numbers are better, and more "bias" corresponds to easier 
classification. The number of "runs" is the number of cross-validation 
folds. I set up the examples with 50 voxels ("features"), all equally 
informative; the simulation is for just one person.
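Continuing the sketch above (again an assumption-laden illustration, 
not my actual code), tabulating a train-only null distribution at one 
bias level might look like this; its spread is what the widening 
question is about:

def train_only_null(n_perms=1000):
    accs = np.empty(n_perms)
    for i in range(n_perms):
        shuf = within_run_shuffle(labels)
        accs[i] = cv_accuracy(lambda m: shuf[m], lambda m: labels[m])
    return accs

null = train_only_null()
print(null.mean(), null.std())   # regenerate `data` with other `bias` values and compare the std

Regenerating `data` at several bias levels and comparing the standard 
deviations is essentially the comparison my blog plots show.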

Do you typically expect the null distribution to be wider at higher 
signal when only the training set labels are permuted?

That seems a strange thing to expect, and I couldn't reproduce the 
pattern. We have a new lab member who knows Python and can help me sort 
out your code; I suspect we are doing something different in how the 
relabelings are assigned across the cross-validation folds or in how 
the results are tabulated. Two possibilities are sketched below.
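For example, here are two conventions that could both be described as 
"permuting the training labels" (again just a sketch built on the 
hypothetical helpers above): scheme A reuses one relabeling across all 
folds of a cross-validation, while scheme B draws a fresh relabeling 
inside every fold.

def one_null_sample_scheme_a():
    # one relabeling, shared by every fold of the cross-validation
    shuf = within_run_shuffle(labels)
    return cv_accuracy(lambda m: shuf[m], lambda m: labels[m])

def one_null_sample_scheme_b():
    # a fresh relabeling drawn inside each fold
    accs = []
    for fold in range(n_runs):
        tr, te = runs != fold, runs == fold
        shuf = within_run_shuffle(labels)
        pred = nearest_centroid(data[tr], shuf[tr], data[te])
        accs.append(np.mean(pred == labels[te]))
    return np.mean(accs)

null_a = np.array([one_null_sample_scheme_a() for _ in range(1000)])
null_b = np.array([one_null_sample_scheme_b() for _ in range(1000)])
print(null_a.std(), null_b.std())   # do the two conventions spread differently?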

Jo


-- 
Joset A. Etzel, Ph.D.
Research Analyst
Cognitive Control & Psychopathology Lab
Washington University in St. Louis
http://mvpa.blogspot.com/


