[pymvpa] search-light vs. ROI analysis: significance puzzle

J.A. Etzel jetzel at artsci.wustl.edu
Mon Feb 25 19:41:58 UTC 2013


On 2/25/2013 1:10 PM, Vadim Axel wrote:
> Absolutely naive question: suppose I have a single a-priori defined ROI
> where I get a modest group-level above-chance prediction of
> p = 0.01 (one-tailed t-test vs. 0.5, across subjects). Now I run a
> group-level whole-brain searchlight, and I would expect to find at
> least one cluster of above-chance prediction in the vicinity of my
> ROI. Correct?
No.

I have a paper in (hopefully the last cycle of) review that goes into 
detail about these issues. But here's a brief version of some of the 
relevant ideas. I'm assuming you're using a linear SVM and proper 
cross-validation, and also that the searchlight is substantially smaller 
than the ROI.

Two possible explanations come to mind:
1) The individual searchlights are too small to contain enough voxels 
to classify accurately, but the ROI is large enough, because weak 
information is present throughout much of it. Linear SVMs can combine 
weak information from many voxels, so they can sometimes classify 
better when given more voxels (first toy sketch below).

2) There is a lot of spatial variability between subjects. Suppose only 
a small part of the ROI is informative, and that part sits in a 
somewhat different location in each person. If everyone's informative 
part falls within the ROI, then the ROI might classify well at the 
group level. But in the single-subject searchlight maps each person's 
informative cluster is in a different spot, so the maps don't overlap 
enough and the group map can come out non-significant (second sketch 
below).
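
To make point 1 concrete, here's a toy sketch (scikit-learn rather than 
PyMVPA, just for brevity; the effect size, trial counts, and voxel 
counts are all invented): every voxel carries a little signal, and the 
linear SVM's cross-validated accuracy climbs as more voxels are pooled.

# Toy illustration of point 1: weak information in every voxel, so a
# linear SVM classifies better as more voxels are pooled together.
# (All numbers are made up, purely for illustration.)
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n_trials, effect = 40, 0.25               # 20 trials per class, weak effect
labels = np.repeat([0, 1], n_trials // 2)

for n_voxels in (10, 100, 1000):          # "searchlight" vs. "ROI" sized
    X = rng.randn(n_trials, n_voxels)
    X[labels == 1] += effect              # a little information everywhere
    acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
    print("%4d voxels: mean CV accuracy %.2f" % (n_voxels, acc))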
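
And a similar toy sketch for point 2: every simulated subject has a 
clearly informative patch, but in a different location, so a stringent 
voxel-wise group test finds little or nothing even though each subject 
classifies well somewhere. (Again, all numbers are invented.)

# Toy illustration of point 2: per-subject accuracy maps with an
# informative patch in a different place for each subject, so few (if
# any) voxels survive a stringent voxel-wise group t-test.
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
n_subj, n_voxels, patch = 12, 200, 10
acc_maps = 0.5 + 0.02 * rng.randn(n_subj, n_voxels)   # chance-level maps
for s in range(n_subj):
    start = rng.randint(0, n_voxels - patch)          # location varies
    acc_maps[s, start:start + patch] += 0.15          # clearly above chance

t, p = stats.ttest_1samp(acc_maps, 0.5, axis=0)       # voxel-wise group test
print("subjects with a voxel above 0.6:",
      int(np.sum(acc_maps.max(axis=1) > 0.6)))
print("voxels with one-tailed p < 0.001:",
      int(np.sum((t > 0) & (p / 2 < 0.001))))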


A few suggestions:
1) If your hypothesis is about the ROI, stick with the ROI-based 
analysis, adding control ROIs (or whatever) as necessary, but not doing 
the searchlight analysis.

2) If you need the searchlight analysis for a particular purpose, do 
some sensitivity testing, and look closely at the single-subject maps. 
For example, how much do the maps change with different searchlight 
radii (rough sketch after this list)? Did you normalize to atlas space 
before or after the searchlight? Did you smooth the data? Smooth the 
individual subject maps? etc.

3) Check the sensitivity of the ROI-based finding. For example, how 
much does it change if the ROI boundaries are altered slightly? How 
much variation is there between subjects - does the ROI classify well 
in nearly everyone, or just in a few people (per-subject check also 
sketched below)?
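
Here's a rough PyMVPA-flavored sketch of the radius check from 
suggestion 2, along the lines of the tutorial examples. 'ds' is a 
placeholder for one subject's preprocessed dataset (targets and chunks 
already assigned), not code from your pipeline, so adapt as needed.

# Rough sketch of a searchlight radius check (suggestion 2).
# 'ds' is a placeholder for one subject's preprocessed dataset.
from mvpa2.suite import *
import numpy as np

cv = CrossValidation(LinearCSVMC(), NFoldPartitioner(),
                     errorfx=lambda p, t: np.mean(p == t))  # accuracies

maps = {}
for radius in (2, 3, 4):              # voxels; vary and compare the maps
    sl = sphere_searchlight(cv, radius=radius, postproc=mean_sample())
    maps[radius] = sl(ds)             # inspect/save per subject and radius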
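
And for suggestion 3, a sketch of looking at the spread of per-subject 
ROI accuracies. 'subject_datasets' (a list of per-subject datasets) and 
'roi_ids' (the ROI's feature indices) are again placeholders for your 
own objects.

# Rough sketch for suggestion 3: is the ROI accurate in nearly everyone,
# or only in a few subjects? 'subject_datasets' and 'roi_ids' are
# placeholders for per-subject datasets and the ROI's feature indices.
from mvpa2.suite import *
import numpy as np
from scipy import stats

cv = CrossValidation(LinearCSVMC(), NFoldPartitioner(),
                     errorfx=lambda p, t: np.mean(p == t))

accs = np.array([np.mean(cv(ds[:, roi_ids])) for ds in subject_datasets])
print("per-subject ROI accuracies:", np.round(accs, 2))
print("above 0.5 in %d of %d subjects" % (np.sum(accs > 0.5), len(accs)))
tval, pval = stats.ttest_1samp(accs, 0.5)  # the group test from the question
print("one-tailed p = %.3g" % (pval / 2 if tval > 0 else 1 - pval / 2))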


Hope this gets you started, and good luck.
Jo



-- 
Joset A. Etzel, Ph.D.
Research Analyst
Cognitive Control & Psychopathology Lab
Washington University in St. Louis
http://mvpa.blogspot.com/


