[pymvpa] search-light vs. ROI analysis: significance puzzle

Vadim Axel axel.vadim at gmail.com
Tue Feb 26 13:05:29 UTC 2013


Hi Malin,

Yes, I agree - I have also noticed that the exhaustive method sometimes
gives worse results than the partial one. Just a point about the labels: in
your chart, is the "Averaged searchlight" line the exhaustive map and the
"Searchlight" line the randomly picked one? Were those levels established
separately somehow?

Thanks

On Tue, Feb 26, 2013 at 4:48 AM, Malin Björnsdotter <
malin.bjornsdotter at neuro.gu.se> wrote:

> Hi Vadim!
>
> I sequentially added more and more search volumes (lights),
> approaching the number in the exhaustive map. At some point
> (surprisingly early) in this process this random average map was
> better (= had a larger area under the receiver operating
> characteristic curve) than the non-average, exhaustive (searchlight)
> map on simulated data. I have attached a hotted up version of a figure
> from the NeuroImage paper that may make it clearer: at fewer than
> 5,000 classifiers (= searchlights) the Monte Carlo approach performed
> at the same mapping level as the exhaustive (searchlight) algorithm
> (which required over 25,000 classifiers, i.e. as many as there were
> voxels in the brain). So, the number of search volumes you choose
> depends on your definition of satisfactory performance. :)
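>
> (In case a concrete illustration helps: the area-under-the-curve measure
> is simply how well the voxel-wise map separates the simulated informative
> voxels from the rest. A toy sketch in plain Python, with made-up names
> rather than the code actually used for the paper, could look like this:
>
> from sklearn.metrics import roc_auc_score
>
> def map_auc(voxel_map, truth_mask):
>     # voxel_map:  one mapping score per voxel, e.g. an averaged hit rate
>     # truth_mask: 1 for voxels simulated as informative, 0 otherwise
>     return roc_auc_score(truth_mask, voxel_map)
>
> Recomputing this after each batch of added search volumes gives a curve of
> mapping performance against the number of classifiers.)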
>
> Also, there is a trick - in my approach, the search volume selection
> is not quite random. I made sure that every voxel was included in the
> same number of search volumes, i.e. I partitioned the entire brain
> into search volumes (some much smaller than the size implied by the
> radius parameter) such that every voxel was included in exactly one.
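>
> A rough sketch of that partitioning idea in plain numpy (a simplified toy
> version with made-up names, not my actual implementation) would be to pick
> random seed voxels and grow disjoint spheres around them until every voxel
> belongs to exactly one volume:
>
> import numpy as np
>
> def partition_into_volumes(coords, radius, rng=None):
>     # coords: (n_voxels, 3) array of voxel grid coordinates
>     # radius: search volume radius in voxel units
>     rng = rng or np.random.default_rng()
>     unassigned = np.arange(len(coords))
>     volumes = []
>     while unassigned.size:
>         seed = rng.choice(unassigned)              # random centre voxel
>         d = np.linalg.norm(coords[unassigned] - coords[seed], axis=1)
>         volumes.append(unassigned[d <= radius])    # its unassigned neighbours
>         unassigned = unassigned[d > radius]
>     return volumes
>
> The leftover volumes towards the end of the loop come out much smaller than
> the radius would suggest, which is the effect mentioned above; repeating the
> partition with different random orderings would then give every voxel the
> same number of search volumes to be averaged over.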
>
> ~Malin
>
> On Tue, Feb 26, 2013 at 8:09 PM, Vadim Axel <axel.vadim at gmail.com> wrote:
> > Indeed, very similar - I only make it not random, but rather iterate
> > sequentially over the whole brain. In that way each voxel participates
> > in roughly the same number of lights. I could not figure out from the
> > paper, Malin: in how many lights should a voxel participate in order
> > to achieve satisfactory performance?
> >
> > On Tue, Feb 26, 2013 at 3:47 AM, Malin Björnsdotter
> > <malin.bjornsdotter at neuro.gu.se> wrote:
> >>
> >> > In my method, the hitrate is assigned to all the voxels in the
> >> > light and, given that each voxel participates in many lights, the
> >> > hitrates are averaged. So, using my method, a voxel hitrate reflects
> >> > many possible environments. I try to compare the results of both,
> >> > and so far it seems that with your method the results are more patchy.
> >>
> >> That sounds pretty much exactly like what I've been doing. :-) Jo has
> >> an excellent blog entry about this:
> >> http://mvpa.blogspot.sg/2012/09/random-searchlight-averaging.html
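> >>
> >> For what it's worth, that per-voxel averaging boils down to something
> >> like the following (just a toy numpy sketch with placeholder names, not
> >> either of our actual implementations):
> >>
> >> import numpy as np
> >>
> >> def averaged_hitrate_map(light_members, light_hitrates, n_voxels):
> >>     # light_members:  list of voxel-index arrays, one per light
> >>     # light_hitrates: classification hit rate obtained for each light
> >>     total = np.zeros(n_voxels)
> >>     count = np.zeros(n_voxels)
> >>     for members, hitrate in zip(light_members, light_hitrates):
> >>         total[members] += hitrate   # assign the light's hit rate to its voxels
> >>         count[members] += 1
> >>     # average over every light a voxel participated in
> >>     return np.where(count > 0, total / np.maximum(count, 1), np.nan)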
> >>
> >
> >
>