[pymvpa] Suspicious results

MS Al-Rawi rawi707 at yahoo.com
Mon Feb 28 16:24:01 UTC 2011


Also, have you done ROC analysis?
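
For reference, a minimal sketch of such an ROC analysis (this assumes
scikit-learn is available; the labels/scores arrays below are purely
illustrative -- in practice the scores would be the classifier's
per-sample decision values collected during cross-validation):

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    labels = np.array([0, 1, 0, 1, 1, 0])                # toy ground truth
    scores = np.array([-0.8, 0.3, 0.1, 0.9, 0.4, -0.2])  # toy decision values

    fpr, tpr, thresholds = roc_curve(labels, scores)
    print("AUC:", auc(fpr, tpr))  # ~0.5 means no class information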


----- Original Message ----
> From: Francisco Pereira <francisco.pereira at gmail.com>
> To: Development and support of PyMVPA 
><pkg-exppsy-pymvpa at lists.alioth.debian.org>
> Sent: Mon, February 28, 2011 4:11:27 PM
> Subject: Re: [pymvpa] Suspicious results
> 
> To all that Yaroslav is saying I would just add one suggestion: do you
> still get this sort of result if you permute your class labels
> (within scanner run, if you have multiple runs)? If you do, there is
> some contamination between the train and test sets in your analysis.
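>
> A sketch of that permutation in plain NumPy, so it is independent of the
> particular PyMVPA version (the targets/chunks arrays are illustrative --
> in practice they would come from your dataset's sample attributes):
>
>     import numpy as np
>
>     rng = np.random.RandomState(0)
>     targets = np.array(['A', 'B', 'A', 'B', 'A', 'B'])
>     chunks  = np.array([ 1,   1,   2,   2,   3,   3 ])
>
>     # shuffle labels only within each chunk (scanner run)
>     permuted = targets.copy()
>     for chunk in np.unique(chunks):
>         idx = np.where(chunks == chunk)[0]
>         permuted[idx] = targets[rng.permutation(idx)]
>
>     # rerun the cross-validation with the permuted labels; accuracies
>     # should now hover around chance unless something leaks between
>     # the training and test sets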
> 
> Francisco
> 
> On Mon,  Feb 28, 2011 at 9:14 AM, Yaroslav Halchenko
> <debian at onerussian.com>  wrote:
> >
> > On Mon, 28 Feb 2011, Nynke van der Laan  wrote:
> >> What I did is the following: I did a searchlight analysis (radius
> >> 10 mm)
> >
> > which makes it 20 mm in diameter, altogether meaning that you could get
> > "legally" above-chance performance with your searchlight centered
> > anywhere up to 1 cm away from the actual relevant activation point.
> > That would be one of the effects adding up to the heavy right tail in
> > your resulting distribution of performances.  To see how much of an
> > effect this is, reduce the radius to 1 mm and run the same searchlight
> > -- does the distribution lose its heavy >0.5 bias?
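> >
> > Something along these lines would do for that check, assuming the
> > current mvpa2 interface (your code may well use the older API; note
> > that sphere_searchlight() takes its radius in voxel units, not mm,
> > and 'ds' stands for your dataset):
> >
> >     import numpy as np
> >     from mvpa2.suite import (sphere_searchlight, CrossValidation,
> >                              NFoldPartitioner, LinearCSVMC)
> >
> >     # report accuracy per searchlight center instead of error
> >     cv = CrossValidation(LinearCSVMC(), NFoldPartitioner(),
> >                          errorfx=lambda p, t: np.mean(p == t))
> >     sl = sphere_searchlight(cv, radius=1)  # near single-voxel neighborhood
> >     res = sl(ds)                           # one accuracy per center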
> >
> >> brain mask). I used an NFoldCrossvalidation (no detrending or
> >> z-scoring).
> >
> > well, depending on the actual data and experimental design, the absence
> > of detrending might introduce confounds.
> >
> > Also, although you have mentioned that every chunk had its labels
> > balanced, what is the output of
> >
> >  dataset.summary()
> > ?
> >
> >
> > also, with no z-scoring and an untuned RBF (non-linear) SVM, I am not
> > sure it trained correctly per se... what is the "picture" if you use a
> > linear SVM? what if you introduce z-scoring and detrending?
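> >
> > A minimal sketch of that variant, again assuming the mvpa2 namespace
> > ('ds' is your dataset, with a 'chunks' sample attribute marking the
> > runs):
> >
> >     import numpy as np
> >     from mvpa2.suite import (poly_detrend, zscore, LinearCSVMC,
> >                              CrossValidation, NFoldPartitioner)
> >
> >     poly_detrend(ds, polyord=1, chunks_attr='chunks')  # per-run detrend
> >     zscore(ds, chunks_attr='chunks')                   # per-run z-scoring
> >     cv = CrossValidation(LinearCSVMC(), NFoldPartitioner(),
> >                          errorfx=lambda p, t: np.mean(p == t))
> >     print(cv(ds).samples)                              # per-fold accuracies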
> >
> >> I use two stimulus categories. The task I used consisted of 38 chunks
> >> (38 trials) with two stimulus presentations in each chunk (one of each
> >> category). I have used block averaging to reduce features.
> >
> > block averaging reduces samples, not features... ?
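> >
> > One way to express block averaging in current mvpa2 terms is a
> > mean_group_sample mapper: it collapses all samples sharing the same
> > target/chunk combination into a single mean sample, while the feature
> > count stays untouched ('ds' again stands for your dataset, assuming
> > 'targets'/'chunks' sample attributes):
> >
> >     from mvpa2.suite import mean_group_sample
> >
> >     avg = ds.get_mapped(mean_group_sample(['targets', 'chunks']))
> >     print(ds.nsamples, '->', avg.nsamples)    # fewer samples
> >     print(ds.nfeatures, '==', avg.nfeatures)  # same number of features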
> >
> >> Because I have two stimulus categories, the chance-level accuracy
> >> would thus be 0.5
> >
> > yes, unless samples are imbalanced across labels/chunks, in which case
> > the classifier might go for the 'over-represented' class.
> >
> >> correctly classified) So this would mean that there is predictive
> >> information in all regions of the brain...
> >
> > well -- more precisely, "every voxel seems to find a relevant diagnostic
> > neighbor within a 10 mm radius", so a voxel is not necessarily carrying
> > predictive information itself.
> >
> >> The highest  peaks are located at the borders of the brain.
> >
> > was the data motion corrected? was motion correlated with the design?
> > (what accuracy would you obtain by using motion-correction
> > parameters/characteristics, such as displacement, as your features?)
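> >
> > For that sanity check, something like the following could do (here
> > 'motion' is assumed to be an nsamples x 6 array loaded from your
> > realignment output, with 'targets' and 'chunks' matching the main
> > dataset):
> >
> >     import numpy as np
> >     from mvpa2.suite import (Dataset, CrossValidation,
> >                              NFoldPartitioner, LinearCSVMC)
> >
> >     mds = Dataset(motion, sa={'targets': targets, 'chunks': chunks})
> >     cv = CrossValidation(LinearCSVMC(), NFoldPartitioner(),
> >                          errorfx=lambda p, t: np.mean(p == t))
> >     print(cv(mds).samples)  # clearly above chance would hint at a
> >                             # motion/design confound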
> >
> > --
> >  =------------------------------------------------------------------=
> >  Keep in touch                                     www.onerussian.com
> > Yaroslav  Halchenko                 www.ohloh.net/accounts/yarikoptic
> >
> >  _______________________________________________
> > Pkg-ExpPsy-PyMVPA  mailing list
> > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
> > http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa
> >
> 
> _______________________________________________
> Pkg-ExpPsy-PyMVPA mailing list
> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
> http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa
> 

More information about the Pkg-ExpPsy-PyMVPA mailing list