[pymvpa] SMLR weights

Daqiang Sun sundaqiang at yahoo.com
Sat Jan 24 08:53:41 UTC 2009



----- Original Message ----
From: Daqiang Sun <sundaqiang at yahoo.com>
To: pkg-exppsy-pymvpa at lists.alioth.debian.org
Sent: Friday, January 23, 2009 6:31:03 PM
Subject: Re: [pymvpa] SMLR weights

Hi,



----- Original Message ----
From: Yaroslav Halchenko <debian at onerussian.com>
To: pkg-exppsy-pymvpa at lists.alioth.debian.org
Sent: Friday, January 23, 2009 5:34:29 PM
Subject: Re: [pymvpa] SMLR weights

> Hi, Thanks for your quick response!
you are welcome

> The save method is a nice trick, and it makes sense to use samples only for prediction.
btw -- if it is not too big of a secret -- why did you want to invoke .predict
manually? Most of the time it is sufficient to use
CrossValidatedTransferError or SplitClassifier...
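
(For the record, the construct I understand you to be suggesting is roughly the
one below; 'dataset' and the SMLR() classifier are just placeholders, so please
read it as a sketch rather than tested code.)

    from mvpa.suite import *

    # sketch: 'dataset' stands for the usual PyMVPA Dataset, chunked by run
    clf = SMLR()
    cve = CrossValidatedTransferError(TransferError(clf), NFoldSplitter())
    error = cve(dataset)  # splitting, training and prediction happen inside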


> No, not at all ;) In that case I was just curious to have a look at the values from the training data themselves.
But your asking actually led me to a question about cross-validation output, especially for the leave-one-out method.
I noticed the harvest_attribs option in CrossValidatedTransferError, and was wondering whether the sensitivities or
other measures harvested there should always be used in place of those from the full dataset without cross-validation.
For sensitivities in particular, CrossValidatedTransferError gives a set of sensitivity values for each run, and I'm not sure
how they could be combined. In SMLR, for instance, I guess there might be small shifts in which voxels get selected in each
leave-one-out run. Would a mean across runs still be a valid sensitivity? Any suggestions on that?
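
To make the question concrete, what I have been trying looks roughly like the
following (the harvest expression is adapted from the documentation examples as
I remember them, so it may need adjusting; clf and dataset are placeholders):

    import numpy as N
    from mvpa.suite import *

    clf = SMLR()
    cve = CrossValidatedTransferError(
            TransferError(clf),
            NFoldSplitter(),
            # ask the CV to store each fold's sensitivities as it goes; the
            # string is evaluated on the CV object after every split
            harvest_attribs=[
                'transerror.clf.getSensitivityAnalyzer(force_training=False)()'])
    error = cve(dataset)

    # if I read the API correctly, the harvested results end up in a dict
    # keyed by the expression above: one sensitivity vector per fold
    sens_per_fold = N.asarray(cve.harvested.values()[0])

    # the naive summary I am asking about: a plain mean across folds
    mean_sens = N.mean(sens_per_fold, axis=0)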


Sorry to keep bothering you with the same problem ;)
I found that an NFoldSplitter() can be added to SMLRWeights(SMLR()) and gives
a sensitivity vector. This, however, is apparently not the mean of the harvest_attribs one,
as the number of selected voxels is much smaller for the former. So it seems the voxel-shift
issue across CV runs is already dealt with. Is this the one that should be preferred
over the one from the full dataset (without cross-validation)? Is there a general form of
clf().getSensitivityAnalyzer() with an NFoldSplitter() option?
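
In case it clarifies what I mean by a "general form": the manual equivalent I
have in mind is simply to recompute the SMLR weights on the training part of
every split and average them afterwards -- a sketch, again assuming 'dataset'
is the usual Dataset with one chunk per run:

    import numpy as N
    from mvpa.suite import *

    sens_per_fold = []
    for wdata, vdata in NFoldSplitter()(dataset):
        # train SMLR on the working (training) part of this fold only and
        # take its weights as that fold's sensitivity vector
        sens_per_fold.append(SMLRWeights(SMLR())(wdata))

    # voxels that SMLR drops (weight 0) in some folds simply pull their mean
    # towards zero -- which is exactly the "voxel shift" I am worried about
    mean_sens = N.mean(N.asarray(sens_per_fold), axis=0)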

I know I need to go to the documentation / source code and read more carefully. I guess
for now a simple hint about what you would choose (or chose) on this for a paper would be
helpful enough. Thanks!

>Thanks again for your response!

>Best, Frank


> Yeah, the dot_prod is not that important. I was just trying to get a bit more of an idea of how it works.
> Have a nice weekend!
U2 ;-)

> Best, Frank
-- 
Yaroslav Halchenko
Research Assistant, Psychology Department, Rutgers-Newark
Student  Ph.D. @ CS Dept. NJIT
Office: (973) 353-1412 | FWD: 82823 | Fax: (973) 353-1171
        101 Warren Str, Smith Hall, Rm 4-105, Newark NJ 07102
WWW:    http://www.linkedin.com/in/yarik        

_______________________________________________
Pkg-ExpPsy-PyMVPA mailing list
Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa


---------------------------
Daqiang Sun
Clinical Neuroscience Lab
Department of Psychology
UCLA
---------------------------