<p>Thanks very much, Yarik.</p>
<div>The following is a summary of the dataset. For the HDF5 file, please see the attachments.</div>
<div> </div>
<div>Zonglei Zhen</div>
<p>---------------------------------------------------------------------------------------</p>
<p>In [4]: print dataset.summary()<br>Dataset: 72x821@float32, &lt;sa: chunks,time_indices,targets,time_coords&gt;, &lt;fa: voxel_indices&gt;, &lt;a: mapper,voxel_eldim,voxel_dim,imghdr&gt;<br>
stats: mean=-5.2752e-08 std=0.999996 var=0.999993 min=-3.02555 max=3.05246</p>
<p>Counts of targets in each chunk:<br> chunks\targets attf attm<br> --- ---<br> 0.0 6 6<br> 1.0 6 6<br> 2.0 6 6<br> 3.0 6 6<br> 4.0 6 6<br>
5.0 6 6</p>
<p>Summary for targets across chunks<br> targets mean std min max #chunks<br> attf 6 0 6 6 6<br> attm 6 0 6 6 6</p>
<p>Summary for chunks across targets<br> chunks mean std min max #targets<br> 0 6 0 6 6 2<br> 1 6 0 6 6 2<br> 2 6 0 6 6 2<br> 3 6 0 6 6 2<br> 4 6 0 6 6 2<br>
5 6 0 6 6 2<br>Sequence statistics for 72 entries from set ['attf', 'attm']<br>Counter-balance table for orders up to 2:<br>Targets/Order O1 | O2 |<br> attf: 30 6 | 24 12 |<br>
attm: 5 30 | 10 24 |<br>Correlations: min=-1 max=1 mean=-0.014 sum(abs)=35</p>
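<p>For what it's worth, the failing line in the traceback quoted below (gnbsearchlight.py line 461, <code>logpriors = np.log(priors[:, np.newaxis, np.newaxis])</code>) only works when <code>priors</code> is at least 1-d. A minimal numpy-only sketch of that failure mode (variable names are just illustrative, not PyMVPA's internals):</p>

```python
import numpy as np

# With one prior per class, priors is a 1-d array and the indexing
# from the traceback works as intended:
priors = np.array([0.5, 0.5])
logpriors = np.log(priors[:, np.newaxis, np.newaxis])
print(logpriors.shape)  # (2, 1, 1)

# But if priors collapses to a 0-d array, the very same expression
# raises the IndexError shown in the traceback:
priors_0d = np.asarray(0.5)
try:
    priors_0d[:, np.newaxis, np.newaxis]
except IndexError as exc:
    print("IndexError:", exc)
```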
<div class="gmail_quote">On Mon, Jul 25, 2011 at 11:17 PM, Yaroslav Halchenko <span dir="ltr"><<a href="mailto:debian@onerussian.com">debian@onerussian.com</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">my blunt guess is that it is due to unbalanced # of samples per each<br>chunk: i.e. in some data chunk having no samples of some label.<br>
<br>Could you share<br><br>print dataset.summary()<br><br>?<br><br>and ideally, if more help needed (;-)), dataset itself (h5save it)<br>so we could reproduce/fix it.<br><br>Cheers<br>
<div>
<div></div>
<div class="h5"><br>On Mon, 25 Jul 2011, Zhen Zonglei wrote:<br><br>> Hi,guys<br><br>> I used the following code to do GNB searchlight analysis,<br><br>> # GNB searchlight analysis<br><br>> dataset = alldata[timeselect,spatialslelect ]<br>
> sl=<br>> sphere_gnbsearchlight(GNB(),NFoldPartitioner(),radius=3,postproc=mean_s<br>> ample())<br>> res = sl(dataset)<br>> sphere_errors = res.samples[0]<br><br>> BUT, the errors were reported(see bellow). The version I used is<br>
> pymvpa 0.6.<br><br>> In addition, with the similar code, the general searchlight analysis<br>> can run sucessfully.<br><br>> # Genearal searchlight analysis<br>> 75 cv =<br>
> CrossValidation(linearCSVMC(),NFoldPartitioner(),errorfx=lambda p, t:<br>> np.mean(p == t),enabl e_ca=['stats'])<br>> 76 sl =<br>> sphere_searchlight(cv,radius=3,postproc=mean_sample())<br>
> 77 res = sl(dataset)<br>> 78 sphere_errors = res.samples[0]<br><br><br><br><br><br>> So, what happened in GNB searchlight analysis?<br><br><br><br>> Zonglei Zhen<br>
<br><br>> -----------------IndexError from GNB searchlight analysis------------------------<br><br>> In [2]: from gnbsearchlight import *<br>> Warning: divide by zero encountered in log<br>
> ERROR: An unexpected error occurred while tokenizing input<br>> The following traceback may be corrupted or invalid<br>> The error message is: ('EOF in multi-line statement', (43, 0))<br><br>> ---------------------------------------------------------------------------<br>> IndexError                                Traceback (most recent call last)<br><br>> //mystudy/code/python/&lt;ipython console&gt; in &lt;module&gt;()<br>> //mystudy/code/python/gnbsearchlight.py in &lt;module&gt;()<br>> 70 # GNB searchlight<br>> 71 sl = sphere_gnbsearchlight(GNB(), NFoldPartitioner(), radius=3, postproc=mean_sample())<br>> ---> 72 res = sl(dataset)<br>> 73 sphere_errors = res.samples[0]<br>> 74<br><br>> /usr/local/neurosoft/lib/python2.6/site-packages/mvpa/base/learner.pyc in __call__(self, ds)<br>> 220 "used and auto training is disabled."<br>> 221 % str(self))<br>> --> 222 return super(Learner, self).__call__(ds)<br>> 223<br>> 224<br><br>> /usr/local/neurosoft/lib/python2.6/site-packages/mvpa/base/node.pyc in __call__(self, ds)<br>> 74<br>> 75 self._precall(ds)<br>> ---> 76 result = self._call(ds)<br>> 77 result = self._postcall(ds, result)<br>> 78<br><br>> /usr/local/neurosoft/lib/python2.6/site-packages/mvpa/measures/searchlight.pyc in _call(self, dataset)<br>> 108<br>> 109 # pass to subclass<br>> --> 110 results, roi_sizes = self._sl_call(dataset, roi_ids, nproc)<br>> 111<br>> 112 if not roi_sizes is None:<br><br>> /usr/local/neurosoft/lib/python2.6/site-packages/mvpa/measures/gnbsearchlight.pyc in _sl_call(self, dataset, roi_ids, nproc)<br>> 459 norm_weight = -0.5 * np.log(2*np.pi*variances)<br>> 460 # last added dimension would be for ROIs<br>> --> 461 logpriors = np.log(priors[:, np.newaxis, np.newaxis])<br>> 462<br>> 463 if __debug__:<br><br>> IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index<br><br></div>
</div>> _______________________________________________<br>> Pkg-ExpPsy-PyMVPA mailing list<br>> <a href="mailto:Pkg-ExpPsy-PyMVPA@lists.alioth.debian.org">Pkg-ExpPsy-PyMVPA@lists.alioth.debian.org</a><br>> <a href="http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa" target="_blank">http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa</a><br>
<br><br>--<br>=------------------------------------------------------------------=<br>Keep in touch <a href="http://www.onerussian.com/" target="_blank">www.onerussian.com</a><br>Yaroslav Halchenko <a href="http://www.ohloh.net/accounts/yarikoptic" target="_blank">www.ohloh.net/accounts/yarikoptic</a><br>
</blockquote></div><br>