Hello! I was hoping to get a little push in the right direction. Thanks in advance for all of your help!

I'm now starting a basic analysis where, instead of looking at an entire time series, I will be using beta images for each condition (per run). There are 6 runs, so for each condition (e.g., "monkey") there is one beta image per run, and I just want to begin by doing an odd/even comparison (runs 0, 2, 4 vs. runs 1, 3, 5):

beta_0001.nii
beta_0002.nii
beta_0003.nii
beta_0004.nii
beta_0005.nii
beta_0006.nii
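To be concrete about the split I have in mind, here is a plain-NumPy sketch (just stand-in arrays, not my actual pipeline) of how the even/odd run labels would line up with these six images:

```python
import numpy as np

# One label per beta image (i.e., per run), run indices 0-5:
runs = np.arange(6)                      # [0 1 2 3 4 5]
parity = np.where(runs % 2 == 0, 'even', 'odd')
print(parity)                            # ['even' 'odd' 'even' 'odd' 'even' 'odd']

# Boolean masks for the two halves of the comparison:
even_mask = parity == 'even'
odd_mask = parity == 'odd'
print(runs[even_mask])                   # [0 2 4]
print(runs[odd_mask])                    # [1 3 5]
```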
When building this dataset, should I just concatenate these beta images (using vstack)? I only ask because I'm concerned about a quick sanity check I did, in which I'm getting perfect classification (1.0) and zero error on the cross-validation (0.0000). Here is the shape of my dataset after vstack-ing:

>>> detrended_orig_mds.shape
(6, 510340)
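In case it helps, this is the kind of stacking I mean, sketched with random arrays in place of the real flattened beta volumes (the 100-voxel count is made up; mine is 510340):

```python
import numpy as np

n_voxels = 100                    # stand-in for my real 510340 voxels
# Pretend each of these is one flattened beta volume:
betas = [np.random.randn(n_voxels) for _ in range(6)]

# Stacking them row-wise gives one sample (row) per beta image:
data = np.vstack(betas)
print(data.shape)                 # (6, 100)
```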
I assume this is the 6 beta images as rows, with voxels as columns. I've set up an attributes file for each beta image that has targets, chunks, etc. (a 1x4 array for each beta). My question: is the classifier (cross-validated on "chunks") actually getting access to the individual voxel data within the beta images in this scenario? Or is it classifying simply on the target category (which is the same for each beta)? It seems that maybe I should be assigning my attributes to each voxel, but I'm not quite sure how to do this (each voxel in a beta would be assigned to a particular chunk). Here's some relevant code:

>>> print mds.summary()
Dataset: 6x510340@float64, <sa: chunks,targets,time_coords,time_indices>, <fa: voxel_indices>, <a: imghdr,imgtype,mapper,voxel_dim,voxel_eldim>
stats: mean=-9.18523e-19 std=1.57984e-16 var=2.49591e-32 min=-1.11022e-15 max=1.11022e-15

Counts of targets in each chunk:
  chunks\targets  animal
                   ---
        0           1
        1           1
        2           1
        3           1
        4           1
        5           1

Summary for targets across chunks
  targets  mean  std  min  max  #chunks
  animal     1    0    1    1      6

Summary for chunks across targets
  chunks  mean  std  min  max  #targets
    0       1    0    1    1      1
    1       1    0    1    1      1
    2       1    0    1    1      1
    3       1    0    1    1      1
    4       1    0    1    1      1
    5       1    0    1    1      1
Sequence statistics for 6 entries from set ['animal']
Counter-balance table for orders up to 2:
Targets/Order  O1  |  O2  |
  animal:       5  |   4  |
Correlations: min=nan max=nan mean=nan sum(abs)=nan

>>> detrender = PolyDetrendMapper(polyord=1, chunks_attr='chunks')
>>> detrended_mds = mds.get_mapped(detrender)
>>> zscore(mds, chunks_attr=None)
>>> clf = kNN(k=1, dfx=one_minus_correlation, voting='majority')
>>> cv = CrossValidation(clf, NFoldPartitioner(attr='chunks'))
>>> cv_glm = cv(detrended_orig_mds)
>>> print '%.2f' % np.mean(cv_glm)
0.00

>>> print detrended_orig_mds.sa.int
['even' 'odd' 'even' 'odd' 'even' 'odd']
>>> detrended_orig_mds_split1 = detrended_orig_mds[detrended_orig_mds.sa.int == 'even']
>>> len(detrended_orig_mds_split1)
3
>>> detrended_orig_mds_split2 = detrended_orig_mds[detrended_orig_mds.sa.int == 'odd']
>>> len(detrended_orig_mds_split2)
3
>>> clf.train(detrended_orig_mds_split1)
>>> predictions = clf.predict(detrended_orig_mds_split2.samples)
>>> clf.set_postproc(BinaryFxNode(mean_mismatch_error, 'targets'))
>>> clf.train(detrended_orig_mds_split2)
>>> err = clf(detrended_orig_mds_split1)
>>> print np.asscalar(err)
0.0

>>> mds = fmri_dataset(samples=bold_fname, targets=attr.cat, chunks=attr.chunk)
>>> poly_detrend(mds, polyord=1, chunks_attr='chunks')
>>> mds = mds[np.array([l in ['animal'] for l in mds.sa.targets], dtype='bool')]
>>> cv = CrossValidation(SMLR(), OddEvenPartitioner(), errorfx=mean_mismatch_error)
>>> error = cv(mds)
mvpa2/clfs/smlr.py:375: RuntimeWarning: divide by zero encountered in divide
  lambda_over_2_auto_corr = (self.params.lm/2.)/auto_corr
mvpa2/clfs/smlr.py:217: RuntimeWarning: invalid value encountered in double_scalars
  w_new = w_old + grad/auto_corr[basis]
WARNING:
SMLR: detected ties in categories ['animal']. Small amount of noise will be injected into result estimates upon prediction to break the ties
>>> print "Error for %i-fold cross-validation on %i-class problem: %f" % (len(mds.UC), len(mds.UT), np.mean(error))
Error for 6-fold cross-validation on 1-class problem: 0.000000

>>> detrended_orig_mds.sa['int'] = ['even', 'odd', 'even', 'odd', 'even', 'odd']
>>> clf = kNN(k=1, dfx=one_minus_correlation, voting='majority')
>>> clf.train(detrended_orig_mds)
>>> predictions = clf.predict(detrended_orig_mds.samples)
>>> np.mean(predictions == detrended_orig_mds.sa.targets)
1.0
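One thing I notice in the output above is that it reports a 1-class problem (the only target level left is 'animal'). For reference, here is a toy NumPy illustration (hypothetical stand-in data, not my real pipeline) of why a single-target dataset can't produce anything but perfect accuracy — every prediction has to be drawn from the one training label:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(6, 100)                 # 6 samples, 100 features (stand-in data)
targets = np.array(['animal'] * 6)    # a single target level, as in my dataset

# Whatever a classifier does internally, its predictions come from the
# set of labels seen in training -- here that set is just {'animal'}:
predictions = np.full(6, 'animal')
accuracy = np.mean(predictions == targets)
print(accuracy)                       # 1.0
```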