[pymvpa] Hyperalignment: SVD did not converge

Swaroop Guntupalli swaroopgj at gmail.com
Mon May 21 19:30:19 UTC 2012


Yes, it computes Pearson correlations between all voxel pairs for every
pair of subjects.
As you said, it becomes tractable with an ROI. If you are using an ROI,
ANOVA-based feature selection might work as well, since all the
selected voxels from different subjects come from approximately the same
brain region (restricted by that ROI).
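
For example, a minimal sketch of the ANOVA-within-an-ROI idea (the file names,
the targets variable, and the number of voxels below are just placeholders for
your own data, not code taken from the tutorial):

    from mvpa2.datasets.mri import fmri_dataset
    from mvpa2.featsel.base import SensitivityBasedFeatureSelection
    from mvpa2.featsel.helpers import FixedNElementTailSelector
    from mvpa2.measures.anova import OneWayAnova

    # load one subject's data restricted to the ROI mask
    ds = fmri_dataset(samples='bold.nii.gz', targets=targets,
                      mask='roi_mask.nii.gz')

    # keep the 1000 voxels with the strongest ANOVA effect inside the ROI
    fselector = FixedNElementTailSelector(1000, tail='upper', mode='select')
    anova_fs = SensitivityBasedFeatureSelection(OneWayAnova(), fselector)
    anova_fs.train(ds)
    ds_fs = anova_fs(ds)

Doing the same per subject keeps every subject's selected voxels inside the
ROI, which is the main point here.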

Swaroop



On Mon, May 21, 2012 at 3:19 PM, Kiefer Katovich <kieferk at stanford.edu> wrote:

> As I understand it, the method you used in the Neuron paper when the
> subjects watched the movie involves multiplying one subject's data matrix
> by the transpose of another subject's?
>
> For example, in test_hyperalignment.py in pymvpa this is the code that I
> believe is doing that:
>
> for i, sd in enumerate(ds):
>     ds_temp = sd.copy()
>     zscore(ds_temp, chunks_attr=None)
>     for j, sd2 in enumerate(ds[i+1:]):
>         ds_temp2 = sd2.copy()
>         zscore(ds_temp2, chunks_attr=None)
>         corr_temp = np.dot(ds_temp.samples.T, ds_temp2.samples)
>         feature_scores[i] = feature_scores[i] + \
>                             np.max(corr_temp, axis=1)
>         feature_scores[j+i+1] = feature_scores[j+i+1] + \
>                                 np.max(corr_temp, axis=0)
>
>
> I had actually tried to use this type of feature selection originally, but
> the matrix multiplication ended up being so computationally expensive that
> I switched to the feature selection method in the hyperalignment tutorial
> on the website.
>
> I suppose if I use that feature selection with a predetermined ROI (that
> is small enough) then the computational requirements should become
> reasonable.
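>
> Something like the following is what I have in mind, as a rough sketch,
> assuming "ds" is the list of subject datasets already masked by the ROI (so
> only a few thousand voxels each):
>
>     import numpy as np
>     from mvpa2.mappers.zscore import zscore
>
>     feature_scores = [np.zeros(sd.nfeatures) for sd in ds]
>     for i, sd in enumerate(ds):
>         ds_temp = sd.copy()
>         zscore(ds_temp, chunks_attr=None)
>         for j, sd2 in enumerate(ds[i+1:]):
>             ds_temp2 = sd2.copy()
>             zscore(ds_temp2, chunks_attr=None)
>             # (unnormalized) voxel-by-voxel correlation matrix for this pair;
>             # with an ROI of a few thousand voxels this stays manageable,
>             # unlike the ~58000 x ~58000 whole-brain case
>             corr_temp = np.dot(ds_temp.samples.T, ds_temp2.samples)
>             feature_scores[i] += np.max(corr_temp, axis=1)
>             feature_scores[j+i+1] += np.max(corr_temp, axis=0)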
>
>
>
>
> On Sat, May 19, 2012 at 12:41 PM, Swaroop Guntupalli <swaroopgj at gmail.com>
> wrote:
> >
> > It's not so much about using ROIs based on prior assumptions as about
> making sure that you are selecting voxels from approximately the same
> regions of the brain in all your subjects. Your current approach doesn't
> necessarily guarantee that. This is independent of the SVD non-convergence
> error.
> > I am not sure if it would be appropriate/better, but you can try the
> method we used in our Neuron paper, which needs no trial information except
> that the TRs should all be aligned across subjects (make sure you do this
> if you randomized trials or runs for each subject).
> >
> > Swaroop
> >
> >
> >
> >
> > On Sat, May 19, 2012 at 3:21 PM, Kiefer Katovich <kieferk at stanford.edu>
> wrote:
> >>
> >> I could definitely use an ROI for the feature selection, but preferably
> I would be able to do a feature selection using the whole brain without any
> a priori assumptions about which areas will contain the best voxels. Is
> there a more appropriate feature selector than the one-way ANOVA that could
> potentially give me a better set of voxels from the whole brain to use in
> hyperalignment?
> >>
> >>
> >> On Sat, May 19, 2012 at 12:04 PM, Swaroop Guntupalli <
> swaroopgj at gmail.com> wrote:
> >>>
> >>> Looks like it could be a problem with the voxels selected. It is
> possible that selecting voxels one way vs. the other is leading to
> degenerate matrices for SVD to work on. If you have an independent ROI
> (functional/anatomical), try hyperalignment on the voxels within that ROI.
> >>>
> >>> Swaroop
> >>>
> >>>
> >>>
> >>>
> >>> On Fri, May 18, 2012 at 7:27 PM, Kiefer Katovich <kieferk at stanford.edu>
> wrote:
> >>>>
> >>>> Hi again,
> >>>>
> >>>> I will try those two ways of outputting the feature selection, they
> both seem straightforward.
> >>>>
> >>>> I spoke too soon on the SVD non-convergence issue. As it turns out, it
> really seems to depend on how I set the targets for the datasets. SVD seems
> to converge better when there are more "classes" set with the targets. For
> example, if I set up 8 different categories in my data, SVD can
> converge on the entire dataset, but if I set it to be binary it has a lot
> of trouble converging.
> >>>>
> >>>> For example, I set the first two TRs of every trial to 1 and all
> other TRs to 0 in the targets file. If I use this for feature selection and
> then hyperalignment, it does not converge with most combinations of
> subjects.
> >>>>
> >>>> However, if I designate trialtype and specific TRs within the targets
> file it is able to converge with all the subjects.
> >>>>
> >>>> I'm not too clear on why this would be the case. I assume that the
> ANOVA feature selection is picking out the voxels that best match the time
> series of the targets that you assign to each TR? I know that in a
> univariate analysis with the binary targets there are definitely voxels
> that correlate significantly with the 1s (first two TRs of every trial), so
> I figured that the feature selector would probably pull those out.
> >>>>
> >>>> I hope that wasn't too confusing. I'm just wondering if there is some
> criterion that I am missing when assigning the target file that is
> necessary for hyperalignment to run correctly.
> >>>>
> >>>> Thank you,
> >>>> Kiefer
> >>>>
> >>>>
> >>>>
> >>>> On Fri, May 18, 2012 at 12:29 PM, Swaroop Guntupalli <
> swaroopgj at gmail.com> wrote:
> >>>>>
> >>>>> Hi Kiefer,
> >>>>>
> >>>>> Glad that it's working.
> >>>>> Whatever mapper you are using (StaticFeatureSelection?) should have
> a slicearg argument that contains the list of selected voxel indices.
> >>>>> Another way is to create an array of ones of the same size as the
> number of features selected and pass it backward through the mappers that
> the data went through before hyperalignment
> (mapper_name.reverse(new_data)), which should put the data back in the
> original space. You can then use map2nifti to write those selected voxels
> (as ones) into a nifti file.
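> >>>>> For example, roughly (just a sketch; "sbfs" stands for whatever
> trained feature-selection mapper you used, "ds_sel" for the feature-selected
> dataset, and "ds_orig" for the original dataset, so substitute your own names):
> >>>>>
> >>>>>     import numpy as np
> >>>>>     from mvpa2.datasets.mri import map2nifti
> >>>>>
> >>>>>     # ones for every selected feature ...
> >>>>>     ones = np.ones((1, ds_sel.nfeatures))
> >>>>>     # ... mapped back into the original (pre-selection) feature space
> >>>>>     back = sbfs.reverse(ones)
> >>>>>     # write them out as a mask-like image in the subject's space
> >>>>>     map2nifti(ds_orig, back).to_filename('selected_voxels.nii.gz')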
> >>>>> Does that make sense?
> >>>>>
> >>>>> Best,
> >>>>> Swaroop
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Fri, May 18, 2012 at 3:02 PM, Kiefer Katovich <
> kieferk at stanford.edu> wrote:
> >>>>>>
> >>>>>> Hey Swaroop,
> >>>>>>
> >>>>>> I actually managed to fix the SVD non-convergence issue. It turns
> out that I had foolishly not been lagging my data for the hemodynamic
> response. Once I lagged the targets appropriately, I was able to hyperalign
> all of the brains without encountering any SVD problems.
> >>>>>>
> >>>>>> I would like to visualize the features that are being selected and
> that hyperalignment is using for the transformation. What should I do after
> performing the OneWayAnova and the StaticFeatureSelector to save those
> selected features into a nifti that I can overlay on the subjects' brains?
> It would be really nice to know which areas of the brain end up being
> selected for alignment (I am allowing it to choose the top 5% of all voxels
> in the brain).
> >>>>>>
> >>>>>> Thanks for your help!
> >>>>>> Kiefer
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Fri, May 18, 2012 at 6:33 AM, Swaroop Guntupalli <
> swaroopgj at gmail.com> wrote:
> >>>>>>>
> >>>>>>> Hi Kiefer,
> >>>>>>>
> >>>>>>> Sorry for the late response (I blame abstract submission deadlines).
> >>>>>>>
> >>>>>>> I sometimes (though very rarely) encounter this SVD non-convergence
> problem.
> >>>>>>> One workaround I use (and that works for me) is to try a different
> SVD implementation: dgesvd instead of numpy (an option in ProcrusteanMapper).
> >>>>>>> If it doesn't work with any SVD implementation, the matrix is
> probably bad for some reason, which might mean one or more of the data
> matrices is messed up (the SVD is run on the product of two data matrices),
> so make sure you exclude all invariant voxels from the data (you can do
> that using "remove_invariant_features").
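> >>>>>>> Roughly along these lines (just a sketch, so double-check the exact
> argument names against your PyMVPA version; "datasets" stands for your list
> of per-subject datasets):
> >>>>>>>
> >>>>>>>     from mvpa2.algorithms.hyperalignment import Hyperalignment
> >>>>>>>     from mvpa2.datasets.miscfx import remove_invariant_features
> >>>>>>>     from mvpa2.mappers.procrustean import ProcrusteanMapper
> >>>>>>>
> >>>>>>>     # drop zero-variance voxels before computing the alignment
> >>>>>>>     datasets = [remove_invariant_features(ds) for ds in datasets]
> >>>>>>>
> >>>>>>>     # ask for the LAPACK dgesvd routine instead of numpy's SVD
> >>>>>>>     hyper = Hyperalignment(
> >>>>>>>         alignment=ProcrusteanMapper(svd='dgesvd', space='commonspace'))
> >>>>>>>     mappers = hyper(datasets)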
> >>>>>>> HTH.
> >>>>>>> Keep us posted on your progress.
> >>>>>>>
> >>>>>>> Thanks,
> >>>>>>> Swaroop
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> On Tue, May 1, 2012 at 2:38 PM, Kiefer Katovich <
> kieferk at stanford.edu> wrote:
> >>>>>>>>
> >>>>>>>> Hi again,
> >>>>>>>>
> >>>>>>>> Sorry my messages keep starting new threads; I've been receiving
> >>>>>>>> email in digest mode but changed my settings to single mail, so I
> >>>>>>>> should be able to reply properly soon.
> >>>>>>>>
> >>>>>>>> First off, I re-ran the iterative test of hyperalignment starting
> >>>>>>>> with a different set of subjects. This time 8 of the 21 subjects
> >>>>>>>> managed to be hyperaligned to each other, and most of the successful
> >>>>>>>> subjects were different from those in the last batch. I just did this
> >>>>>>>> to confirm that the success of hyperalignment is contingent upon the
> >>>>>>>> particular set of datasets that you put into it, and not just that
> >>>>>>>> some subjects were bad and others good.
> >>>>>>>>
> >>>>>>>> Now, on to your comments:
> >>>>>>>>
> >>>>>>>> Thanks for the clarification on hyperalignment and SVD. I should
> >>>>>>>> probably read the source code to get a better idea of exactly what
> >>>>>>>> hyperalignment and the Procrustean transformation are attempting to
> >>>>>>>> do with the datasets I give it.
> >>>>>>>>
> >>>>>>>> By "classification error" I only meant the way in which I had coded
> >>>>>>>> the time points of the dataset into separate classes, not actually
> >>>>>>>> running a classification algorithm. Sorry for the confusion; that
> >>>>>>>> was poor phrasing on my part.
> >>>>>>>>
> >>>>>>>> A related question: how much of an impact does the coding of time
> >>>>>>>> points have on hyperalignment? I assume that the feature selector,
> >>>>>>>> such as OneWayAnova, chooses features according to the "targets" that
> >>>>>>>> you assign to each time point, and that this is then fed into
> >>>>>>>> hyperalignment and the Procrustean mapper?
> >>>>>>>>
> >>>>>>>> Here are some details on my data:
> >>>>>>>>
> >>>>>>>> 432 time points
> >>>>>>>> ~58000 voxels per time point (whole brain, masked)
> >>>>>>>> 3000 features selected using FixedNElementTailSelector
> >>>>>>>>
> >>>>>>>> I assumed that it is only the 3000 features from the tail selector
> >>>>>>>> that hyperalignment and the Procrustean mapper use to compute the
> >>>>>>>> alignment?
> >>>>>>>>
> >>>>>>>> Ideally I would not have to mask down to a specific area of the brain
> >>>>>>>> prior to the feature selection. For this data, I would prefer not to
> >>>>>>>> make an initial assumption about which brain areas contain the best
> >>>>>>>> features for alignment.
> >>>>>>>>
> >>>>>>>> Thank you,
> >>>>>>>>
> >>>>>>>> Kiefer
> >>>>>>>>