[pymvpa] Hyperalignment: SVD did not converge

Swaroop Guntupalli swaroopgj at gmail.com
Fri May 18 19:29:02 UTC 2012


Hi Kiefer,

Glad that it's working.
Whatever mapper you are using (StaticFeatureSelection?) should have a
slicearg argument that contains the list of selected voxel indices.
Another way is to create an array of ones the same size as the number of
features selected and pass it backward through the mappers the data went
through before hyperalignment (mapper_name.reverse(new_data)), which
should put the data back in the original space. You can then use map2nifti
to write those selected voxels (as ones) out to a NIfTI file.
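For example, something along these lines should do it (an untested sketch;
"fsel" and "ds" are just placeholders for your feature-selection mapper and
the full-feature dataset it was applied to, and the filename is arbitrary):

    import numpy as np
    from mvpa2.datasets.mri import map2nifti

    # fsel: the StaticFeatureSelection (or similar) mapper used before hyperalignment
    # ds:   the full-feature dataset it was applied to
    ones = np.ones((1, fsel.forward(ds).nfeatures))  # one value per selected feature
    back = fsel.reverse(ones)    # back-project; unselected voxels become zeros
    img = map2nifti(ds, back)    # ds still carries the volume geometry
    img.to_filename('selected_voxels.nii.gz')
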
Does that make sense?

Best,
Swaroop



On Fri, May 18, 2012 at 3:02 PM, Kiefer Katovich <kieferk at stanford.edu> wrote:

> Hey Swaroop,
>
> I actually managed to fix the SVD non-convergence issue. Turns out that I
> had foolishly not been lagging my data for the hemodynamic response. Once I
> lagged the targets appropriately, I was able to hyperalign all of the
> brains without encountering any SVD problems.
>
> I would like to visualize the features that are selected and that
> hyperalignment uses for the transformation. What should I do after
> performing the OneWayAnova and the StaticFeatureSelector to save those
> selected features into a NIfTI file that I can overlay on the subjects'
> brains? It would be really nice to know which areas of the brain end up
> being selected for alignment (I am allowing it to choose the top 5% of
> voxels anywhere in the brain).
>
> Thanks for your help!
> Kiefer
>
>
>
> On Fri, May 18, 2012 at 6:33 AM, Swaroop Guntupalli <swaroopgj at gmail.com> wrote:
>
>> Hi Kiefer,
>>
>> Sorry for the late response (I blame abstract submission deadlines).
>>
>> I sometimes (though rarely) encounter this SVD non-convergence problem.
>> One workaround I use (and that works for me) is to try a different SVD
>> implementation: dgesvd instead of numpy (an option in ProcrusteanMapper).
>> If it doesn't work with any SVD implementation, the matrix is probably
>> bad for some reason, which might mean one or more of the data matrices
>> is messed up (the SVD is run on the product of two data matrices), so
>> make sure you exclude all invariant voxels from the data (you can do
>> that using "remove_invariant_features").
>> HTH.
>> Keep us posted on your progress.
>>
>> Thanks,
>> Swaroop
>>
>>
>>
>> On Tue, May 1, 2012 at 2:38 PM, Kiefer Katovich <kieferk at stanford.edu> wrote:
>>
>>> Hi again,
>>>
>>> Sorry my messages keep starting new threads; I've been receiving email
>>> in digest mode but changed my settings to single mail, so I should be
>>> able to reply properly soon.
>>>
>>> First off – I re-ran the iterative test of hyperalignment starting
>>> with a different set of subjects. This time 8 of the 21 subjects
>>> managed to be hyperaligned to each other, and most of the successful
>>> subjects were different than in the last batch. I just did this to
>>> confirm that the success of hyperalignment is contingent upon the
>>> unique set of datasets that you put into it, and not just that some
>>> subjects were bad and others good.
>>>
>>> Now, on to your comments:
>>>
>>> Thanks for the clarification on hyperalignment and SVD. I should
>>> probably read the source code to get a better idea of exactly what
>>> hyperalignment and the procrustean transformation is attempting to do
>>> with the datasets I give it.
>>>
>>> By "classification error" I only meant the way in which I had coded
>>> the time points of the dataset into separate classes, not actually
>>> running a classification algorithm. Sorry for the confusion; that was
>>> poor phrasing on my part.
>>>
>>> A related question: how much of an impact does the coding of time
>>> points have on hyperalignment? I assume that the feature selector,
>>> such as OneWayAnova, chooses features according to the "targets" that
>>> you assign to each time point, and that this is then fed into
>>> hyperalignment and the procrustean transformation?
>>>
>>> Here are some details on my data:
>>>
>>> 432 time points
>>> ~58000 voxels per time point (whole brain, masked)
>>> 3000 features selected using FixedNElementTailSelector
>>>
>>> Am I right in assuming that it is only the 3000 features from the tail
>>> selector that hyperalignment and the procrustean mapper use to compute
>>> the alignment?
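>>>
>>> For concreteness, my selection step is essentially the standard PyMVPA
>>> pattern, roughly like this (a simplified sketch, not my exact script):
>>>
>>>     from mvpa2.suite import (OneWayAnova, FixedNElementTailSelector,
>>>                              SensitivityBasedFeatureSelection)
>>>
>>>     # per-subject ANOVA-based selection of the 3000 highest-F voxels
>>>     fsel = SensitivityBasedFeatureSelection(
>>>         OneWayAnova(),
>>>         FixedNElementTailSelector(3000, tail='upper', mode='select'))
>>>     fsel.train(ds)          # uses ds.sa.targets, i.e. the time-point coding
>>>     ds_selected = fsel(ds)  # these 3000 features go on to hyperalignment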
>>>
>>> Ideally I would not have to mask out to a specific area of the brain
>>> prior to the feature selection. I prefer, for this data, to not make
>>> an initial assumption about which brain areas contain the best
>>> features for alignment.
>>>
>>> Thank you,
>>>
>>> Kiefer
>>>

