[pymvpa] question about cross-subject analysis

John Magnotti john.magnotti at gmail.com
Wed Jan 18 15:30:24 UTC 2012


Hi All,

I'm trying to build a cross-subject analysis using the Haxby et
al. data (http://data.pymvpa.org/datasets/haxby2001/). The problem is
that the masks for each subject don't necessarily cover the same
voxels. Poldrack et al. [1] mention using an intersection mask to
ensure they were looking at the same voxels across subjects. Is there
a way to do this in PyMVPA, and should I convert to standard space
beforehand? I could also just use the whole timeseries, but I think
there would still be the issue of ensuring that the voxels "match"
across subjects, right?
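For what it's worth, here is a minimal sketch of the kind of
intersection mask I had in mind, assuming the per-subject masks have
already been brought into a common space and loaded as same-shape
boolean arrays (the random arrays below are just stand-ins for masks
one would load with nibabel or fmri_dataset):

```python
import numpy as np

# Stand-ins for per-subject boolean mask volumes in a common space.
rng = np.random.default_rng(0)
subject_masks = [rng.random((4, 4, 4)) > 0.3 for _ in range(3)]

# Voxel-wise logical AND: keep only voxels present in every subject's mask.
intersection = np.logical_and.reduce(subject_masks)

print(intersection.shape, int(intersection.sum()))
```

The resulting array could then be passed as the mask when loading each
subject's dataset, so every subject contributes the same set of voxels.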

Any hints or tips would be much appreciated.


Thanks,

John


1. Poldrack, R. A., Halchenko, Y. O., & Hanson, S. J. (2009). Decoding
the large-scale structure of brain function by classifying mental
states across individuals. Psychological Science, 20(11), 1364-72.
doi:10.1111/j.1467-9280.2009.02460.x



More information about the Pkg-ExpPsy-PyMVPA mailing list