[pymvpa] the effect of ROI size on classification accuracy

Meng Liang meng.liang at hotmail.co.uk
Sun Jul 20 16:10:09 UTC 2014




Dear Jo,
Thanks for your reply! 
I generated a series of smoothed images with Gaussian sigma from 1 mm to 5 mm using the same code (a for loop over the different sigma values, calling the FSL smoothing command). Smoothing was done on the 4D nifti file directly, so it is unlikely to have changed the order of the 3D volumes. On visual inspection, the unsmoothed image and the smoothed image with sigma = 1 mm look almost identical. The classification accuracies for the different datasets and ROIs were the following:

======================================================
        sigma0  sigma1  sigma2  sigma3  sigma4  sigma5
ROI1    0.7500  0.7917  0.8333  0.8750  0.8750  0.8750
ROI2    0.7917  0.7917  0.7500  0.7500  0.6667  0.6667
ROI3    0.7917  0.7917  0.7500  0.7500  0.6250  0.5833
======================================================
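
For reference, the smoothing step was essentially a loop along these lines (just a sketch; the file names are made up, and it assumes FSL's fslmaths is on the PATH):

import subprocess

bold_4d = 'bold.nii.gz'  # original 4D functional image (hypothetical name)

for sigma in range(1, 6):  # Gaussian sigma in mm; FWHM is roughly 2.355 * sigma
    out = 'bold_sigma%d.nii.gz' % sigma
    # fslmaths -s smooths each volume spatially with a Gaussian kernel of the
    # given sigma (in mm), so it should not reorder the 3D volumes.
    subprocess.check_call(['fslmaths', bold_4d, '-s', str(sigma), out])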
Now my impression is that it wasn't due to some mistake, but that smoothing somehow changed the distribution of the data points in the hyperspace in a strange way for ROI3, so that the classification accuracy changed. I guess this is theoretically possible.
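
To rule out a reordering more rigorously than by eye, one could also compare the volumes numerically, for example with something like this sketch (assuming nibabel and numpy are available, and again with made-up file names):

import nibabel as nib
import numpy as np

orig = nib.load('bold.nii.gz').get_fdata()         # shape (x, y, z, t)
smth = nib.load('bold_sigma1.nii.gz').get_fdata()  # same shape

n_vols = orig.shape[-1]
orig2d = orig.reshape(-1, n_vols).T  # one row per 3D volume
smth2d = smth.reshape(-1, n_vols).T

# Correlate every unsmoothed volume with every smoothed volume.
corr = np.corrcoef(orig2d, smth2d)[:n_vols, n_vols:]

# If the volume order is intact, each row's maximum sits on the diagonal.
print(np.all(corr.argmax(axis=1) == np.arange(n_vols)))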
If this is true, it raises another question: can we use smoothing as a way to test whether it is the fine-grained pattern across neighbouring voxels or the very coarse pattern across different brain regions that drives the successful classification? The above example seems to make the interpretation of the results from such a test a bit complicated, as smoothing can have a very different effect on a combined ROI (ROI3) than on the separate ROIs (ROI1 and ROI2). Any thoughts?
Best,
Meng


> Date: Fri, 18 Jul 2014 16:53:54 -0500
> From: jetzel at artsci.wustl.edu
> To: pkg-exppsy-pymvpa at lists.alioth.debian.org
> Subject: Re: [pymvpa] the effect of ROI size on classification accuracy
> 
> 
> On 7/18/2014 12:06 PM, Meng Liang wrote:
> > That's one reason I'm puzzled about the results. Having said that,
> > sigma=5mm smoothing equals FWHM=11.8mm smoothing, so the smoothed
> > image does look considerably smoother than the unsmoothed image.
> That helps - I'm more used to thinking in FWHM. 11.8 mm with 2x2x2 mm voxels
> is fairly substantial and is likely to make some sort of difference in the
> results.
> 
> > I was also wondering whether this was due to some mistakes. But all
> > results were generated from the same code (the only difference is the
> > nifti image files being read into the script). Not sure what other
> > things to check... Ideas?
> Hmm. So you have 4d niftis with the (smoothed or not) functional data,
> plus 3d niftis with the ROI masks, and just send different 4d niftis to
> the same classification code? I think you're right then to look at the
> smoothed niftis. Perhaps something went strange with the smoothing
> procedure, say resulting in some sort of reordering? You could try
> something like running the images through the smoothing code, but with
> zero (or nearly zero) smoothing, which shouldn't change the actual
> functional data, to see if it turns up anything weird (i.e. if the
> zero-smoothed images don't exactly match the before-smoothing images).
> 
> Jo
> 
> 
> -- 
> Joset A. Etzel, Ph.D.
> Research Analyst
> Cognitive Control & Psychopathology Lab
> Washington University in St. Louis
> http://mvpa.blogspot.com/
> 
