<div dir="ltr">Thank you very much for your advice!<br>
I have one more related question:<br>
How reliable, in your opinion, would it be to test the significance of the
classification with a one-sample t-test against 0.5, where my vector of
classification results contains one accuracy per subject? In other words,
subject A's prediction was 0.6, B's 0.52, C's 0.55, etc., and I take all
those values as the input to the t-test. The values are independent and the
normality condition is also fulfilled (I can check it with the Lilliefors
test). <br><br><br><div class="gmail_quote">On Wed, May 18, 2011 at 10:54 PM, Yaroslav Halchenko <span dir="ltr"><<a href="mailto:debian@onerussian.com">debian@onerussian.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="im"><br>
On Wed, 18 May 2011, J.A. Etzel wrote:<br>
</div><div class="im">> I agree; I would be worried if the *middle* of the permutation<br>
> distribution was around 0.6, but a wide distribution such that 0.6 is in<br>
> the top 0.05 can happen.<br>
<br>
</div>yeap -- and there could even be heavy tails at 0.7, 0.8, and even 0.9 --<br>
everything depends, especially on the number of trials ;)<br>
<div class="im"><br>
> >permute truly independent (must be in the correct design) items:<br>
> >sequences of trials across runs: i.e. take sequence of labels from<br>
> >run 1, and place it into run X, and so across all runs. That should<br>
> >account for possible inter-trial dependencies within runs, and thus I<br>
> >would expect that distribution would get even slightly wider (than if<br>
> >permuted within each run)<br>
> Not sure I follow ... you mean taking the order of trials from one<br>
> run and copying it to another, then partitioning on the runs?<br>
<br>
</div>I guess "yes", if "partitioning on the runs" means "splitting into<br>
training and testing sets for cross-validation".<br>
<div class="im"><br>
> >please correct me if I am wrong -- under permutation of samples<br>
> >labels, those must differ regardless of block structure, simple due<br>
> >to the change of number of trials (just compare binomial<br>
> >distributions for 2 trials vs 4 ;) )<br>
> Yes, the change in the variance of the permutation distribution<br>
> could be just from the smaller number of samples. But I can imagine<br>
> setting up dodgy classifications of individual trials from block<br>
> designs that could also make the permutation distributions change<br>
> (not that Vadim did that!), so wanted to mention double-checking the<br>
> not-averaged partitioning scheme.<br>
<br>
</div>yeap ;)<br>
<div class="im"><br>
--<br>
=------------------------------------------------------------------=<br>
Keep in touch <a href="http://www.onerussian.com" target="_blank">www.onerussian.com</a><br>
Yaroslav Halchenko <a href="http://www.ohloh.net/accounts/yarikoptic" target="_blank">www.ohloh.net/accounts/yarikoptic</a><br>
<br>
_______________________________________________<br>
</div><div><div></div><div class="h5">Pkg-ExpPsy-PyMVPA mailing list<br>
<a href="mailto:Pkg-ExpPsy-PyMVPA@lists.alioth.debian.org">Pkg-ExpPsy-PyMVPA@lists.alioth.debian.org</a><br>
<a href="http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa" target="_blank">http://lists.alioth.debian.org/mailman/listinfo/pkg-exppsy-pymvpa</a><br>
</div></div></blockquote></div><br></div>
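The per-subject t-test against chance asked about above can be sketched as follows. This is a minimal illustration, not anything from the thread: the accuracies beyond the three quoted values (0.60, 0.52, 0.55) are made up, the function name `test_above_chance` is mine, and `scipy.stats.shapiro` (Shapiro-Wilk) stands in for the Lilliefors test, which plain SciPy does not provide (statsmodels has `statsmodels.stats.diagnostic.lilliefors`).

```python
# Sketch: one-sample t-test of per-subject accuracies against chance (0.5).
import numpy as np
from scipy import stats

def test_above_chance(accuracies, chance=0.5):
    """One-sided, one-sample t-test of mean accuracy against chance.

    Returns the t statistic, the one-sided p-value, and the p-value of a
    Shapiro-Wilk normality check (stand-in for Lilliefors).
    """
    accuracies = np.asarray(accuracies, dtype=float)
    # Normality check on the subject accuracies
    w_stat, p_normal = stats.shapiro(accuracies)
    # One-sided test: is the mean accuracy greater than chance?
    t_stat, p_value = stats.ttest_1samp(accuracies, chance,
                                        alternative='greater')
    return t_stat, p_value, p_normal

# Hypothetical subjects A..F; A-C match the values quoted in the email
accs = [0.60, 0.52, 0.55, 0.58, 0.54, 0.61]
t_stat, p_value, p_normal = test_above_chance(accs)
```

Note that with only a handful of subjects the normality check has very little power, which is one reason the thread's permutation-based approach is a common alternative.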
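The "permute label sequences across runs" scheme discussed in the quoted exchange (take the sequence of labels from one run and place it into another, preserving within-run trial order) might be sketched like this. The helper name `permute_labels_across_runs` and the equal-length-runs assumption are mine, not from the thread:

```python
import numpy as np

def permute_labels_across_runs(labels, runs, rng):
    """Reassign whole within-run label sequences to (possibly) different
    runs, keeping each sequence's trial order intact.

    Hypothetical helper; assumes every run has the same number of trials.
    """
    labels = np.asarray(labels)
    runs = np.asarray(runs)
    run_ids = np.unique(runs)
    # One ordered label sequence per run
    sequences = [labels[runs == r] for r in run_ids]
    # Shuffle which run receives which sequence
    order = rng.permutation(len(run_ids))
    permuted = labels.copy()
    for target_run, source_idx in zip(run_ids, order):
        permuted[runs == target_run] = sequences[source_idx]
    return permuted

rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])
runs = np.array([1, 1, 1, 1, 2, 2, 2, 2])
shuffled = permute_labels_across_runs(labels, runs, rng)
```

Because entire sequences move between runs unchanged, any inter-trial dependency structure within a run is preserved under the permutation, which is the point made in the quoted discussion.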