From robbenson18 at gmail.com  Thu Jul 2 15:24:13 2015
From: robbenson18 at gmail.com (Roberto Guidotti)
Date: Thu, 2 Jul 2015 17:24:13 +0200
Subject: [pymvpa] Time resolved decoding and classifier customization.
Message-ID:

Dear all,

Suppose I have two conditions, n runs with m trials, and each trial is composed of t volumes (frames). I would like to do a time-wise (time-resolved) decoding, to obtain a decoding-accuracy curve across time frames. I have seen several approaches to this:

1) train the classifier on t datasets, one per time frame, using only the volumes belonging to that frame, which yields t accuracies;
2) train the classifier on the within-trial averaged dataset (mean of the t frames) and then test on each time frame separately, which also yields t accuracies.

Q1: Which approach would you use? Are there other approaches?

I would like to implement the second one in PyMVPA, so the classifier needs to average over the training chunk of the dataset during the training step. As a small test I extended the LinearCSVMC class, overriding the _train function:

def _train(self, ds):
    avg_mapper = mean_group_sample(['trial'])  # I build my ds with this attr
    ds = ds.get_mapped(avg_mapper)
    return LinearCSVMC._train(self, ds)

Moreover, I implemented an error class to compute the time-wise test error, to be passed to CrossValidation:

class ErrorPerTrial(BinaryFxNode):
    def _call(self, ds):
        [...same code of the parent class...]
        err = [self.fx(values[ds.sa.frame == i], targets[ds.sa.frame == i])
               for i in np.unique(ds.sa.frame)]  # I compute a list of fx values

I think the ErrorPerTrial class is almost good (although a more elegant solution is welcome :)), while extending every classifier by hand seems like the worst solution; maybe a decorator or a wrapper would be a better option.

Q2: How should this be implemented (if it is not already implemented in PyMVPA)? Could a solution be a TrialAverager class that extends Learner, takes a classifier as an input parameter, and performs the averaging?

Sorry for the tricky and long post.

Thank you,
Roberto
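One way to avoid extending every classifier by hand, as Roberto asks in Q2, is a small class factory that wraps an arbitrary PyMVPA classifier class and averages the training samples within each trial before handing them to the original _train. This is only a minimal sketch of the idea under the same assumptions as Roberto's snippet (a sample attribute named 'trial' marks the volumes of one trial); the name trial_averaging is made up for illustration and is not an existing PyMVPA API:

from mvpa2.suite import mean_group_sample, LinearCSVMC

def trial_averaging(clf_class):
    """Return a subclass of `clf_class` that trains on trial-averaged samples."""
    class TrialAveragingClassifier(clf_class):
        def _train(self, ds):
            # collapse all volumes sharing the same 'trial' value into one mean sample
            ds_avg = ds.get_mapped(mean_group_sample(['trial']))
            return clf_class._train(self, ds_avg)
    return TrialAveragingClassifier

# usage: training sees trial averages, while testing still sees single volumes,
# so a frame-wise error node (like ErrorPerTrial above) can produce the t accuracies
clf = trial_averaging(LinearCSVMC)()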
From 201421210014 at mail.bnu.edu.cn  Thu Jul 9 01:50:30 2015
From: 201421210014 at mail.bnu.edu.cn (=?UTF-8?B?5a2U5Luk5Yab?=)
Date: Thu, 9 Jul 2015 09:50:30 +0800 (GMT+08:00)
Subject: [pymvpa] Multiple datasets in one hdf5 file
Message-ID:

Hi,

Thank you so much for the replies!! I now have 15 subjects' datasets in one file, but I seem to still be missing something. (I used h5save('/tmp/out.h5', [ds1, ds2]) to make the one file consist of the 15 subjects.)

Working through the hyperalignment tutorial, after loading with h5load, z-scoring the individual datasets or inserting subject IDs into the individual datasets doesn't work with my combined dataset file (missing some attributes). It definitely works when using the example hyperalignment dataset file, though.

Pardon me for the repeated questions on this. Would anyone please let me know if there is something I am neglecting to do before saving all the datasets into the one file? Thank you very much!
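For reference, a minimal sketch of saving and re-loading several subjects' datasets in a single HDF5 file; ds_list is assumed to be a plain Python list holding all 15 individual Dataset objects, and the file name is arbitrary:

import numpy as np
from mvpa2.suite import h5save, h5load, zscore

# save the whole list of Dataset objects into one file
h5save('/tmp/all_subjects.h5', ds_list)

# h5load returns that same list; every element should be a Dataset, not a list
ds_all = h5load('/tmp/all_subjects.h5')
assert all(hasattr(ds, 'sa') for ds in ds_all)

# per-subject z-scoring and subject IDs, as in the hyperalignment tutorial
_ = [zscore(ds) for ds in ds_all]
for i, sd in enumerate(ds_all):
    sd.sa['subject'] = np.repeat(i, len(sd))

If the assert fails, the loaded elements are probably themselves lists rather than Datasets, which is what the errors reported later in this thread suggest.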
From 201421210014 at mail.bnu.edu.cn  Thu Jul 9 01:59:01 2015
From: 201421210014 at mail.bnu.edu.cn (=?UTF-8?B?5a2U5Luk5Yab?=)
Date: Thu, 9 Jul 2015 09:59:01 +0800 (GMT+08:00)
Subject: [pymvpa] Pkg-ExpPsy-PyMVPA Digest, Vol 89, Issue 3
Message-ID:

Hi,

I am sorry, but I did not catch from the digest how to solve the problem; can you show me again? Thank you.

From debian at onerussian.com  Thu Jul 9 02:21:50 2015
From: debian at onerussian.com (Yaroslav Halchenko)
Date: Wed, 8 Jul 2015 22:21:50 -0400
Subject: [pymvpa] Multiple datasets in one hdf5 file
In-Reply-To:
References:
Message-ID: <20150709022150.GX28964@onerussian.com>

On Thu, 09 Jul 2015, 孔令军 wrote:

> Hi,
> Thank you so much for the replies!! I now have 15 subjects' datasets in
> one file, but I seem to still be missing something.
> (I used h5save('/tmp/out.h5', [ds1, ds2]) to make the one file
> consist of the 15 subjects.)

this seems to save only 2 datasets, not 15 (one per subject).

> Working through the hyperalignment tutorial, after loading with h5load,
> z-scoring the individual datasets or inserting subject IDs into the
> individual datasets doesn't work with my combined dataset file (missing
> some attributes).

we can't help unless we know the details... so whenever something doesn't
work, give as much detail as possible (without overflooding the email,
though! ;)): which "some attributes"?

> It definitely works when using the example
> hyperalignment dataset file though.
> Pardon me for the repeated questions on this. Would anyone please let me
> know if there is something I am neglecting to do before saving all
> the datasets into the one file?

once again -- "evil is in the detail" -- what are the errors you are getting? ;)

-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Research Scientist, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

From 201421210014 at mail.bnu.edu.cn  Fri Jul 10 00:39:26 2015
From: 201421210014 at mail.bnu.edu.cn (=?UTF-8?B?5a2U5Luk5Yab?=)
Date: Fri, 10 Jul 2015 08:39:26 +0800 (GMT+08:00)
Subject: [pymvpa] Multiple datasets in one hdf5 file
Message-ID:

Hi,

Thank you so much for the replies!! I now have 15 subjects' datasets in one file, but I seem to still be missing something. (I used h5save('/tmp/out.h5', [ds1, ds2]) to make the one file consist of the 15 subjects, and it can be loaded correctly.)

Working through the hyperalignment tutorial, after loading with h5load, z-scoring the individual datasets or inserting subject IDs into the individual datasets doesn't work with my combined dataset file (missing some attributes). It definitely works when using the example hyperalignment dataset file, though. The code and the error output are shown at the end.

Pardon me for the repeated questions on this. Would anyone please let me know if there is something I am neglecting to do before saving all the datasets into the one file? Thank you very much!

>>> _ = [zscore(ds) for ds in ds_all]
Traceback (most recent call last):
  File "", line 1, in
  File "/usr/lib/pymodules/python2.7/mvpa2/mappers/zscore.py", line 286, in zscore
    zm.train(Dataset(ds))
  File "/usr/lib/pymodules/python2.7/mvpa2/base/dataset.py", line 210, in __init__
    samples = np.array(samples)
ValueError: setting an array element with a sequence.

>>> for i,sd in enumerate(ds_all):
...     sd.sa['subject'] = np.repeat(i, len(sd))
...
Traceback (most recent call last):
  File "", line 2, in
AttributeError: 'list' object has no attribute 'sa'
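Before re-saving anything, it may help to inspect what h5load actually returned, along the lines of the "print ds_all" suggestion in the reply that follows. A minimal diagnostic sketch, assuming the file name used above:

from mvpa2.suite import h5load

ds_all = h5load('/tmp/out.h5')
print('top-level type: %s, length: %d' % (type(ds_all), len(ds_all)))
for i, element in enumerate(ds_all):
    # a Dataset carries an .sa collection; a plain list or array does not
    print('%d: %s, has .sa: %s' % (i, type(element), hasattr(element, 'sa')))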
From debian at onerussian.com  Fri Jul 10 01:13:34 2015
From: debian at onerussian.com (Yaroslav Halchenko)
Date: Thu, 9 Jul 2015 21:13:34 -0400
Subject: [pymvpa] Multiple datasets in one hdf5 file
In-Reply-To:
References:
Message-ID: <20150710011334.GC28964@onerussian.com>

On Fri, 10 Jul 2015, 孔令军 wrote:

> Thank you so much for the replies!! I now have 15 subjects' datasets in
> one file

yeay!

> , but I seem to still be missing something.
> (I used h5save('/tmp/out.h5', [ds1, ds2]) to make the one file
> consist of the 15 subjects, and it can be loaded correctly.)

once again -- why [ds1, ds2]? that sounds like 2 datasets in a list saved...
Where are the 15 subjects?

> Working through the hyperalignment tutorial, after loading with h5load,
> z-scoring the individual datasets or inserting subject IDs into the
> individual datasets doesn't work with my combined dataset file (missing
> some attributes). It definitely works when using the example
> hyperalignment dataset file though. The code and the error output are
> shown at the end.
> Pardon me for the repeated questions on this. Would anyone please let me
> know if there is something I am neglecting to do before saving all
> the datasets into the one file? Thank you very much!

> >>> _ = [zscore(ds) for ds in ds_all]
> Traceback (most recent call last):
>   File "", line 1, in
>   File "/usr/lib/pymodules/python2.7/mvpa2/mappers/zscore.py", line 286, in zscore
>     zm.train(Dataset(ds))
>   File "/usr/lib/pymodules/python2.7/mvpa2/base/dataset.py", line 210, in __init__
>     samples = np.array(samples)
> ValueError: setting an array element with a sequence.

is ds_all something you loaded from that /tmp/out.h5? then it should be a
list of two datasets ([ds1, ds2]), and if those ds1, ds2 are datasets, it
should work.... what is the output of print ds_all? ;)

> >>> for i,sd in enumerate(ds_all):
> ...     sd.sa['subject'] = np.repeat(i, len(sd))
> ...
> Traceback (most recent call last):
>   File "", line 2, in
> AttributeError: 'list' object has no attribute 'sa'

same here -- seems like ds_all is a list of lists, not a list of datasets

-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Research Scientist, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

From 201421210014 at mail.bnu.edu.cn  Mon Jul 13 01:36:10 2015
From: 201421210014 at mail.bnu.edu.cn (=?UTF-8?B?5a2U5Luk5Yab?=)
Date: Mon, 13 Jul 2015 09:36:10 +0800 (GMT+08:00)
Subject: [pymvpa] Multiple datasets in one hdf5 file
Message-ID:

Hi,

The 15 subjects are my full data set; I only used 2 of them for a quick try. Sorry for not mentioning that.

I have also looked into where the problem with my data is: my ds_all is a list of datasets, not a list of lists. When I index ds_all[*], it shows me the detailed data rather than a short description of the dataset. How can I make it right? I just used the h5save() function without setting any parameters.

Thanks very much for your help.

From debian at onerussian.com  Mon Jul 13 03:08:57 2015
From: debian at onerussian.com (Yaroslav Halchenko)
Date: Sun, 12 Jul 2015 23:08:57 -0400
Subject: [pymvpa] Multiple datasets in one hdf5 file
In-Reply-To:
References:
Message-ID: <20150713030857.GG28964@onerussian.com>

On Mon, 13 Jul 2015, 孔令军 wrote:

> Hi,
> The 15 subjects are my full data set; I only used 2 of them for a quick try.
> Sorry for not mentioning that.
> I have also looked into where the problem with my data is: my ds_all is a
> list of datasets, not a list of lists.
> When I index ds_all[*], it shows me the detailed data rather than a short
> description of the dataset.
> How can I make it right?
just share that dataset, and I will help you to figure it out. -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From boly.melanie at gmail.com Wed Jul 15 03:02:57 2015 From: boly.melanie at gmail.com (Melanie Boly) Date: Tue, 14 Jul 2015 22:02:57 -0500 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release Message-ID: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> Dear, I just downloaded the PyMVPA-upstream-2.4.0 release and was trying out the tutorial but cannot find anywhere the LinearCSVMC module. I saw in the online documentation it should be in: mvpa2.clfs.svm.LinearCSVMC but I cannot find this structure in the python modules. Could you please help me on how to use the SVM classifier in this release and what should be the equivalent module to call/import then? Thank you so much in advance, Melanie Boly, M.D. Ph.D. Assistant Scientist University of Wisconsin, Madison, USA From debian at onerussian.com Wed Jul 15 03:46:57 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Tue, 14 Jul 2015 23:46:57 -0400 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> Message-ID: <20150715034657.GJ28964@onerussian.com> On Tue, 14 Jul 2015, Melanie Boly wrote: > Dear, > I just downloaded the PyMVPA-upstream-2.4.0 release and was trying out > the tutorial but cannot find anywhere the LinearCSVMC module. I saw in > the online documentation it should be in: > mvpa2.clfs.svm.LinearCSVMC > but I cannot find this structure in the python modules. > Could you please help me on how to use the SVM classifier in this > release and what should be the equivalent module to call/import then? > Thank you so much in advance, Hi Melanie, give us more detail - what is your OS - how did you "Download" PyMVPA-upstream-2.4.0 release? if "sources" only, did you see/follow smth like http://www.pymvpa.org/installation.html#alternative-build-procedure ? -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From boly.melanie at gmail.com Wed Jul 15 13:37:21 2015 From: boly.melanie at gmail.com (Melanie Boly) Date: Wed, 15 Jul 2015 08:37:21 -0500 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: <20150715034657.GJ28964@onerussian.com> References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> <20150715034657.GJ28964@onerussian.com> Message-ID: Dear Yaroslav, I use a MacOsX Yosemite 10.10.4 I downloaded the release from https://github.com/PyMVPA/PyMVPA/tags I did try once again the alternative procedure installation just now; everything works until I try to call the LinearCSVMC function; the only ones that are available in my python workspace are LinearKernel & LinearLSKernel. And when I try to call the module sum to import libsvm or sg it does not recognize these names.. 
Thanks so much for your help, Melanie On Tue, Jul 14, 2015 at 10:46 PM, Yaroslav Halchenko wrote: > > On Tue, 14 Jul 2015, Melanie Boly wrote: > >> Dear, > >> I just downloaded the PyMVPA-upstream-2.4.0 release and was trying out >> the tutorial but cannot find anywhere the LinearCSVMC module. I saw in >> the online documentation it should be in: > >> mvpa2.clfs.svm.LinearCSVMC > >> but I cannot find this structure in the python modules. >> Could you please help me on how to use the SVM classifier in this >> release and what should be the equivalent module to call/import then? >> Thank you so much in advance, > > Hi Melanie, > > give us more detail > > - what is your OS > - how did you "Download" PyMVPA-upstream-2.4.0 release? > if "sources" only, did you see/follow smth like > http://www.pymvpa.org/installation.html#alternative-build-procedure > > ? > > -- > Yaroslav O. Halchenko, Ph.D. > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa From debian at onerussian.com Wed Jul 15 13:48:02 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Wed, 15 Jul 2015 09:48:02 -0400 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> <20150715034657.GJ28964@onerussian.com> Message-ID: <20150715134802.GL28964@onerussian.com> On Wed, 15 Jul 2015, Melanie Boly wrote: > Dear Yaroslav, > I use a MacOsX Yosemite 10.10.4 > I downloaded the release from https://github.com/PyMVPA/PyMVPA/tags > I did try once again the alternative procedure installation just now; so you did: cd PyMVPA make and that one completed without errors? if not -- cut/paste output if completed without errors you must have got LinearCSVMC ;) just paste output from above commands > everything works until I try to call the LinearCSVMC function; the > only ones that are available in my python workspace are LinearKernel & > LinearLSKernel. > And when I try to call the module sum to import libsvm or sg it does > not recognize these names.. for sg -- you would need shogun installed... probably easiest would be first to resolve this issue with libsvm -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. 
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From boly.melanie at gmail.com Wed Jul 15 13:54:34 2015 From: boly.melanie at gmail.com (Melanie Boly) Date: Wed, 15 Jul 2015 08:54:34 -0500 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: <20150715134802.GL28964@onerussian.com> References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> <20150715034657.GJ28964@onerussian.com> <20150715134802.GL28964@onerussian.com> Message-ID: Dear Yaroslav, the make script completed but I had a long error indeed Here it is: ----------------------------------------------------------------------------------------------------------------------------------------------------------- $ sudo make fatal: Not a git repository (or any of the parent directories): .git python setup.py config --noisy running config python setup.py build_ext --inplace running build_ext running build_src build_src building extension "mvpa2.clfs.libsmlrc.smlrc" sources building extension "mvpa2.clfs.libsvmc._svmc" sources building data_files sources build_src: building npy-pkg config files customize UnixCCompiler customize UnixCCompiler using build_ext customize UnixCCompiler customize UnixCCompiler using build_ext building 'mvpa2.clfs.libsmlrc.smlrc' extension compiling C sources C compiler: cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe compile options: '-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c' cc: mvpa2/clfs/libsmlrc/smlr.c mvpa2/clfs/libsmlrc/smlr.c:172:10: warning: implicit conversion loses integer precision: 'long long' to 'unsigned int' [-Wshorten-64-to-32] srand (seed); ~~~~~ ^~~~ mvpa2/clfs/libsmlrc/smlr.c:342:10: warning: implicit conversion loses integer precision: 'long' to 'int' [-Wshorten-64-to-32] return cycle; ~~~~~~ ^~~~~ 2 warnings generated. mvpa2/clfs/libsmlrc/smlr.c:172:10: warning: implicit conversion loses integer precision: 'long long' to 'unsigned int' [-Wshorten-64-to-32] srand (seed); ~~~~~ ^~~~ 1 warning generated. cc -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. 
build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsmlrc/smlr.o -lm -o mvpa2/clfs/libsmlrc/smlrc.so -bundle building 'mvpa2.clfs.libsvmc._svmc' extension compiling C++ sources C compiler: c++ -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe creating build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsvmc compile options: '-I3rd/libsvm -I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c' c++: 3rd/libsvm/svm.cpp c++: mvpa2/clfs/libsvmc/svmc_wrap.cpp In file included from mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: In file included from /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/arrayobject.h:4: In file included from /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarrayobject.h:17: In file included from /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarraytypes.h:1760: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] #warning "Using deprecated NumPy API, disable it by " \ ^ mvpa2/clfs/libsvmc/svmc_wrap.cpp:3615:15: warning: implicit conversion loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' [-Wshorten-64-to-32] int length = PyList_Size(indices); ~~~~~~ ^~~~~~~~~~~~~~~~~~~~ 2 warnings generated. In file included from mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: In file included from /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/arrayobject.h:4: In file included from /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarrayobject.h:17: In file included from /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarraytypes.h:1760: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] #warning "Using deprecated NumPy API, disable it by " \ ^ 1 warning generated. c++ -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. build/temp.macosx-10.10-intel-2.7/3rd/libsvm/svm.o build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsvmc/svmc_wrap.o -o mvpa2/clfs/libsvmc/_svmc.so -bundle touch build-stamp ---------------------------------------------------------------------------------------------------- Any feedback appreciated. I do have Git installed as well as SWIG and Makeports on my Mac though Thanks a lot! Melanie On Wed, Jul 15, 2015 at 8:48 AM, Yaroslav Halchenko wrote: > > On Wed, 15 Jul 2015, Melanie Boly wrote: > >> Dear Yaroslav, >> I use a MacOsX Yosemite 10.10.4 >> I downloaded the release from https://github.com/PyMVPA/PyMVPA/tags >> I did try once again the alternative procedure installation just now; > > so you did: > > cd PyMVPA > make > > and that one completed without errors? 
if not -- cut/paste output > > if completed without errors you must have got LinearCSVMC ;) just paste > output from above commands > >> everything works until I try to call the LinearCSVMC function; the >> only ones that are available in my python workspace are LinearKernel & >> LinearLSKernel. >> And when I try to call the module sum to import libsvm or sg it does >> not recognize these names.. > > for sg -- you would need shogun installed... probably easiest would be > first to resolve this issue with libsvm > -- > Yaroslav O. Halchenko, Ph.D. > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa From debian at onerussian.com Wed Jul 15 14:05:25 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Wed, 15 Jul 2015 10:05:25 -0400 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> <20150715034657.GJ28964@onerussian.com> <20150715134802.GL28964@onerussian.com> Message-ID: <20150715140525.GM28964@onerussian.com> On Wed, 15 Jul 2015, Melanie Boly wrote: > Dear Yaroslav, > the make script completed but I had a long error indeed > Here it is: > ----------------------------------------------------------------------------------------------------------------------------------------------------------- > $ sudo make > fatal: Not a git repository (or any of the parent directories): .git > python setup.py config --noisy > running config > python setup.py build_ext --inplace > running build_ext > running build_src > build_src > building extension "mvpa2.clfs.libsmlrc.smlrc" sources > building extension "mvpa2.clfs.libsvmc._svmc" sources > building data_files sources > build_src: building npy-pkg config files > customize UnixCCompiler > customize UnixCCompiler using build_ext > customize UnixCCompiler > customize UnixCCompiler using build_ext > building 'mvpa2.clfs.libsmlrc.smlrc' extension > compiling C sources > C compiler: cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 > -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv > -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes > -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes > -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe > compile options: > '-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include > -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 > -c' > cc: mvpa2/clfs/libsmlrc/smlr.c > mvpa2/clfs/libsmlrc/smlr.c:172:10: warning: implicit conversion loses > integer precision: 'long long' to 'unsigned int' [-Wshorten-64-to-32] > srand (seed); > ~~~~~ ^~~~ > mvpa2/clfs/libsmlrc/smlr.c:342:10: warning: implicit conversion loses > integer precision: 'long' to 'int' [-Wshorten-64-to-32] > return cycle; > ~~~~~~ ^~~~~ > 2 warnings generated. > mvpa2/clfs/libsmlrc/smlr.c:172:10: warning: implicit conversion loses > integer precision: 'long long' to 'unsigned int' [-Wshorten-64-to-32] > srand (seed); > ~~~~~ ^~~~ > 1 warning generated. 
> cc -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. > build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsmlrc/smlr.o -lm -o > mvpa2/clfs/libsmlrc/smlrc.so -bundle > building 'mvpa2.clfs.libsvmc._svmc' extension > compiling C++ sources > C compiler: c++ -fno-strict-aliasing -fno-common -dynamic -arch x86_64 > -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv > -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wshorten-64-to-32 -DNDEBUG -g > -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 > -arch i386 -pipe > creating build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsvmc > compile options: '-I3rd/libsvm > -I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include > -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 > -c' > c++: 3rd/libsvm/svm.cpp > c++: mvpa2/clfs/libsvmc/svmc_wrap.cpp > In file included from mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: > In file included from > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/arrayobject.h:4: > In file included from > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarrayobject.h:17: > In file included from > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarraytypes.h:1760: > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: > warning: "Using deprecated NumPy API, disable it by " > "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] > #warning "Using deprecated NumPy API, disable it by " \ > ^ > mvpa2/clfs/libsvmc/svmc_wrap.cpp:3615:15: warning: implicit conversion > loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' > [-Wshorten-64-to-32] > int length = PyList_Size(indices); > ~~~~~~ ^~~~~~~~~~~~~~~~~~~~ > 2 warnings generated. > In file included from mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: > In file included from > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/arrayobject.h:4: > In file included from > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarrayobject.h:17: > In file included from > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarraytypes.h:1760: > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: > warning: "Using deprecated NumPy API, disable it by " > "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] > #warning "Using deprecated NumPy API, disable it by " \ > ^ > 1 warning generated. > c++ -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. > build/temp.macosx-10.10-intel-2.7/3rd/libsvm/svm.o > build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsvmc/svmc_wrap.o -o > mvpa2/clfs/libsvmc/_svmc.so -bundle > touch build-stamp actually those are all just warnings and it generated bindings just fine. So what happens if you run in that directory: MVPA_DEBUG=EXT.* PYTHONPATH=$PWD python -c 'from mvpa2.clfs.svm import LinearCSVMC; print(LinearCSVMC)' -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. 
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From boly.melanie at gmail.com Wed Jul 15 14:13:19 2015 From: boly.melanie at gmail.com (Melanie Boly) Date: Wed, 15 Jul 2015 09:13:19 -0500 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: <20150715140525.GM28964@onerussian.com> References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> <20150715034657.GJ28964@onerussian.com> <20150715134802.GL28964@onerussian.com> <20150715140525.GM28964@onerussian.com> Message-ID: Dear Yaroslav, I just did it, and here is the message: ----------------------------------------- $ MVPA_DEBUG=EXT.* PYTHONPATH=$PWD python -c 'from mvpa2.clfs.svm import LinearCSVMC; print(LinearCSVMC)' [EXT ] DBG: Checking for the presence of running ipython env [EXT ] DBG: Presence of running ipython env is NOT verified. Caught exception was: Not running in IPython session [EXT ] DBG: Checking for the presence of numpy [EXT ] DBG: Presence of numpy is verified [EXT ] DBG: Checking for the presence of scipy [EXT ] DBG: Skip retesting for 'numpy'. [EXT ] DBG: Presence of scipy is verified [EXT ] DBG: Checking for the presence of running ipython env [EXT ] DBG: Presence of running ipython env is NOT verified. Caught exception was: Not running in IPython session [EXT ] DBG: Checking for the presence of matplotlib [EXT ] DBG: Presence of matplotlib is verified [EXT ] DBG: Skip retesting for 'running ipython env'. [EXT ] DBG: Skip retesting for 'scipy'. [EXT ] DBG: Skip retesting for 'scipy'. [EXT ] DBG: Skip retesting for 'scipy'. [EXT ] DBG: Skip retesting for 'scipy'. [EXT ] DBG: Checking for the presence of good scipy.stats.rdist [EXT ] DBG: Presence of good scipy.stats.rdist is NOT verified. Caught exception was: scipy.stats carries misbehaving rdist distribution [EXT ] DBG: Fixing up scipy.stats.rdist [EXT ] DBG: Checking for the presence of good scipy.stats.rdist /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/quadpack.py:293: UserWarning: Extremely bad integrand behavior occurs at some points of the integration interval. warnings.warn(msg) [EXT ] DBG: Presence of good scipy.stats.rdist is verified [EXT ] DBG: Checking for the presence of good scipy.stats.rv_discrete.ppf [EXT ] DBG: Presence of good scipy.stats.rv_discrete.ppf is verified [EXT ] DBG: Checking for the presence of good scipy.stats.rv_continuous._reduce_func(floc,fscale) [EXT ] DBG: Presence of good scipy.stats.rv_continuous._reduce_func(floc,fscale) is verified [EXT ] DBG: Checking for the presence of pylab [EXT ] DBG: Skip retesting for 'matplotlib'. [EXT ] DBG: Presence of pylab is verified [EXT ] DBG: Skip retesting for 'scipy'. [EXT ] DBG: Skip retesting for 'scipy'. [EXT ] DBG: Checking for the presence of shogun [EXT ] DBG: Presence of shogun is NOT verified. Caught exception was: No module named shogun.Classifier [EXT ] DBG: Checking for the presence of libsvm [EXT ] DBG: Presence of libsvm is NOT verified. Caught exception was: cannot import name C_SVC WARNING: None of SVM implementation libraries was found * Please note: warnings are printed only once, but underlying problem might occur many times * Traceback (most recent call last): File "", line 1, in ImportError: cannot import name LinearCSVMC ----------------- I am very grateful for your help. 
Very best wishes,
Melanie
From debian at onerussian.com  Wed Jul 15 15:23:38 2015
From: debian at onerussian.com (Yaroslav Halchenko)
Date: Wed, 15 Jul 2015 11:23:38 -0400
Subject: [pymvpa] Nick? Re: LinearCSVMC not found in PyMVPA-upstream-2.4.0 release
In-Reply-To:
References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> <20150715034657.GJ28964@onerussian.com> <20150715134802.GL28964@onerussian.com> <20150715140525.GM28964@onerussian.com>
Message-ID: <20150715152338.GN28964@onerussian.com>

On Wed, 15 Jul 2015, Melanie Boly wrote:

> [EXT ] DBG: Presence of libsvm is NOT verified. Caught exception was:
> cannot import name C_SVC
> WARNING: None of SVM implementation libraries was found
> * Please note: warnings are printed only once, but underlying problem
> might occur many times *
> Traceback (most recent call last):
>   File "", line 1, in
> ImportError: cannot import name LinearCSVMC

heh ... I was able to replicate this and am not yet sure how to overcome it.
Maybe Nick would know better (I am not using OSX myself).

Meanwhile, if you don't mind spending some bandwidth and probably 20 min of
waiting time, I would recommend you give a shot to our NeuroDebian virtualbox
appliance, which is very easy to "deploy":

1. install virtualbox
2. download the appliance, selecting OSX as OS and the mirror closest to you
   on http://neuro.debian.net/
3. when running it for the first time, it will offer at the end a simple
   dialog with multiple collections of stuff to install, one of which is the
   "PyMVPA Tutorial"; it will install everything necessary for you

-- 
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Research Scientist, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

From boly.melanie at gmail.com  Wed Jul 15 15:33:58 2015
From: boly.melanie at gmail.com (Melanie Boly)
Date: Wed, 15 Jul 2015 10:33:58 -0500
Subject: Re: [pymvpa] Nick? Re: LinearCSVMC not found in PyMVPA-upstream-2.4.0 release
In-Reply-To: <20150715152338.GN28964@onerussian.com>
References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> <20150715034657.GJ28964@onerussian.com> <20150715134802.GL28964@onerussian.com> <20150715140525.GM28964@onerussian.com> <20150715152338.GN28964@onerussian.com>
Message-ID:

ok thanks, will try that too!
vbw
Melanie
From n.n.oosterhof at googlemail.com  Wed Jul 15 16:45:09 2015
From: n.n.oosterhof at googlemail.com (Nick Oosterhof)
Date: Wed, 15 Jul 2015 18:45:09 +0200
Subject: [pymvpa] Nick? Re: LinearCSVMC not found in PyMVPA-upstream-2.4.0 release
In-Reply-To: <20150715152338.GN28964@onerussian.com>
References: <9B8C9AAD-EFDA-4C2E-8BB9-C7CAB9B3705F@gmail.com> <20150715034657.GJ28964@onerussian.com> <20150715134802.GL28964@onerussian.com> <20150715140525.GM28964@onerussian.com> <20150715152338.GN28964@onerussian.com>
Message-ID: <9C26B6E1-6DFE-474C-9E64-64F995737517@googlemail.com>

> On 15 Jul 2015, at 17:23, Yaroslav Halchenko wrote:
>
> heh ... I was able to replicate this and am not yet sure how to overcome it.
> Maybe Nick would know better (I am not using OSX myself)

Yes, at least on 2.4.0 (I haven't checked other versions) a basic install does not provide any SVM implementation. On my computer (OS X 10.10) it works fine, but unfortunately I don't remember the details of how I got it to work in the past.

Starting from scratch following the installation instructions [1], running "make 3rd" works fine, but then trying to build the SVM stuff does not work:

- "python setup.py build_ext": after this, running "from mvpa2.suite import *" reports "Failed to load fast implementation of SMLR." and "SMLR: C implementation is not available."
- "python setup.py build_ext --with-libsvm": it fails to build: "fatal error: 'svm.h' file not found"
- "python setup.py build_ext --with-libsvm -I3rd/libsvm": it also fails to build, but with a different error (full output pasted below).

I can try again tomorrow and see if I can get the svm stuff to work starting from scratch. In the meantime, if anyone has suggestions, or has solved this issue before *and* (unlike me) remembers how they fixed it, it would be appreciated if you could share this with us.
[1] http://www.pymvpa.org/installation.html bash output: $ python setup.py build_ext --with-libsvm running build_ext running build_src build_src building extension "mvpa2.clfs.libsmlrc.smlrc" sources building extension "mvpa2.clfs.libsvmc._svmc" sources building data_files sources build_src: building npy-pkg config files customize UnixCCompiler customize UnixCCompiler using build_ext customize UnixCCompiler #### ['clang', '-DNDEBUG', '-g', '-fwrapv', '-O3', '-Wall', '-Wstrict-prototypes', '-I/usr/local/include', '-I/usr/local/include'] ####### Missing compiler_cxx fix for UnixCCompiler customize UnixCCompiler using build_ext building 'mvpa2.clfs.libsvmc._svmc' extension compiling C++ sources C compiler: clang++ -DNDEBUG -g -fwrapv -O3 -Wall -I/usr/local/include -I/usr/local/include compile options: '-I/usr/include/libsvm-3.0/libsvm -I/usr/include/libsvm-2.0/libsvm -I/usr/include/libsvm -I/usr/local/include/libsvm -I/usr/local/include/libsvm-2.0/libsvm -I/usr/local/include -I/usr/local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c' clang++: build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.cpp build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.cpp:3085:10: fatal error: 'svm.h' file not found #include "svm.h" ^ 1 error generated. build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.cpp:3085:10: fatal error: 'svm.h' file not found #include "svm.h" ^ 1 error generated. error: Command "clang++ -DNDEBUG -g -fwrapv -O3 -Wall -I/usr/local/include -I/usr/local/include -I/usr/include/libsvm-3.0/libsvm -I/usr/include/libsvm-2.0/libsvm -I/usr/include/libsvm -I/usr/local/include/libsvm -I/usr/local/include/libsvm-2.0/libsvm -I/usr/local/include -I/usr/local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.cpp -o build/temp.macosx-10.9-x86_64-2.7/build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.o" failed with exit status 1 $ python setup.py build_ext --with-libsvm -I3rd/libsvm running build_ext running build_src build_src building extension "mvpa2.clfs.libsmlrc.smlrc" sources building extension "mvpa2.clfs.libsvmc._svmc" sources building data_files sources build_src: building npy-pkg config files customize UnixCCompiler customize UnixCCompiler using build_ext customize UnixCCompiler #### ['clang', '-DNDEBUG', '-g', '-fwrapv', '-O3', '-Wall', '-Wstrict-prototypes', '-I/usr/local/include', '-I/usr/local/include'] ####### Missing compiler_cxx fix for UnixCCompiler customize UnixCCompiler using build_ext building 'mvpa2.clfs.libsvmc._svmc' extension compiling C++ sources C compiler: clang++ -DNDEBUG -g -fwrapv -O3 -Wall -I/usr/local/include -I/usr/local/include compile options: '-I/usr/include/libsvm-3.0/libsvm -I/usr/include/libsvm-2.0/libsvm -I/usr/include/libsvm -I/usr/local/include/libsvm -I/usr/local/include/libsvm-2.0/libsvm -I/usr/local/include -I/usr/local/lib/python2.7/site-packages/numpy/core/include -I3rd/libsvm -I/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c' clang++: build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.cpp In file included from build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: In file included from /usr/local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4: In file included from 
/usr/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17:
In file included from /usr/local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1804:
/usr/local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings]
#warning "Using deprecated NumPy API, disable it by " \
 ^
1 warning generated.
creating build/lib.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc
clang++ -bundle -undefined dynamic_lookup -L/usr/local/lib -L/usr/local/opt/sqlite/lib -L/usr/local/lib -I/usr/local/include -I/usr/local/include build/temp.macosx-10.9-x86_64-2.7/build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.o -lsvm -o build/lib.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/_svmc.so -bundle
ld: library not found for -lsvm
clang: error: linker command failed with exit code 1 (use -v to see invocation)
ld: library not found for -lsvm
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: Command "clang++ -bundle -undefined dynamic_lookup -L/usr/local/lib -L/usr/local/opt/sqlite/lib -L/usr/local/lib -I/usr/local/include -I/usr/local/include build/temp.macosx-10.9-x86_64-2.7/build/src.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/svmc_wrap.o -lsvm -o build/lib.macosx-10.9-x86_64-2.7/mvpa2/clfs/libsvmc/_svmc.so -bundle" failed with exit status 1

From mafeilong at gmail.com  Wed Jul 15 16:46:32 2015
From: mafeilong at gmail.com (Feilong Ma)
Date: Wed, 15 Jul 2015 16:46:32 +0000
Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release
In-Reply-To:
References:
Message-ID:

I had a similar problem while installing PyMVPA on Mac OS (10.10.4). I think the problem is related to this line:
https://github.com/PyMVPA/PyMVPA/blob/master/mvpa2/clfs/libsvmc/_svm.py#L22

When I tried to run this line in ipython:
from mvpa2.clfs.libsvmc._svmc import C_SVC, NU_SVC, ONE_CLASS, EPSILON_SVR
what I got is:
ImportError: cannot import name C_SVC

I guess the problem might be related to compiling LibSVM. I vaguely remember there were some error messages with CLANG blah blah. I installed Python and some other packages using Homebrew, in case it is related to this issue.

Best,
Feilong
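A quick way to check the situation Feilong describes is to try importing the compiled libsvm wrapper directly. A minimal diagnostic sketch, assuming it is run from the PyMVPA source directory after a "python setup.py build_ext --inplace" attempt:

import importlib

try:
    # the low-level SWIG wrapper that mvpa2/clfs/libsvmc/_svm.py imports from
    svmc = importlib.import_module('mvpa2.clfs.libsvmc._svmc')
    print('found compiled wrapper at %s' % svmc.__file__)
    print('exposes C_SVC: %s' % hasattr(svmc, 'C_SVC'))
except ImportError as exc:
    # this is the situation reported above: the extension was not built or cannot be loaded
    print('libsvm bindings are not importable: %s' % exc)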
> Melanie > > On Wed, Jul 15, 2015 at 8:48 AM, Yaroslav Halchenko > wrote: > > > > On Wed, 15 Jul 2015, Melanie Boly wrote: > > > >> Dear Yaroslav, > >> I use a MacOsX Yosemite 10.10.4 > >> I downloaded the release from https://github.com/PyMVPA/PyMVPA/tags > >> I did try once again the alternative procedure installation just now; > > > > so you did: > > > > cd PyMVPA > > make > > > > and that one completed without errors? if not -- cut/paste output > > > > if completed without errors you must have got LinearCSVMC ;) just paste > > output from above commands > > > >> everything works until I try to call the LinearCSVMC function; the > >> only ones that are available in my python workspace are LinearKernel & > >> LinearLSKernel. > >> And when I try to call the module sum to import libsvm or sg it does > >> not recognize these names.. > > > > for sg -- you would need shogun installed... probably easiest would be > > first to resolve this issue with libsvm > > -- > > Yaroslav O. Halchenko, Ph.D. > > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > > Research Scientist, Psychological and Brain Sciences Dept. > > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > > WWW: http://www.linkedin.com/in/yarik > > > > _______________________________________________ > > Pkg-ExpPsy-PyMVPA mailing list > > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > > > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > > > > > > ---------- Forwarded message ---------- > From: Yaroslav Halchenko > To: pkg-exppsy-pymvpa at lists.alioth.debian.org > Cc: > Date: Wed, 15 Jul 2015 10:05:25 -0400 > Subject: Re: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 > release > > On Wed, 15 Jul 2015, Melanie Boly wrote: > > > Dear Yaroslav, > > the make script completed but I had a long error indeed > > > Here it is: > > > > ----------------------------------------------------------------------------------------------------------------------------------------------------------- > > > $ sudo make > > > fatal: Not a git repository (or any of the parent directories): .git > > > python setup.py config --noisy > > > running config > > > python setup.py build_ext --inplace > > > running build_ext > > > running build_src > > > build_src > > > building extension "mvpa2.clfs.libsmlrc.smlrc" sources > > > building extension "mvpa2.clfs.libsvmc._svmc" sources > > > building data_files sources > > > build_src: building npy-pkg config files > > > customize UnixCCompiler > > > customize UnixCCompiler using build_ext > > > customize UnixCCompiler > > > customize UnixCCompiler using build_ext > > > building 'mvpa2.clfs.libsmlrc.smlrc' extension > > > compiling C sources > > > C compiler: cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 > > -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv > > -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes > > -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes > > -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe > > > > compile options: > > > '-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include > > > -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 > > -c' > > > cc: mvpa2/clfs/libsmlrc/smlr.c > > > mvpa2/clfs/libsmlrc/smlr.c:172:10: warning: implicit conversion loses > > integer precision: 'long long' to 'unsigned int' [-Wshorten-64-to-32] > > > srand (seed); > > > 
~~~~~ ^~~~ > > > mvpa2/clfs/libsmlrc/smlr.c:342:10: warning: implicit conversion loses > > integer precision: 'long' to 'int' [-Wshorten-64-to-32] > > > return cycle; > > > ~~~~~~ ^~~~~ > > > 2 warnings generated. > > > mvpa2/clfs/libsmlrc/smlr.c:172:10: warning: implicit conversion loses > > integer precision: 'long long' to 'unsigned int' [-Wshorten-64-to-32] > > > srand (seed); > > > ~~~~~ ^~~~ > > > 1 warning generated. > > > cc -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. > > build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsmlrc/smlr.o -lm -o > > mvpa2/clfs/libsmlrc/smlrc.so -bundle > > > building 'mvpa2.clfs.libsvmc._svmc' extension > > > compiling C++ sources > > > C compiler: c++ -fno-strict-aliasing -fno-common -dynamic -arch x86_64 > > -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv > > -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wshorten-64-to-32 -DNDEBUG -g > > -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 > > -arch i386 -pipe > > > > creating build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsvmc > > > compile options: '-I3rd/libsvm > > > -I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include > > > -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 > > -c' > > > c++: 3rd/libsvm/svm.cpp > > > c++: mvpa2/clfs/libsvmc/svmc_wrap.cpp > > > In file included from mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: > > > In file included from > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/arrayobject.h:4: > > > In file included from > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarrayobject.h:17: > > > In file included from > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarraytypes.h:1760: > > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: > > warning: "Using deprecated NumPy API, disable it by " > > "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] > > > #warning "Using deprecated NumPy API, disable it by " \ > > > ^ > > > mvpa2/clfs/libsvmc/svmc_wrap.cpp:3615:15: warning: implicit conversion > > loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' > > [-Wshorten-64-to-32] > > > int length = PyList_Size(indices); > > > ~~~~~~ ^~~~~~~~~~~~~~~~~~~~ > > > 2 warnings generated. > > > In file included from mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: > > > In file included from > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/arrayobject.h:4: > > > In file included from > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarrayobject.h:17: > > > In file included from > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarraytypes.h:1760: > > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: > > warning: "Using deprecated NumPy API, disable it by " > > "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] > > > #warning "Using deprecated NumPy API, disable it by " \ > > > ^ > > > 1 warning generated. > > > c++ -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. 
> > build/temp.macosx-10.10-intel-2.7/3rd/libsvm/svm.o > > build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsvmc/svmc_wrap.o -o > > mvpa2/clfs/libsvmc/_svmc.so -bundle > > > touch build-stamp > > actually those are all just warnings and it generated bindings just > fine. > > > So what happens if you run in that directory: > > MVPA_DEBUG=EXT.* PYTHONPATH=$PWD python -c 'from mvpa2.clfs.svm import > LinearCSVMC; print(LinearCSVMC)' > > -- > Yaroslav O. Halchenko, Ph.D. > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > > > > > ---------- Forwarded message ---------- > From: Melanie Boly > To: Development and support of PyMVPA < > pkg-exppsy-pymvpa at lists.alioth.debian.org> > Cc: > Date: Wed, 15 Jul 2015 09:13:19 -0500 > Subject: Re: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 > release > Dear Yaroslav, I just did it, and here is the message: > > ----------------------------------------- > > $ MVPA_DEBUG=EXT.* PYTHONPATH=$PWD python -c 'from mvpa2.clfs.svm > import LinearCSVMC; print(LinearCSVMC)' > > [EXT ] DBG: Checking for the presence of running ipython env > > [EXT ] DBG: Presence of running ipython env is NOT verified. Caught > exception was: Not running in IPython session > > [EXT ] DBG: Checking for the presence of numpy > > [EXT ] DBG: Presence of numpy is verified > > [EXT ] DBG: Checking for the presence of scipy > > [EXT ] DBG: Skip retesting for 'numpy'. > > [EXT ] DBG: Presence of scipy is verified > > [EXT ] DBG: Checking for the presence of running ipython env > > [EXT ] DBG: Presence of running ipython env is NOT verified. Caught > exception was: Not running in IPython session > > [EXT ] DBG: Checking for the presence of matplotlib > > [EXT ] DBG: Presence of matplotlib is verified > > [EXT ] DBG: Skip retesting for 'running ipython env'. > > [EXT ] DBG: Skip retesting for 'scipy'. > > [EXT ] DBG: Skip retesting for 'scipy'. > > [EXT ] DBG: Skip retesting for 'scipy'. > > [EXT ] DBG: Skip retesting for 'scipy'. > > [EXT ] DBG: Checking for the presence of good scipy.stats.rdist > > [EXT ] DBG: Presence of good scipy.stats.rdist is NOT verified. > Caught exception was: scipy.stats carries misbehaving rdist > distribution > > [EXT ] DBG: Fixing up scipy.stats.rdist > > [EXT ] DBG: Checking for the presence of good scipy.stats.rdist > > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/quadpack.py:293: > UserWarning: Extremely bad integrand behavior occurs at some points of > the > > integration interval. > > warnings.warn(msg) > > [EXT ] DBG: Presence of good scipy.stats.rdist is verified > > [EXT ] DBG: Checking for the presence of good > scipy.stats.rv_discrete.ppf > > [EXT ] DBG: Presence of good scipy.stats.rv_discrete.ppf is verified > > [EXT ] DBG: Checking for the presence of good > scipy.stats.rv_continuous._reduce_func(floc,fscale) > > [EXT ] DBG: Presence of good > scipy.stats.rv_continuous._reduce_func(floc,fscale) is verified > > [EXT ] DBG: Checking for the presence of pylab > > [EXT ] DBG: Skip retesting for 'matplotlib'. > > [EXT ] DBG: Presence of pylab is verified > > [EXT ] DBG: Skip retesting for 'scipy'. > > [EXT ] DBG: Skip retesting for 'scipy'. > > [EXT ] DBG: Checking for the presence of shogun > > [EXT ] DBG: Presence of shogun is NOT verified. 
Caught exception was: > No module named shogun.Classifier > > [EXT ] DBG: Checking for the presence of libsvm > > [EXT ] DBG: Presence of libsvm is NOT verified. Caught exception was: > cannot import name C_SVC > > WARNING: None of SVM implementation libraries was found > > * Please note: warnings are printed only once, but underlying problem > might occur many times * > > Traceback (most recent call last): > > File "", line 1, in > > ImportError: cannot import name LinearCSVMC > > > > ----------------- > I am very grateful for your help. > Very best wishes, > Melanie > > > On Wed, Jul 15, 2015 at 9:05 AM, Yaroslav Halchenko > wrote: > > > > On Wed, 15 Jul 2015, Melanie Boly wrote: > > > >> Dear Yaroslav, > >> the make script completed but I had a long error indeed > > > >> Here it is: > > > >> > ----------------------------------------------------------------------------------------------------------------------------------------------------------- > > > >> $ sudo make > > > >> fatal: Not a git repository (or any of the parent directories): .git > > > >> python setup.py config --noisy > > > >> running config > > > >> python setup.py build_ext --inplace > > > >> running build_ext > > > >> running build_src > > > >> build_src > > > >> building extension "mvpa2.clfs.libsmlrc.smlrc" sources > > > >> building extension "mvpa2.clfs.libsvmc._svmc" sources > > > >> building data_files sources > > > >> build_src: building npy-pkg config files > > > >> customize UnixCCompiler > > > >> customize UnixCCompiler using build_ext > > > >> customize UnixCCompiler > > > >> customize UnixCCompiler using build_ext > > > >> building 'mvpa2.clfs.libsmlrc.smlrc' extension > > > >> compiling C sources > > > >> C compiler: cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 > >> -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv > >> -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes > >> -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes > >> -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe > > > > > >> compile options: > >> > '-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include > >> > -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 > >> -c' > > > >> cc: mvpa2/clfs/libsmlrc/smlr.c > > > >> mvpa2/clfs/libsmlrc/smlr.c:172:10: warning: implicit conversion loses > >> integer precision: 'long long' to 'unsigned int' [-Wshorten-64-to-32] > > > >> srand (seed); > > > >> ~~~~~ ^~~~ > > > >> mvpa2/clfs/libsmlrc/smlr.c:342:10: warning: implicit conversion loses > >> integer precision: 'long' to 'int' [-Wshorten-64-to-32] > > > >> return cycle; > > > >> ~~~~~~ ^~~~~ > > > >> 2 warnings generated. > > > >> mvpa2/clfs/libsmlrc/smlr.c:172:10: warning: implicit conversion loses > >> integer precision: 'long long' to 'unsigned int' [-Wshorten-64-to-32] > > > >> srand (seed); > > > >> ~~~~~ ^~~~ > > > >> 1 warning generated. > > > >> cc -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. 
> >> build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsmlrc/smlr.o -lm -o > >> mvpa2/clfs/libsmlrc/smlrc.so -bundle > > > >> building 'mvpa2.clfs.libsvmc._svmc' extension > > > >> compiling C++ sources > > > >> C compiler: c++ -fno-strict-aliasing -fno-common -dynamic -arch x86_64 > >> -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv > >> -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wshorten-64-to-32 -DNDEBUG -g > >> -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 > >> -arch i386 -pipe > > > > > >> creating build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsvmc > > > >> compile options: '-I3rd/libsvm > >> > -I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include > >> > -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 > >> -c' > > > >> c++: 3rd/libsvm/svm.cpp > > > >> c++: mvpa2/clfs/libsvmc/svmc_wrap.cpp > > > >> In file included from mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: > > > >> In file included from > >> > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/arrayobject.h:4: > > > >> In file included from > >> > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarrayobject.h:17: > > > >> In file included from > >> > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarraytypes.h:1760: > > > >> > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: > >> warning: "Using deprecated NumPy API, disable it by " > >> "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] > > > >> #warning "Using deprecated NumPy API, disable it by " \ > > > >> ^ > > > >> mvpa2/clfs/libsvmc/svmc_wrap.cpp:3615:15: warning: implicit conversion > >> loses integer precision: 'Py_ssize_t' (aka 'long') to 'int' > >> [-Wshorten-64-to-32] > > > >> int length = PyList_Size(indices); > > > >> ~~~~~~ ^~~~~~~~~~~~~~~~~~~~ > > > >> 2 warnings generated. > > > >> In file included from mvpa2/clfs/libsvmc/svmc_wrap.cpp:3087: > > > >> In file included from > >> > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/arrayobject.h:4: > > > >> In file included from > >> > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarrayobject.h:17: > > > >> In file included from > >> > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/ndarraytypes.h:1760: > > > >> > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: > >> warning: "Using deprecated NumPy API, disable it by " > >> "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-W#warnings] > > > >> #warning "Using deprecated NumPy API, disable it by " \ > > > >> ^ > > > >> 1 warning generated. > > > >> c++ -bundle -undefined dynamic_lookup -arch x86_64 -arch i386 -Wl,-F. > >> build/temp.macosx-10.10-intel-2.7/3rd/libsvm/svm.o > >> build/temp.macosx-10.10-intel-2.7/mvpa2/clfs/libsvmc/svmc_wrap.o -o > >> mvpa2/clfs/libsvmc/_svmc.so -bundle > > > >> touch build-stamp > > > > actually those are all just warnings and it generated bindings just > > fine. 
> >
> > So what happens if you run in that directory:
> >
> > MVPA_DEBUG=EXT.* PYTHONPATH=$PWD python -c 'from mvpa2.clfs.svm import LinearCSVMC; print(LinearCSVMC)'
> >
> > --
> > Yaroslav O. Halchenko, Ph.D.
> > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
> > Research Scientist, Psychological and Brain Sciences Dept.
> > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
> > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
> > WWW: http://www.linkedin.com/in/yarik

From 201421210014 at mail.bnu.edu.cn  Fri Jul 17 09:27:42 2015
From: 201421210014 at mail.bnu.edu.cn (孔令军)
Date: Fri, 17 Jul 2015 17:27:42 +0800 (GMT+08:00)
Subject: [pymvpa] Multiple datasets in one hdf5 file
Message-ID: 

Hi,

My data consists of 8 runs. I used the function h5save() to combine them, so errors occurred.
But how do I organize the multiple runs of data into one file (.nii)?
Thank you

(Non-text attachments: hyperalignment.png, mydata.png)

From n.n.oosterhof at googlemail.com  Fri Jul 17 09:54:33 2015
From: n.n.oosterhof at googlemail.com (Nick Oosterhof)
Date: Fri, 17 Jul 2015 11:54:33 +0200
Subject: [pymvpa] Multiple datasets in one hdf5 file
In-Reply-To: 
References: 
Message-ID: <1E14EB41-7CEE-4684-9A4F-37A081D0C758@googlemail.com>

> On 17 Jul 2015, at 11:27, 孔令军 <201421210014 at mail.bnu.edu.cn> wrote:
>
> My data consists of 8 runs. I used the function h5save() to combine them, so errors occurred.

What errors occurred? What are you trying to achieve? Note that the amount of information that you provide is very minimal, and insufficient for me to give better suggestions. Please provide more details. You may want to read "How to ask questions the smart way" [1].

> But how do I organize the multiple runs of data into one file (.nii)?

Do you want to combine data from multiple nifti files into a single nifti file? If so, you could use fmri_dataset to load each nifti file, then stack them using vstack, and save the result in a nifti file using map2fmri.

[1] http://www.catb.org/esr/faqs/smart-questions.html

From n.n.oosterhof at googlemail.com  Sun Jul 19 13:44:23 2015
From: n.n.oosterhof at googlemail.com (Nick Oosterhof)
Date: Sun, 19 Jul 2015 15:44:23 +0200
Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release
In-Reply-To: 
References: 
Message-ID: 

> On 15 Jul 2015, at 18:46, Feilong Ma wrote:
>
> I had a similar problem while installing PyMVPA on Mac OS (10.10.4).
> I think the problem is related to this line:
> https://github.com/PyMVPA/PyMVPA/blob/master/mvpa2/clfs/libsvmc/_svm.py#L22
>
> When I tried to run this line in ipython
> from mvpa2.clfs.libsvmc._svmc import C_SVC, NU_SVC, ONE_CLASS, EPSILON_SVR
> What I got is:
> ImportError: cannot import name C_SVC

I get the same error. Briefly (see below for details), it seems due to a change in SWIG, with later versions giving issues.

When running "python setup.py build_ext" and copying over the .o and .so files from the build directory to PyMVPA's root directory (across the corresponding subdirectories), the following reproduces the error directly:

    python -c "from mvpa2.clfs.libsvmc._svmc import C_SVC"

Strangely enough, the following works for the failing PyMVPA installation (but not for the working one):

    python -c "from mvpa2.clfs.libsvmc._svmc import C_SVC_swigconstant"

Digging a bit further, the mvpa2/clfs/libsvmc/svmc.py file differs between my "working" (generated using SWIG 3.0.2) and "failing" (SWIG 3.0.6) PyMVPA setup. One difference is that the working version has contents such as

    C_SVC = _svmc.C_SVC

whereas the failing version has extra lines that include "swigconstant":

    _svmc.C_SVC_swigconstant(_svmc)
    C_SVC = _svmc.C_SVC

(For completeness I'm including the full content of both versions below.)

Tracing this back further, I compiled swig from source, both for the latest version on github and for version 3.0.0 (version 3.0.2 gave an error when compiling). When using 3.0.0, the import works fine; with 3.0.6 or the latest (3.0.7 development) it breaks.

> I guess the problem might be related to compiling LibSVM. I vaguely remember there was some error messages with CLANG blah blah.

I installed GCC 5.1 and get the same problem as when using CLANG.

To summarize, the following worked for me to get libsvm to work on OS X Yosemite:

- clone swig from https://github.com/swig/swig, then "git checkout tags/rel-3.0.0"
- in the swig directory, run "autoconf && ./configure && make && sudo make install" (although it gives an error when installing the man-pages due to missing yodl2man, the binaries are installed fine). This requires autoconf, automake and libconf.
- in the PyMVPA directory, run "python setup.py build_ext"
- copy the .so and .o files from the build directory to the PyMVPA root directory, for example in the PyMVPA root directory do "for ext in .so .o; do for i in `find build -iname "*${ext}"`; do j=`echo $i | cut -f3- -d/`; cp $i $j; done; done"

If anyone can confirm that using an earlier version of SWIG fixes the problem, that would be great. In that case I can also raise the issue with the developers.

(Below: contents of mvpa2/clfs/libsvmc/svmc.py for working and failing libsvm in PyMVPA)

################
# *Failing* mvpa2/clfs/libsvmc/svmc.py
################

# This file was automatically generated by SWIG (http://www.swig.org).
# Version 3.0.6
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info if version_info >= (2, 6, 0): def swig_import_helper(): from os.path import dirname import imp fp = None try: fp, pathname, description = imp.find_module('_svmc', [dirname(__file__)]) except ImportError: import _svmc return _svmc if fp is not None: try: _mod = imp.load_module('_svmc', fp, pathname, description) finally: fp.close() return _mod _svmc = swig_import_helper() del swig_import_helper else: import _svmc del version_info try: _swig_property = property except NameError: pass # Python < 2.2 doesn't have 'property'. def _swig_setattr_nondynamic(self, class_type, name, value, static=1): if (name == "thisown"): return self.this.own(value) if (name == "this"): if type(value).__name__ == 'SwigPyObject': self.__dict__[name] = value return method = class_type.__swig_setmethods__.get(name, None) if method: return method(self, value) if (not static): if _newclass: object.__setattr__(self, name, value) else: self.__dict__[name] = value else: raise AttributeError("You cannot add attributes to %s" % self) def _swig_setattr(self, class_type, name, value): return _swig_setattr_nondynamic(self, class_type, name, value, 0) def _swig_getattr_nondynamic(self, class_type, name, static=1): if (name == "thisown"): return self.this.own() method = class_type.__swig_getmethods__.get(name, None) if method: return method(self) if (not static): return object.__getattr__(self, name) else: raise AttributeError(name) def _swig_getattr(self, class_type, name): return _swig_getattr_nondynamic(self, class_type, name, 0) def _swig_repr(self): try: strthis = "proxy of " + self.this.__repr__() except: strthis = "" return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) try: _object = object _newclass = 1 except AttributeError: class _object: pass _newclass = 0 _svmc.__version___swigconstant(_svmc) __version__ = _svmc.__version__ _svmc.C_SVC_swigconstant(_svmc) C_SVC = _svmc.C_SVC _svmc.NU_SVC_swigconstant(_svmc) NU_SVC = _svmc.NU_SVC _svmc.ONE_CLASS_swigconstant(_svmc) ONE_CLASS = _svmc.ONE_CLASS _svmc.EPSILON_SVR_swigconstant(_svmc) EPSILON_SVR = _svmc.EPSILON_SVR _svmc.NU_SVR_swigconstant(_svmc) NU_SVR = _svmc.NU_SVR _svmc.LINEAR_swigconstant(_svmc) LINEAR = _svmc.LINEAR _svmc.POLY_swigconstant(_svmc) POLY = _svmc.POLY _svmc.RBF_swigconstant(_svmc) RBF = _svmc.RBF _svmc.SIGMOID_swigconstant(_svmc) SIGMOID = _svmc.SIGMOID _svmc.PRECOMPUTED_swigconstant(_svmc) PRECOMPUTED = _svmc.PRECOMPUTED class svm_parameter(_object): __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, svm_parameter, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, svm_parameter, name) __repr__ = _swig_repr __swig_setmethods__["svm_type"] = _svmc.svm_parameter_svm_type_set __swig_getmethods__["svm_type"] = _svmc.svm_parameter_svm_type_get if _newclass: svm_type = _swig_property(_svmc.svm_parameter_svm_type_get, _svmc.svm_parameter_svm_type_set) __swig_setmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_set __swig_getmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_get if _newclass: kernel_type = _swig_property(_svmc.svm_parameter_kernel_type_get, _svmc.svm_parameter_kernel_type_set) __swig_setmethods__["degree"] = _svmc.svm_parameter_degree_set __swig_getmethods__["degree"] = _svmc.svm_parameter_degree_get if _newclass: degree = _swig_property(_svmc.svm_parameter_degree_get, _svmc.svm_parameter_degree_set) __swig_setmethods__["gamma"] = _svmc.svm_parameter_gamma_set __swig_getmethods__["gamma"] = 
_svmc.svm_parameter_gamma_get if _newclass: gamma = _swig_property(_svmc.svm_parameter_gamma_get, _svmc.svm_parameter_gamma_set) __swig_setmethods__["coef0"] = _svmc.svm_parameter_coef0_set __swig_getmethods__["coef0"] = _svmc.svm_parameter_coef0_get if _newclass: coef0 = _swig_property(_svmc.svm_parameter_coef0_get, _svmc.svm_parameter_coef0_set) __swig_setmethods__["cache_size"] = _svmc.svm_parameter_cache_size_set __swig_getmethods__["cache_size"] = _svmc.svm_parameter_cache_size_get if _newclass: cache_size = _swig_property(_svmc.svm_parameter_cache_size_get, _svmc.svm_parameter_cache_size_set) __swig_setmethods__["eps"] = _svmc.svm_parameter_eps_set __swig_getmethods__["eps"] = _svmc.svm_parameter_eps_get if _newclass: eps = _swig_property(_svmc.svm_parameter_eps_get, _svmc.svm_parameter_eps_set) __swig_setmethods__["C"] = _svmc.svm_parameter_C_set __swig_getmethods__["C"] = _svmc.svm_parameter_C_get if _newclass: C = _swig_property(_svmc.svm_parameter_C_get, _svmc.svm_parameter_C_set) __swig_setmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_set __swig_getmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_get if _newclass: nr_weight = _swig_property(_svmc.svm_parameter_nr_weight_get, _svmc.svm_parameter_nr_weight_set) __swig_setmethods__["weight_label"] = _svmc.svm_parameter_weight_label_set __swig_getmethods__["weight_label"] = _svmc.svm_parameter_weight_label_get if _newclass: weight_label = _swig_property(_svmc.svm_parameter_weight_label_get, _svmc.svm_parameter_weight_label_set) __swig_setmethods__["weight"] = _svmc.svm_parameter_weight_set __swig_getmethods__["weight"] = _svmc.svm_parameter_weight_get if _newclass: weight = _swig_property(_svmc.svm_parameter_weight_get, _svmc.svm_parameter_weight_set) __swig_setmethods__["nu"] = _svmc.svm_parameter_nu_set __swig_getmethods__["nu"] = _svmc.svm_parameter_nu_get if _newclass: nu = _swig_property(_svmc.svm_parameter_nu_get, _svmc.svm_parameter_nu_set) __swig_setmethods__["p"] = _svmc.svm_parameter_p_set __swig_getmethods__["p"] = _svmc.svm_parameter_p_get if _newclass: p = _swig_property(_svmc.svm_parameter_p_get, _svmc.svm_parameter_p_set) __swig_setmethods__["shrinking"] = _svmc.svm_parameter_shrinking_set __swig_getmethods__["shrinking"] = _svmc.svm_parameter_shrinking_get if _newclass: shrinking = _swig_property(_svmc.svm_parameter_shrinking_get, _svmc.svm_parameter_shrinking_set) __swig_setmethods__["probability"] = _svmc.svm_parameter_probability_set __swig_getmethods__["probability"] = _svmc.svm_parameter_probability_get if _newclass: probability = _swig_property(_svmc.svm_parameter_probability_get, _svmc.svm_parameter_probability_set) def __init__(self): this = _svmc.new_svm_parameter() try: self.this.append(this) except: self.this = this __swig_destroy__ = _svmc.delete_svm_parameter __del__ = lambda self: None svm_parameter_swigregister = _svmc.svm_parameter_swigregister svm_parameter_swigregister(svm_parameter) class svm_problem(_object): __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, svm_problem, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, svm_problem, name) __repr__ = _swig_repr __swig_setmethods__["l"] = _svmc.svm_problem_l_set __swig_getmethods__["l"] = _svmc.svm_problem_l_get if _newclass: l = _swig_property(_svmc.svm_problem_l_get, _svmc.svm_problem_l_set) __swig_setmethods__["y"] = _svmc.svm_problem_y_set __swig_getmethods__["y"] = _svmc.svm_problem_y_get if _newclass: y = _swig_property(_svmc.svm_problem_y_get, 
_svmc.svm_problem_y_set) __swig_setmethods__["x"] = _svmc.svm_problem_x_set __swig_getmethods__["x"] = _svmc.svm_problem_x_get if _newclass: x = _swig_property(_svmc.svm_problem_x_get, _svmc.svm_problem_x_set) def __init__(self): this = _svmc.new_svm_problem() try: self.this.append(this) except: self.this = this __swig_destroy__ = _svmc.delete_svm_problem __del__ = lambda self: None svm_problem_swigregister = _svmc.svm_problem_swigregister svm_problem_swigregister(svm_problem) class svm_model(_object): __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, svm_model, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, svm_model, name) __repr__ = _swig_repr __swig_setmethods__["param"] = _svmc.svm_model_param_set __swig_getmethods__["param"] = _svmc.svm_model_param_get if _newclass: param = _swig_property(_svmc.svm_model_param_get, _svmc.svm_model_param_set) __swig_setmethods__["nr_class"] = _svmc.svm_model_nr_class_set __swig_getmethods__["nr_class"] = _svmc.svm_model_nr_class_get if _newclass: nr_class = _swig_property(_svmc.svm_model_nr_class_get, _svmc.svm_model_nr_class_set) __swig_setmethods__["l"] = _svmc.svm_model_l_set __swig_getmethods__["l"] = _svmc.svm_model_l_get if _newclass: l = _swig_property(_svmc.svm_model_l_get, _svmc.svm_model_l_set) __swig_setmethods__["SV"] = _svmc.svm_model_SV_set __swig_getmethods__["SV"] = _svmc.svm_model_SV_get if _newclass: SV = _swig_property(_svmc.svm_model_SV_get, _svmc.svm_model_SV_set) __swig_setmethods__["sv_coef"] = _svmc.svm_model_sv_coef_set __swig_getmethods__["sv_coef"] = _svmc.svm_model_sv_coef_get if _newclass: sv_coef = _swig_property(_svmc.svm_model_sv_coef_get, _svmc.svm_model_sv_coef_set) __swig_setmethods__["rho"] = _svmc.svm_model_rho_set __swig_getmethods__["rho"] = _svmc.svm_model_rho_get if _newclass: rho = _swig_property(_svmc.svm_model_rho_get, _svmc.svm_model_rho_set) __swig_setmethods__["probA"] = _svmc.svm_model_probA_set __swig_getmethods__["probA"] = _svmc.svm_model_probA_get if _newclass: probA = _swig_property(_svmc.svm_model_probA_get, _svmc.svm_model_probA_set) __swig_setmethods__["probB"] = _svmc.svm_model_probB_set __swig_getmethods__["probB"] = _svmc.svm_model_probB_get if _newclass: probB = _swig_property(_svmc.svm_model_probB_get, _svmc.svm_model_probB_set) __swig_setmethods__["label"] = _svmc.svm_model_label_set __swig_getmethods__["label"] = _svmc.svm_model_label_get if _newclass: label = _swig_property(_svmc.svm_model_label_get, _svmc.svm_model_label_set) __swig_setmethods__["nSV"] = _svmc.svm_model_nSV_set __swig_getmethods__["nSV"] = _svmc.svm_model_nSV_get if _newclass: nSV = _swig_property(_svmc.svm_model_nSV_get, _svmc.svm_model_nSV_set) __swig_setmethods__["free_sv"] = _svmc.svm_model_free_sv_set __swig_getmethods__["free_sv"] = _svmc.svm_model_free_sv_get if _newclass: free_sv = _swig_property(_svmc.svm_model_free_sv_get, _svmc.svm_model_free_sv_set) def __init__(self): this = _svmc.new_svm_model() try: self.this.append(this) except: self.this = this __swig_destroy__ = _svmc.delete_svm_model __del__ = lambda self: None svm_model_swigregister = _svmc.svm_model_swigregister svm_model_swigregister(svm_model) def svm_set_verbosity(verbosity_flag): return _svmc.svm_set_verbosity(verbosity_flag) svm_set_verbosity = _svmc.svm_set_verbosity def svm_train(prob, param): return _svmc.svm_train(prob, param) svm_train = _svmc.svm_train def svm_cross_validation(prob, param, nr_fold, target): return _svmc.svm_cross_validation(prob, param, nr_fold, 
target) svm_cross_validation = _svmc.svm_cross_validation def svm_save_model(model_file_name, model): return _svmc.svm_save_model(model_file_name, model) svm_save_model = _svmc.svm_save_model def svm_load_model(model_file_name): return _svmc.svm_load_model(model_file_name) svm_load_model = _svmc.svm_load_model def svm_get_svm_type(model): return _svmc.svm_get_svm_type(model) svm_get_svm_type = _svmc.svm_get_svm_type def svm_get_nr_class(model): return _svmc.svm_get_nr_class(model) svm_get_nr_class = _svmc.svm_get_nr_class def svm_get_labels(model, label): return _svmc.svm_get_labels(model, label) svm_get_labels = _svmc.svm_get_labels def svm_get_svr_probability(model): return _svmc.svm_get_svr_probability(model) svm_get_svr_probability = _svmc.svm_get_svr_probability def svm_predict_values(model, x, decvalue): return _svmc.svm_predict_values(model, x, decvalue) svm_predict_values = _svmc.svm_predict_values def svm_predict(model, x): return _svmc.svm_predict(model, x) svm_predict = _svmc.svm_predict def svm_predict_probability(model, x, prob_estimates): return _svmc.svm_predict_probability(model, x, prob_estimates) svm_predict_probability = _svmc.svm_predict_probability def svm_check_parameter(prob, param): return _svmc.svm_check_parameter(prob, param) svm_check_parameter = _svmc.svm_check_parameter def svm_check_probability_model(model): return _svmc.svm_check_probability_model(model) svm_check_probability_model = _svmc.svm_check_probability_model def svm_node_matrix2numpy_array(matrix, rows, cols): return _svmc.svm_node_matrix2numpy_array(matrix, rows, cols) svm_node_matrix2numpy_array = _svmc.svm_node_matrix2numpy_array def doubleppcarray2numpy_array(data, rows, cols): return _svmc.doubleppcarray2numpy_array(data, rows, cols) doubleppcarray2numpy_array = _svmc.doubleppcarray2numpy_array def new_int(nelements): return _svmc.new_int(nelements) new_int = _svmc.new_int def delete_int(ary): return _svmc.delete_int(ary) delete_int = _svmc.delete_int def int_getitem(ary, index): return _svmc.int_getitem(ary, index) int_getitem = _svmc.int_getitem def int_setitem(ary, index, value): return _svmc.int_setitem(ary, index, value) int_setitem = _svmc.int_setitem def new_double(nelements): return _svmc.new_double(nelements) new_double = _svmc.new_double def delete_double(ary): return _svmc.delete_double(ary) delete_double = _svmc.delete_double def double_getitem(ary, index): return _svmc.double_getitem(ary, index) double_getitem = _svmc.double_getitem def double_setitem(ary, index, value): return _svmc.double_setitem(ary, index, value) double_setitem = _svmc.double_setitem def svm_node_array(size): return _svmc.svm_node_array(size) svm_node_array = _svmc.svm_node_array def svm_node_array_set(*args): return _svmc.svm_node_array_set(*args) svm_node_array_set = _svmc.svm_node_array_set def svm_node_array_destroy(array): return _svmc.svm_node_array_destroy(array) svm_node_array_destroy = _svmc.svm_node_array_destroy def svm_node_matrix(size): return _svmc.svm_node_matrix(size) svm_node_matrix = _svmc.svm_node_matrix def svm_node_matrix_set(matrix, i, array): return _svmc.svm_node_matrix_set(matrix, i, array) svm_node_matrix_set = _svmc.svm_node_matrix_set def svm_node_matrix_destroy(matrix): return _svmc.svm_node_matrix_destroy(matrix) svm_node_matrix_destroy = _svmc.svm_node_matrix_destroy def svm_destroy_model_helper(model_ptr): return _svmc.svm_destroy_model_helper(model_ptr) svm_destroy_model_helper = _svmc.svm_destroy_model_helper # This file is compatible with both classic and new-style classes. 
################ # *Working* mvpa2/clfs/libsvmc/svmc.py ################ # This file was automatically generated by SWIG (http://www.swig.org). # Version 3.0.2 # # Do not make changes to this file unless you know what you are doing--modify # the SWIG interface file instead. from sys import version_info if version_info >= (2,6,0): def swig_import_helper(): from os.path import dirname import imp fp = None try: fp, pathname, description = imp.find_module('_svmc', [dirname(__file__)]) except ImportError: import _svmc return _svmc if fp is not None: try: _mod = imp.load_module('_svmc', fp, pathname, description) finally: fp.close() return _mod _svmc = swig_import_helper() del swig_import_helper else: import _svmc del version_info try: _swig_property = property except NameError: pass # Python < 2.2 doesn't have 'property'. def _swig_setattr_nondynamic(self,class_type,name,value,static=1): if (name == "thisown"): return self.this.own(value) if (name == "this"): if type(value).__name__ == 'SwigPyObject': self.__dict__[name] = value return method = class_type.__swig_setmethods__.get(name,None) if method: return method(self,value) if (not static): self.__dict__[name] = value else: raise AttributeError("You cannot add attributes to %s" % self) def _swig_setattr(self,class_type,name,value): return _swig_setattr_nondynamic(self,class_type,name,value,0) def _swig_getattr(self,class_type,name): if (name == "thisown"): return self.this.own() method = class_type.__swig_getmethods__.get(name,None) if method: return method(self) raise AttributeError(name) def _swig_repr(self): try: strthis = "proxy of " + self.this.__repr__() except: strthis = "" return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) try: _object = object _newclass = 1 except AttributeError: class _object : pass _newclass = 0 __version__ = _svmc.__version__ C_SVC = _svmc.C_SVC NU_SVC = _svmc.NU_SVC ONE_CLASS = _svmc.ONE_CLASS EPSILON_SVR = _svmc.EPSILON_SVR NU_SVR = _svmc.NU_SVR LINEAR = _svmc.LINEAR POLY = _svmc.POLY RBF = _svmc.RBF SIGMOID = _svmc.SIGMOID PRECOMPUTED = _svmc.PRECOMPUTED class svm_parameter(_object): __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, svm_parameter, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, svm_parameter, name) __repr__ = _swig_repr __swig_setmethods__["svm_type"] = _svmc.svm_parameter_svm_type_set __swig_getmethods__["svm_type"] = _svmc.svm_parameter_svm_type_get if _newclass:svm_type = _swig_property(_svmc.svm_parameter_svm_type_get, _svmc.svm_parameter_svm_type_set) __swig_setmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_set __swig_getmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_get if _newclass:kernel_type = _swig_property(_svmc.svm_parameter_kernel_type_get, _svmc.svm_parameter_kernel_type_set) __swig_setmethods__["degree"] = _svmc.svm_parameter_degree_set __swig_getmethods__["degree"] = _svmc.svm_parameter_degree_get if _newclass:degree = _swig_property(_svmc.svm_parameter_degree_get, _svmc.svm_parameter_degree_set) __swig_setmethods__["gamma"] = _svmc.svm_parameter_gamma_set __swig_getmethods__["gamma"] = _svmc.svm_parameter_gamma_get if _newclass:gamma = _swig_property(_svmc.svm_parameter_gamma_get, _svmc.svm_parameter_gamma_set) __swig_setmethods__["coef0"] = _svmc.svm_parameter_coef0_set __swig_getmethods__["coef0"] = _svmc.svm_parameter_coef0_get if _newclass:coef0 = _swig_property(_svmc.svm_parameter_coef0_get, _svmc.svm_parameter_coef0_set) 
__swig_setmethods__["cache_size"] = _svmc.svm_parameter_cache_size_set __swig_getmethods__["cache_size"] = _svmc.svm_parameter_cache_size_get if _newclass:cache_size = _swig_property(_svmc.svm_parameter_cache_size_get, _svmc.svm_parameter_cache_size_set) __swig_setmethods__["eps"] = _svmc.svm_parameter_eps_set __swig_getmethods__["eps"] = _svmc.svm_parameter_eps_get if _newclass:eps = _swig_property(_svmc.svm_parameter_eps_get, _svmc.svm_parameter_eps_set) __swig_setmethods__["C"] = _svmc.svm_parameter_C_set __swig_getmethods__["C"] = _svmc.svm_parameter_C_get if _newclass:C = _swig_property(_svmc.svm_parameter_C_get, _svmc.svm_parameter_C_set) __swig_setmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_set __swig_getmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_get if _newclass:nr_weight = _swig_property(_svmc.svm_parameter_nr_weight_get, _svmc.svm_parameter_nr_weight_set) __swig_setmethods__["weight_label"] = _svmc.svm_parameter_weight_label_set __swig_getmethods__["weight_label"] = _svmc.svm_parameter_weight_label_get if _newclass:weight_label = _swig_property(_svmc.svm_parameter_weight_label_get, _svmc.svm_parameter_weight_label_set) __swig_setmethods__["weight"] = _svmc.svm_parameter_weight_set __swig_getmethods__["weight"] = _svmc.svm_parameter_weight_get if _newclass:weight = _swig_property(_svmc.svm_parameter_weight_get, _svmc.svm_parameter_weight_set) __swig_setmethods__["nu"] = _svmc.svm_parameter_nu_set __swig_getmethods__["nu"] = _svmc.svm_parameter_nu_get if _newclass:nu = _swig_property(_svmc.svm_parameter_nu_get, _svmc.svm_parameter_nu_set) __swig_setmethods__["p"] = _svmc.svm_parameter_p_set __swig_getmethods__["p"] = _svmc.svm_parameter_p_get if _newclass:p = _swig_property(_svmc.svm_parameter_p_get, _svmc.svm_parameter_p_set) __swig_setmethods__["shrinking"] = _svmc.svm_parameter_shrinking_set __swig_getmethods__["shrinking"] = _svmc.svm_parameter_shrinking_get if _newclass:shrinking = _swig_property(_svmc.svm_parameter_shrinking_get, _svmc.svm_parameter_shrinking_set) __swig_setmethods__["probability"] = _svmc.svm_parameter_probability_set __swig_getmethods__["probability"] = _svmc.svm_parameter_probability_get if _newclass:probability = _swig_property(_svmc.svm_parameter_probability_get, _svmc.svm_parameter_probability_set) def __init__(self): this = _svmc.new_svm_parameter() try: self.this.append(this) except: self.this = this __swig_destroy__ = _svmc.delete_svm_parameter __del__ = lambda self : None; svm_parameter_swigregister = _svmc.svm_parameter_swigregister svm_parameter_swigregister(svm_parameter) class svm_problem(_object): __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, svm_problem, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, svm_problem, name) __repr__ = _swig_repr __swig_setmethods__["l"] = _svmc.svm_problem_l_set __swig_getmethods__["l"] = _svmc.svm_problem_l_get if _newclass:l = _swig_property(_svmc.svm_problem_l_get, _svmc.svm_problem_l_set) __swig_setmethods__["y"] = _svmc.svm_problem_y_set __swig_getmethods__["y"] = _svmc.svm_problem_y_get if _newclass:y = _swig_property(_svmc.svm_problem_y_get, _svmc.svm_problem_y_set) __swig_setmethods__["x"] = _svmc.svm_problem_x_set __swig_getmethods__["x"] = _svmc.svm_problem_x_get if _newclass:x = _swig_property(_svmc.svm_problem_x_get, _svmc.svm_problem_x_set) def __init__(self): this = _svmc.new_svm_problem() try: self.this.append(this) except: self.this = this __swig_destroy__ = _svmc.delete_svm_problem __del__ = 
lambda self : None; svm_problem_swigregister = _svmc.svm_problem_swigregister svm_problem_swigregister(svm_problem) class svm_model(_object): __swig_setmethods__ = {} __setattr__ = lambda self, name, value: _swig_setattr(self, svm_model, name, value) __swig_getmethods__ = {} __getattr__ = lambda self, name: _swig_getattr(self, svm_model, name) __repr__ = _swig_repr __swig_setmethods__["param"] = _svmc.svm_model_param_set __swig_getmethods__["param"] = _svmc.svm_model_param_get if _newclass:param = _swig_property(_svmc.svm_model_param_get, _svmc.svm_model_param_set) __swig_setmethods__["nr_class"] = _svmc.svm_model_nr_class_set __swig_getmethods__["nr_class"] = _svmc.svm_model_nr_class_get if _newclass:nr_class = _swig_property(_svmc.svm_model_nr_class_get, _svmc.svm_model_nr_class_set) __swig_setmethods__["l"] = _svmc.svm_model_l_set __swig_getmethods__["l"] = _svmc.svm_model_l_get if _newclass:l = _swig_property(_svmc.svm_model_l_get, _svmc.svm_model_l_set) __swig_setmethods__["SV"] = _svmc.svm_model_SV_set __swig_getmethods__["SV"] = _svmc.svm_model_SV_get if _newclass:SV = _swig_property(_svmc.svm_model_SV_get, _svmc.svm_model_SV_set) __swig_setmethods__["sv_coef"] = _svmc.svm_model_sv_coef_set __swig_getmethods__["sv_coef"] = _svmc.svm_model_sv_coef_get if _newclass:sv_coef = _swig_property(_svmc.svm_model_sv_coef_get, _svmc.svm_model_sv_coef_set) __swig_setmethods__["rho"] = _svmc.svm_model_rho_set __swig_getmethods__["rho"] = _svmc.svm_model_rho_get if _newclass:rho = _swig_property(_svmc.svm_model_rho_get, _svmc.svm_model_rho_set) __swig_setmethods__["probA"] = _svmc.svm_model_probA_set __swig_getmethods__["probA"] = _svmc.svm_model_probA_get if _newclass:probA = _swig_property(_svmc.svm_model_probA_get, _svmc.svm_model_probA_set) __swig_setmethods__["probB"] = _svmc.svm_model_probB_set __swig_getmethods__["probB"] = _svmc.svm_model_probB_get if _newclass:probB = _swig_property(_svmc.svm_model_probB_get, _svmc.svm_model_probB_set) __swig_setmethods__["label"] = _svmc.svm_model_label_set __swig_getmethods__["label"] = _svmc.svm_model_label_get if _newclass:label = _swig_property(_svmc.svm_model_label_get, _svmc.svm_model_label_set) __swig_setmethods__["nSV"] = _svmc.svm_model_nSV_set __swig_getmethods__["nSV"] = _svmc.svm_model_nSV_get if _newclass:nSV = _swig_property(_svmc.svm_model_nSV_get, _svmc.svm_model_nSV_set) __swig_setmethods__["free_sv"] = _svmc.svm_model_free_sv_set __swig_getmethods__["free_sv"] = _svmc.svm_model_free_sv_get if _newclass:free_sv = _swig_property(_svmc.svm_model_free_sv_get, _svmc.svm_model_free_sv_set) def __init__(self): this = _svmc.new_svm_model() try: self.this.append(this) except: self.this = this __swig_destroy__ = _svmc.delete_svm_model __del__ = lambda self : None; svm_model_swigregister = _svmc.svm_model_swigregister svm_model_swigregister(svm_model) def svm_set_verbosity(*args): return _svmc.svm_set_verbosity(*args) svm_set_verbosity = _svmc.svm_set_verbosity def svm_train(*args): return _svmc.svm_train(*args) svm_train = _svmc.svm_train def svm_cross_validation(*args): return _svmc.svm_cross_validation(*args) svm_cross_validation = _svmc.svm_cross_validation def svm_save_model(*args): return _svmc.svm_save_model(*args) svm_save_model = _svmc.svm_save_model def svm_load_model(*args): return _svmc.svm_load_model(*args) svm_load_model = _svmc.svm_load_model def svm_get_svm_type(*args): return _svmc.svm_get_svm_type(*args) svm_get_svm_type = _svmc.svm_get_svm_type def svm_get_nr_class(*args): return _svmc.svm_get_nr_class(*args) svm_get_nr_class = 
_svmc.svm_get_nr_class def svm_get_labels(*args): return _svmc.svm_get_labels(*args) svm_get_labels = _svmc.svm_get_labels def svm_get_svr_probability(*args): return _svmc.svm_get_svr_probability(*args) svm_get_svr_probability = _svmc.svm_get_svr_probability def svm_predict_values(*args): return _svmc.svm_predict_values(*args) svm_predict_values = _svmc.svm_predict_values def svm_predict(*args): return _svmc.svm_predict(*args) svm_predict = _svmc.svm_predict def svm_predict_probability(*args): return _svmc.svm_predict_probability(*args) svm_predict_probability = _svmc.svm_predict_probability def svm_check_parameter(*args): return _svmc.svm_check_parameter(*args) svm_check_parameter = _svmc.svm_check_parameter def svm_check_probability_model(*args): return _svmc.svm_check_probability_model(*args) svm_check_probability_model = _svmc.svm_check_probability_model def svm_node_matrix2numpy_array(*args): return _svmc.svm_node_matrix2numpy_array(*args) svm_node_matrix2numpy_array = _svmc.svm_node_matrix2numpy_array def doubleppcarray2numpy_array(*args): return _svmc.doubleppcarray2numpy_array(*args) doubleppcarray2numpy_array = _svmc.doubleppcarray2numpy_array def new_int(*args): return _svmc.new_int(*args) new_int = _svmc.new_int def delete_int(*args): return _svmc.delete_int(*args) delete_int = _svmc.delete_int def int_getitem(*args): return _svmc.int_getitem(*args) int_getitem = _svmc.int_getitem def int_setitem(*args): return _svmc.int_setitem(*args) int_setitem = _svmc.int_setitem def new_double(*args): return _svmc.new_double(*args) new_double = _svmc.new_double def delete_double(*args): return _svmc.delete_double(*args) delete_double = _svmc.delete_double def double_getitem(*args): return _svmc.double_getitem(*args) double_getitem = _svmc.double_getitem def double_setitem(*args): return _svmc.double_setitem(*args) double_setitem = _svmc.double_setitem def svm_node_array(*args): return _svmc.svm_node_array(*args) svm_node_array = _svmc.svm_node_array def svm_node_array_set(*args): return _svmc.svm_node_array_set(*args) svm_node_array_set = _svmc.svm_node_array_set def svm_node_array_destroy(*args): return _svmc.svm_node_array_destroy(*args) svm_node_array_destroy = _svmc.svm_node_array_destroy def svm_node_matrix(*args): return _svmc.svm_node_matrix(*args) svm_node_matrix = _svmc.svm_node_matrix def svm_node_matrix_set(*args): return _svmc.svm_node_matrix_set(*args) svm_node_matrix_set = _svmc.svm_node_matrix_set def svm_node_matrix_destroy(*args): return _svmc.svm_node_matrix_destroy(*args) svm_node_matrix_destroy = _svmc.svm_node_matrix_destroy def svm_destroy_model_helper(*args): return _svmc.svm_destroy_model_helper(*args) svm_destroy_model_helper = _svmc.svm_destroy_model_helper # This file is compatible with both classic and new-style classes. From debian at onerussian.com Mon Jul 20 14:43:16 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Mon, 20 Jul 2015 10:43:16 -0400 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: References: Message-ID: <20150720144316.GO28964@onerussian.com> On Sun, 19 Jul 2015, Nick Oosterhof wrote: > > I had a similar problem while installing PyMVPA on Mac OS (10.10.4). I think the problem is related to this line: > > https://github.com/PyMVPA/PyMVPA/blob/master/mvpa2/clfs/libsvmc/_svm.py#L22 > > When I tried to run this line in ipython > > from mvpa2.clfs.libsvmc._svmc import C_SVC, NU_SVC, ONE_CLASS, EPSILON_SVR > > What I got is: > > ImportError: cannot import name C_SVC > I get the same error. 
Briefly (see below for details), it seems due to a change in SWIG, with later versions giving issues. wow -- thank you Nick for this thorough investigation. -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik

From billbrod at gmail.com Mon Jul 20 16:04:27 2015 From: billbrod at gmail.com (Bill Broderick) Date: Mon, 20 Jul 2015 12:04:27 -0400 Subject: [pymvpa] Labels from permutation testing Message-ID: Hi all, I feel like this should be relatively simple, but I can't figure out how to do it. Is it possible to get at the labels generated by AttributePermutator? I would like to see what the individual permutations look like, to make sure it is doing what I think it is, but other than saving the whole dataset generated by CrossValidation, I can't see a way to do it. I'm trying to build a null distribution like the following, so I can save each permutation and each searchlight separately (given how long the permutation testing has been taking, I want constant output in case something crashes, and I want to be able to monitor its progress, which is why I'm not using MCNullDist):

    for i in searchlights:
        for j in permutations:
            permutator = AttributePermutator('targets', limit={'partitions': 1}, count=1)
            nf = NFoldPartitioner(attr=partition_attr, cvtype=leave_x_out,
                                  count=fold_num, selection_strategy=fold_select_strategy)
            null_cv = CrossValidation(clf, ChainNode([nf, permutator], space=nf.get_space()),
                                      enable_ca='datasets', pass_attr=[('ca.datasets', 'fa')])
            sl_null = sphere_searchlight(null_cv, radius=3, center_ids=[i])
            null_dist.append(sl_null(ds))
    null_dist = hstack(null_dist)

So I would like to be able to check the permuted labels in order to double-check that everything is working as I'd like. Thanks, Bill -------------- next part -------------- An HTML attachment was scrubbed... URL:

From 201421210014 at mail.bnu.edu.cn Tue Jul 21 06:32:27 2015 From: 201421210014 at mail.bnu.edu.cn (=?UTF-8?B?5a2U5Luk5Yab?=) Date: Tue, 21 Jul 2015 14:32:27 +0800 (GMT+08:00) Subject: [pymvpa] =?utf-8?q?Multiple_datasets_in_one_hdf5_file?= Message-ID: Hi: Attached are my experiment data and the attributes file (the label of each time point, TR=2s). Can you help me to make a single file like ''hyperalignment_tutorial_data.hdf5.gz''? I attempted to use dcm2niigui.exe to transform every single subject's 3D data into a single 4D file, and then used h5save to integrate all subjects' data.
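In outline, the per-subject preparation described in the replies below (read each 4D NIFTI with fmri_dataset, set targets, chunks and a subject id, z-score, then save the whole list with h5save) could look something like the following sketch; the subject ids and file names are hypothetical, and SampleAttributes here is assumed to read a plain-text file with one targets/chunks row per volume:

    from mvpa2.suite import fmri_dataset, zscore, h5save
    from mvpa2.misc.io import SampleAttributes

    ds_all = []
    for subj in ['sub01', 'sub02', 'sub03']:                 # hypothetical subject ids
        attr = SampleAttributes(subj + '_attributes.txt')    # targets and chunks, one row per volume
        ds = fmri_dataset(samples=subj + '_bold.nii.gz',
                          targets=attr.targets,
                          chunks=attr.chunks,
                          mask=subj + '_brainmask.nii.gz')
        ds.sa['subject'] = [subj] * ds.nsamples              # subject id, useful for hyperalignment
        zscore(ds, chunks_attr='chunks')                     # z-score per run
        ds_all.append(ds)

    h5save('all_subjects.hdf5', ds_all)                      # one file holding the list of datasets
    # later: ds_all = h5load('all_subjects.hdf5')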
>>> print ds_all[0] , , > But when I input:

    >>> nruns = len(ds_all[0].UC)
    >>> nruns
    1

so I can't do the cross-validation:

    >>> wsc_results = [cv(sd) for sd in ds_all]
    Traceback (most recent call last):
      File "", line 1, in
      File "/usr/lib/pymodules/python2.7/mvpa2/base/learner.py", line 239, in __call__
        return super(Learner, self).__call__(ds)
      File "/usr/lib/pymodules/python2.7/mvpa2/base/node.py", line 84, in __call__
        result = self._call(ds)
      File "/usr/lib/pymodules/python2.7/mvpa2/measures/base.py", line 472, in _call
        return super(CrossValidation, self)._call(ds)
      File "/usr/lib/pymodules/python2.7/mvpa2/measures/base.py", line 301, in _call
        result = node(sds)
      File "/usr/lib/pymodules/python2.7/mvpa2/base/learner.py", line 239, in __call__
        return super(Learner, self).__call__(ds)
      File "/usr/lib/pymodules/python2.7/mvpa2/base/node.py", line 84, in __call__
        result = self._call(ds)
      File "/usr/lib/pymodules/python2.7/mvpa2/measures/base.py", line 559, in _call
        % (ds.sa[splitter.get_space()].unique))
    ValueError: Got empty training dataset from splitting in TransferMeasure. Unique values of input split attribute are: [2])

I would appreciate it if you could help me make the file and show me how to solve these problems. Attachment: data.zip (1785.24M) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: attributes.txt URL:

From n.n.oosterhof at googlemail.com Tue Jul 21 09:56:55 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Tue, 21 Jul 2015 11:56:55 +0200 Subject: [pymvpa] Multiple datasets in one hdf5 file In-Reply-To: References: Message-ID: > On 21 Jul 2015, at 08:32, 孔令军 <201421210014 at mail.bnu.edu.cn> wrote: > > Attached are my experiment data and the attributes file (the label of each time point, TR=2s). > Can you help me to make a single file like ''hyperalignment_tutorial_data.hdf5.gz''? > I attempted to use dcm2niigui.exe to transform every single subject's 3D data into a single 4D file In what format is the data you are trying to use? NIFTI or DICOM? If DICOM, you would have to convert it to a neuroimaging format, preferably NIFTI (dcm2niigui.exe may be able to do that). You can read NIFTI files in PyMVPA using "fmri_dataset". "vstack" can be used to join the volumes from several datasets into a single large dataset. > , and then used h5save to integrate all subjects' data. h5save will not integrate the subjects' data; it will just save the data to a file, so that you can load it from that file later (using h5load). > > >>> print ds_all[0] > , , > > But when I input: > >>> nruns = len(ds_all[0].UC) > >>> nruns > 1 > so I can't do the cross-validation Indeed, if all chunks have the same value, you cannot do cross-validation. With fMRI data, typically the chunks are assigned based on the acquisition run, so that data from run K has the corresponding chunks value set to K. Thus, in order to use cross-validation, you would have to set .sa.chunks appropriately.

From jbaub at bu.edu Wed Jul 22 18:11:09 2015 From: jbaub at bu.edu (John Baublitz) Date: Wed, 22 Jul 2015 14:11:09 -0400 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours Message-ID: Hi all, I have been battling with a surface searchlight that has been taking 6 to 8 hours for a small dataset. It outputs a usable analysis but the time it takes is concerning given that our lab is looking to use even higher resolution fMRI datasets in the future.
I profiled the searchlight call and it looks like approximately 90% of those hours is spent mapping in the function from feature IDs to linear voxel IDs (the function feature_id2linear_voxel_ids). I looked into the source code and it appears that it is using the in keyword on a list which has to search through every element of the list for each iteration of the list comprehension and then calls that function for each feature. This might account for the slowdown. I'm wondering if there is a way to work around this or speed it up. Thanks, John -------------- next part -------------- An HTML attachment was scrubbed... URL: From mafeilong at gmail.com Wed Jul 22 18:45:16 2015 From: mafeilong at gmail.com (Feilong Ma) Date: Wed, 22 Jul 2015 18:45:16 +0000 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release (Nick Oosterhof) In-Reply-To: References: Message-ID: Hi Nick, Switching to an earlier version of SWIG works for me. I had problems with SWIG 3.0.5, but when I switched to SWIG 3.0.4 the problem was solved. I installed SWIG using Homebrew, which should work in the same way as installing from source. The error message I talked about with CLANG still appears while running `python setup.py build_ext`. I guess it's not related to this issue. The message is: #### ['clang', '-fno-strict-aliasing', '-fno-common', '-dynamic', '-g', '-O2', '-DNDEBUG', '-g', '-fwrapv', '-O3', '-Wall', '-Wstrict-prototypes'] ####### Missing compiler_cxx fix for UnixCCompiler Best, Feilong On Sun, Jul 19, 2015 at 9:44 AM < pkg-exppsy-pymvpa-request at lists.alioth.debian.org> wrote: > Send Pkg-ExpPsy-PyMVPA mailing list submissions to > pkg-exppsy-pymvpa at lists.alioth.debian.org > > To subscribe or unsubscribe via the World Wide Web, visit > > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > > or, via email, send a message with subject or body 'help' to > pkg-exppsy-pymvpa-request at lists.alioth.debian.org > > You can reach the person managing the list at > pkg-exppsy-pymvpa-owner at lists.alioth.debian.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Pkg-ExpPsy-PyMVPA digest..." > Today's Topics: > > 1. Re: LinearCSVMC not found in PyMVPA-upstream-2.4.0 release > (Nick Oosterhof) > > > > ---------- Forwarded message ---------- > From: Nick Oosterhof > To: Development and support of PyMVPA < > pkg-exppsy-pymvpa at lists.alioth.debian.org> > Cc: > Date: Sun, 19 Jul 2015 15:44:23 +0200 > Subject: Re: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 > release > > > On 15 Jul 2015, at 18:46, Feilong Ma wrote: > > > > I had a similar problem while installing PyMVPA on Mac OS (10.10.4). I > think the problem is related to this line: > > > https://github.com/PyMVPA/PyMVPA/blob/master/mvpa2/clfs/libsvmc/_svm.py#L22 > > > > When I tried to run this line in ipython > > from mvpa2.clfs.libsvmc._svmc import C_SVC, NU_SVC, ONE_CLASS, > EPSILON_SVR > > What I got is: > > ImportError: cannot import name C_SVC > > I get the same error. Briefly (see below for details), it seems due to a > change in SWIG, with later versions giving issues. > > When running "python setup.py build_ext? and copying over the .o and .so > files from the build directory to PyMVPA?s root directory (across the > corresponding subdirectories), the following reproduces the error directly: > > python -c "from mvpa2.clfs.libsvmc._svmc import C_SVC? 
> > Strangely enough, the following works for the failing PyMVPA installation > (but not for the working one): > > python -c "from mvpa2.clfs.libsvmc._svmc import C_SVC_swigconstant? > > Digging a bit further, the mvpa2/clfs/libsvmc/svmc.py file differs between > my ?working? (generated using SWIG 3.0.2) and ?failing? (SWIG 3.0.6) PyMVPA > setup. One difference is that the working version has contents such as > > C_SVC = _svmc.C_SVC > > whereas the failing version has extra lines that includes ?swigconstant? > > _svmc.C_SVC_swigconstant(_svmc) > C_SVC = _svmc.C_SVC > > (For completeness I?m including the full content of both versions below. ) > > Tracing this back further, I compiled swig from source, both for the > latest version on github and for version 3.0.0 (version 3.0.2 gave an error > when compiling). When using 3.0.0, the import works fine; with 3.0.6 or the > latest (3.0.7 development) it breaks. > > > > > I guess the problem might be related to compiling LibSVM. I vaguely > remember there was some error messages with CLANG blah blah. > > I installed GCC 5.1 and get the same problem as when using CLANG. > > To summarize, the following worked for me to get libsvm to work on OS X > Yosemite: > > - clone swig from https://github.com/swig/swig, then ?git checkout -tag > tags/tags/rel-3.0.0? > - in the swig directory, run ?autoconf && ./configure && make && sudo make > install? (although it gives an error when installing the man-pages due to > missing yodl2man, the binaries are installed fine). This requires autoconf, > automake and libconf. > - in the PyMVPA directory, run "python setup.py build_ext? > - copy the .so and .o files from the build directory to the PyMVPA root > directory, for example in the PyMVPA root directory do "for ext in .so .o; > do for i in `find build -iname "*${ext}"`; do j=`echo $i | cut -f3- -d/`; > cp $i $j; done; done? > > If anyone can confirm that using an earlier version of SWIG fixes the > problem, that would be great. In that case I can also raise the issue with > the developers. > > > > (Below: contents of mvpa2/clfs/libsvmc/svmc.py for working and failing > libsvm in PyMVPA) > > ################ > # *Failing* mvpa2/clfs/libsvmc/svmc.py > ################ > > # This file was automatically generated by SWIG (http://www.swig.org). > # Version 3.0.6 > # > # Do not make changes to this file unless you know what you are > doing--modify > # the SWIG interface file instead. > > > > > > from sys import version_info > if version_info >= (2, 6, 0): > def swig_import_helper(): > from os.path import dirname > import imp > fp = None > try: > fp, pathname, description = imp.find_module('_svmc', > [dirname(__file__)]) > except ImportError: > import _svmc > return _svmc > if fp is not None: > try: > _mod = imp.load_module('_svmc', fp, pathname, description) > finally: > fp.close() > return _mod > _svmc = swig_import_helper() > del swig_import_helper > else: > import _svmc > del version_info > try: > _swig_property = property > except NameError: > pass # Python < 2.2 doesn't have 'property'. 
> > > def _swig_setattr_nondynamic(self, class_type, name, value, static=1): > if (name == "thisown"): > return self.this.own(value) > if (name == "this"): > if type(value).__name__ == 'SwigPyObject': > self.__dict__[name] = value > return > method = class_type.__swig_setmethods__.get(name, None) > if method: > return method(self, value) > if (not static): > if _newclass: > object.__setattr__(self, name, value) > else: > self.__dict__[name] = value > else: > raise AttributeError("You cannot add attributes to %s" % self) > > > def _swig_setattr(self, class_type, name, value): > return _swig_setattr_nondynamic(self, class_type, name, value, 0) > > > def _swig_getattr_nondynamic(self, class_type, name, static=1): > if (name == "thisown"): > return self.this.own() > method = class_type.__swig_getmethods__.get(name, None) > if method: > return method(self) > if (not static): > return object.__getattr__(self, name) > else: > raise AttributeError(name) > > def _swig_getattr(self, class_type, name): > return _swig_getattr_nondynamic(self, class_type, name, 0) > > > def _swig_repr(self): > try: > strthis = "proxy of " + self.this.__repr__() > except: > strthis = "" > return "<%s.%s; %s >" % (self.__class__.__module__, > self.__class__.__name__, strthis,) > > try: > _object = object > _newclass = 1 > except AttributeError: > class _object: > pass > _newclass = 0 > > > > _svmc.__version___swigconstant(_svmc) > __version__ = _svmc.__version__ > > _svmc.C_SVC_swigconstant(_svmc) > C_SVC = _svmc.C_SVC > > _svmc.NU_SVC_swigconstant(_svmc) > NU_SVC = _svmc.NU_SVC > > _svmc.ONE_CLASS_swigconstant(_svmc) > ONE_CLASS = _svmc.ONE_CLASS > > _svmc.EPSILON_SVR_swigconstant(_svmc) > EPSILON_SVR = _svmc.EPSILON_SVR > > _svmc.NU_SVR_swigconstant(_svmc) > NU_SVR = _svmc.NU_SVR > > _svmc.LINEAR_swigconstant(_svmc) > LINEAR = _svmc.LINEAR > > _svmc.POLY_swigconstant(_svmc) > POLY = _svmc.POLY > > _svmc.RBF_swigconstant(_svmc) > RBF = _svmc.RBF > > _svmc.SIGMOID_swigconstant(_svmc) > SIGMOID = _svmc.SIGMOID > > _svmc.PRECOMPUTED_swigconstant(_svmc) > PRECOMPUTED = _svmc.PRECOMPUTED > class svm_parameter(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, > svm_parameter, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_parameter, > name) > __repr__ = _swig_repr > __swig_setmethods__["svm_type"] = _svmc.svm_parameter_svm_type_set > __swig_getmethods__["svm_type"] = _svmc.svm_parameter_svm_type_get > if _newclass: > svm_type = _swig_property(_svmc.svm_parameter_svm_type_get, > _svmc.svm_parameter_svm_type_set) > __swig_setmethods__["kernel_type"] = > _svmc.svm_parameter_kernel_type_set > __swig_getmethods__["kernel_type"] = > _svmc.svm_parameter_kernel_type_get > if _newclass: > kernel_type = _swig_property(_svmc.svm_parameter_kernel_type_get, > _svmc.svm_parameter_kernel_type_set) > __swig_setmethods__["degree"] = _svmc.svm_parameter_degree_set > __swig_getmethods__["degree"] = _svmc.svm_parameter_degree_get > if _newclass: > degree = _swig_property(_svmc.svm_parameter_degree_get, > _svmc.svm_parameter_degree_set) > __swig_setmethods__["gamma"] = _svmc.svm_parameter_gamma_set > __swig_getmethods__["gamma"] = _svmc.svm_parameter_gamma_get > if _newclass: > gamma = _swig_property(_svmc.svm_parameter_gamma_get, > _svmc.svm_parameter_gamma_set) > __swig_setmethods__["coef0"] = _svmc.svm_parameter_coef0_set > __swig_getmethods__["coef0"] = _svmc.svm_parameter_coef0_get > if _newclass: > coef0 = 
_swig_property(_svmc.svm_parameter_coef0_get, > _svmc.svm_parameter_coef0_set) > __swig_setmethods__["cache_size"] = _svmc.svm_parameter_cache_size_set > __swig_getmethods__["cache_size"] = _svmc.svm_parameter_cache_size_get > if _newclass: > cache_size = _swig_property(_svmc.svm_parameter_cache_size_get, > _svmc.svm_parameter_cache_size_set) > __swig_setmethods__["eps"] = _svmc.svm_parameter_eps_set > __swig_getmethods__["eps"] = _svmc.svm_parameter_eps_get > if _newclass: > eps = _swig_property(_svmc.svm_parameter_eps_get, > _svmc.svm_parameter_eps_set) > __swig_setmethods__["C"] = _svmc.svm_parameter_C_set > __swig_getmethods__["C"] = _svmc.svm_parameter_C_get > if _newclass: > C = _swig_property(_svmc.svm_parameter_C_get, > _svmc.svm_parameter_C_set) > __swig_setmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_set > __swig_getmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_get > if _newclass: > nr_weight = _swig_property(_svmc.svm_parameter_nr_weight_get, > _svmc.svm_parameter_nr_weight_set) > __swig_setmethods__["weight_label"] = > _svmc.svm_parameter_weight_label_set > __swig_getmethods__["weight_label"] = > _svmc.svm_parameter_weight_label_get > if _newclass: > weight_label = > _swig_property(_svmc.svm_parameter_weight_label_get, > _svmc.svm_parameter_weight_label_set) > __swig_setmethods__["weight"] = _svmc.svm_parameter_weight_set > __swig_getmethods__["weight"] = _svmc.svm_parameter_weight_get > if _newclass: > weight = _swig_property(_svmc.svm_parameter_weight_get, > _svmc.svm_parameter_weight_set) > __swig_setmethods__["nu"] = _svmc.svm_parameter_nu_set > __swig_getmethods__["nu"] = _svmc.svm_parameter_nu_get > if _newclass: > nu = _swig_property(_svmc.svm_parameter_nu_get, > _svmc.svm_parameter_nu_set) > __swig_setmethods__["p"] = _svmc.svm_parameter_p_set > __swig_getmethods__["p"] = _svmc.svm_parameter_p_get > if _newclass: > p = _swig_property(_svmc.svm_parameter_p_get, > _svmc.svm_parameter_p_set) > __swig_setmethods__["shrinking"] = _svmc.svm_parameter_shrinking_set > __swig_getmethods__["shrinking"] = _svmc.svm_parameter_shrinking_get > if _newclass: > shrinking = _swig_property(_svmc.svm_parameter_shrinking_get, > _svmc.svm_parameter_shrinking_set) > __swig_setmethods__["probability"] = > _svmc.svm_parameter_probability_set > __swig_getmethods__["probability"] = > _svmc.svm_parameter_probability_get > if _newclass: > probability = _swig_property(_svmc.svm_parameter_probability_get, > _svmc.svm_parameter_probability_set) > > def __init__(self): > this = _svmc.new_svm_parameter() > try: > self.this.append(this) > except: > self.this = this > __swig_destroy__ = _svmc.delete_svm_parameter > __del__ = lambda self: None > svm_parameter_swigregister = _svmc.svm_parameter_swigregister > svm_parameter_swigregister(svm_parameter) > > class svm_problem(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, > svm_problem, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_problem, name) > __repr__ = _swig_repr > __swig_setmethods__["l"] = _svmc.svm_problem_l_set > __swig_getmethods__["l"] = _svmc.svm_problem_l_get > if _newclass: > l = _swig_property(_svmc.svm_problem_l_get, > _svmc.svm_problem_l_set) > __swig_setmethods__["y"] = _svmc.svm_problem_y_set > __swig_getmethods__["y"] = _svmc.svm_problem_y_get > if _newclass: > y = _swig_property(_svmc.svm_problem_y_get, > _svmc.svm_problem_y_set) > __swig_setmethods__["x"] = _svmc.svm_problem_x_set > __swig_getmethods__["x"] = 
_svmc.svm_problem_x_get > if _newclass: > x = _swig_property(_svmc.svm_problem_x_get, > _svmc.svm_problem_x_set) > > def __init__(self): > this = _svmc.new_svm_problem() > try: > self.this.append(this) > except: > self.this = this > __swig_destroy__ = _svmc.delete_svm_problem > __del__ = lambda self: None > svm_problem_swigregister = _svmc.svm_problem_swigregister > svm_problem_swigregister(svm_problem) > > class svm_model(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, svm_model, > name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_model, name) > __repr__ = _swig_repr > __swig_setmethods__["param"] = _svmc.svm_model_param_set > __swig_getmethods__["param"] = _svmc.svm_model_param_get > if _newclass: > param = _swig_property(_svmc.svm_model_param_get, > _svmc.svm_model_param_set) > __swig_setmethods__["nr_class"] = _svmc.svm_model_nr_class_set > __swig_getmethods__["nr_class"] = _svmc.svm_model_nr_class_get > if _newclass: > nr_class = _swig_property(_svmc.svm_model_nr_class_get, > _svmc.svm_model_nr_class_set) > __swig_setmethods__["l"] = _svmc.svm_model_l_set > __swig_getmethods__["l"] = _svmc.svm_model_l_get > if _newclass: > l = _swig_property(_svmc.svm_model_l_get, _svmc.svm_model_l_set) > __swig_setmethods__["SV"] = _svmc.svm_model_SV_set > __swig_getmethods__["SV"] = _svmc.svm_model_SV_get > if _newclass: > SV = _swig_property(_svmc.svm_model_SV_get, _svmc.svm_model_SV_set) > __swig_setmethods__["sv_coef"] = _svmc.svm_model_sv_coef_set > __swig_getmethods__["sv_coef"] = _svmc.svm_model_sv_coef_get > if _newclass: > sv_coef = _swig_property(_svmc.svm_model_sv_coef_get, > _svmc.svm_model_sv_coef_set) > __swig_setmethods__["rho"] = _svmc.svm_model_rho_set > __swig_getmethods__["rho"] = _svmc.svm_model_rho_get > if _newclass: > rho = _swig_property(_svmc.svm_model_rho_get, > _svmc.svm_model_rho_set) > __swig_setmethods__["probA"] = _svmc.svm_model_probA_set > __swig_getmethods__["probA"] = _svmc.svm_model_probA_get > if _newclass: > probA = _swig_property(_svmc.svm_model_probA_get, > _svmc.svm_model_probA_set) > __swig_setmethods__["probB"] = _svmc.svm_model_probB_set > __swig_getmethods__["probB"] = _svmc.svm_model_probB_get > if _newclass: > probB = _swig_property(_svmc.svm_model_probB_get, > _svmc.svm_model_probB_set) > __swig_setmethods__["label"] = _svmc.svm_model_label_set > __swig_getmethods__["label"] = _svmc.svm_model_label_get > if _newclass: > label = _swig_property(_svmc.svm_model_label_get, > _svmc.svm_model_label_set) > __swig_setmethods__["nSV"] = _svmc.svm_model_nSV_set > __swig_getmethods__["nSV"] = _svmc.svm_model_nSV_get > if _newclass: > nSV = _swig_property(_svmc.svm_model_nSV_get, > _svmc.svm_model_nSV_set) > __swig_setmethods__["free_sv"] = _svmc.svm_model_free_sv_set > __swig_getmethods__["free_sv"] = _svmc.svm_model_free_sv_get > if _newclass: > free_sv = _swig_property(_svmc.svm_model_free_sv_get, > _svmc.svm_model_free_sv_set) > > def __init__(self): > this = _svmc.new_svm_model() > try: > self.this.append(this) > except: > self.this = this > __swig_destroy__ = _svmc.delete_svm_model > __del__ = lambda self: None > svm_model_swigregister = _svmc.svm_model_swigregister > svm_model_swigregister(svm_model) > > > def svm_set_verbosity(verbosity_flag): > return _svmc.svm_set_verbosity(verbosity_flag) > svm_set_verbosity = _svmc.svm_set_verbosity > > def svm_train(prob, param): > return _svmc.svm_train(prob, param) > svm_train = _svmc.svm_train > > def 
svm_cross_validation(prob, param, nr_fold, target): > return _svmc.svm_cross_validation(prob, param, nr_fold, target) > svm_cross_validation = _svmc.svm_cross_validation > > def svm_save_model(model_file_name, model): > return _svmc.svm_save_model(model_file_name, model) > svm_save_model = _svmc.svm_save_model > > def svm_load_model(model_file_name): > return _svmc.svm_load_model(model_file_name) > svm_load_model = _svmc.svm_load_model > > def svm_get_svm_type(model): > return _svmc.svm_get_svm_type(model) > svm_get_svm_type = _svmc.svm_get_svm_type > > def svm_get_nr_class(model): > return _svmc.svm_get_nr_class(model) > svm_get_nr_class = _svmc.svm_get_nr_class > > def svm_get_labels(model, label): > return _svmc.svm_get_labels(model, label) > svm_get_labels = _svmc.svm_get_labels > > def svm_get_svr_probability(model): > return _svmc.svm_get_svr_probability(model) > svm_get_svr_probability = _svmc.svm_get_svr_probability > > def svm_predict_values(model, x, decvalue): > return _svmc.svm_predict_values(model, x, decvalue) > svm_predict_values = _svmc.svm_predict_values > > def svm_predict(model, x): > return _svmc.svm_predict(model, x) > svm_predict = _svmc.svm_predict > > def svm_predict_probability(model, x, prob_estimates): > return _svmc.svm_predict_probability(model, x, prob_estimates) > svm_predict_probability = _svmc.svm_predict_probability > > def svm_check_parameter(prob, param): > return _svmc.svm_check_parameter(prob, param) > svm_check_parameter = _svmc.svm_check_parameter > > def svm_check_probability_model(model): > return _svmc.svm_check_probability_model(model) > svm_check_probability_model = _svmc.svm_check_probability_model > > def svm_node_matrix2numpy_array(matrix, rows, cols): > return _svmc.svm_node_matrix2numpy_array(matrix, rows, cols) > svm_node_matrix2numpy_array = _svmc.svm_node_matrix2numpy_array > > def doubleppcarray2numpy_array(data, rows, cols): > return _svmc.doubleppcarray2numpy_array(data, rows, cols) > doubleppcarray2numpy_array = _svmc.doubleppcarray2numpy_array > > def new_int(nelements): > return _svmc.new_int(nelements) > new_int = _svmc.new_int > > def delete_int(ary): > return _svmc.delete_int(ary) > delete_int = _svmc.delete_int > > def int_getitem(ary, index): > return _svmc.int_getitem(ary, index) > int_getitem = _svmc.int_getitem > > def int_setitem(ary, index, value): > return _svmc.int_setitem(ary, index, value) > int_setitem = _svmc.int_setitem > > def new_double(nelements): > return _svmc.new_double(nelements) > new_double = _svmc.new_double > > def delete_double(ary): > return _svmc.delete_double(ary) > delete_double = _svmc.delete_double > > def double_getitem(ary, index): > return _svmc.double_getitem(ary, index) > double_getitem = _svmc.double_getitem > > def double_setitem(ary, index, value): > return _svmc.double_setitem(ary, index, value) > double_setitem = _svmc.double_setitem > > def svm_node_array(size): > return _svmc.svm_node_array(size) > svm_node_array = _svmc.svm_node_array > > def svm_node_array_set(*args): > return _svmc.svm_node_array_set(*args) > svm_node_array_set = _svmc.svm_node_array_set > > def svm_node_array_destroy(array): > return _svmc.svm_node_array_destroy(array) > svm_node_array_destroy = _svmc.svm_node_array_destroy > > def svm_node_matrix(size): > return _svmc.svm_node_matrix(size) > svm_node_matrix = _svmc.svm_node_matrix > > def svm_node_matrix_set(matrix, i, array): > return _svmc.svm_node_matrix_set(matrix, i, array) > svm_node_matrix_set = _svmc.svm_node_matrix_set > > def 
svm_node_matrix_destroy(matrix): > return _svmc.svm_node_matrix_destroy(matrix) > svm_node_matrix_destroy = _svmc.svm_node_matrix_destroy > > def svm_destroy_model_helper(model_ptr): > return _svmc.svm_destroy_model_helper(model_ptr) > svm_destroy_model_helper = _svmc.svm_destroy_model_helper > # This file is compatible with both classic and new-style classes. > > > > > > ################ > # *Working* mvpa2/clfs/libsvmc/svmc.py > ################ > > > # This file was automatically generated by SWIG (http://www.swig.org). > # Version 3.0.2 > # > # Do not make changes to this file unless you know what you are > doing--modify > # the SWIG interface file instead. > > > > > > from sys import version_info > if version_info >= (2,6,0): > def swig_import_helper(): > from os.path import dirname > import imp > fp = None > try: > fp, pathname, description = imp.find_module('_svmc', > [dirname(__file__)]) > except ImportError: > import _svmc > return _svmc > if fp is not None: > try: > _mod = imp.load_module('_svmc', fp, pathname, description) > finally: > fp.close() > return _mod > _svmc = swig_import_helper() > del swig_import_helper > else: > import _svmc > del version_info > try: > _swig_property = property > except NameError: > pass # Python < 2.2 doesn't have 'property'. > def _swig_setattr_nondynamic(self,class_type,name,value,static=1): > if (name == "thisown"): return self.this.own(value) > if (name == "this"): > if type(value).__name__ == 'SwigPyObject': > self.__dict__[name] = value > return > method = class_type.__swig_setmethods__.get(name,None) > if method: return method(self,value) > if (not static): > self.__dict__[name] = value > else: > raise AttributeError("You cannot add attributes to %s" % self) > > def _swig_setattr(self,class_type,name,value): > return _swig_setattr_nondynamic(self,class_type,name,value,0) > > def _swig_getattr(self,class_type,name): > if (name == "thisown"): return self.this.own() > method = class_type.__swig_getmethods__.get(name,None) > if method: return method(self) > raise AttributeError(name) > > def _swig_repr(self): > try: strthis = "proxy of " + self.this.__repr__() > except: strthis = "" > return "<%s.%s; %s >" % (self.__class__.__module__, > self.__class__.__name__, strthis,) > > try: > _object = object > _newclass = 1 > except AttributeError: > class _object : pass > _newclass = 0 > > > __version__ = _svmc.__version__ > C_SVC = _svmc.C_SVC > NU_SVC = _svmc.NU_SVC > ONE_CLASS = _svmc.ONE_CLASS > EPSILON_SVR = _svmc.EPSILON_SVR > NU_SVR = _svmc.NU_SVR > LINEAR = _svmc.LINEAR > POLY = _svmc.POLY > RBF = _svmc.RBF > SIGMOID = _svmc.SIGMOID > PRECOMPUTED = _svmc.PRECOMPUTED > class svm_parameter(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, > svm_parameter, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_parameter, > name) > __repr__ = _swig_repr > __swig_setmethods__["svm_type"] = _svmc.svm_parameter_svm_type_set > __swig_getmethods__["svm_type"] = _svmc.svm_parameter_svm_type_get > if _newclass:svm_type = > _swig_property(_svmc.svm_parameter_svm_type_get, > _svmc.svm_parameter_svm_type_set) > __swig_setmethods__["kernel_type"] = > _svmc.svm_parameter_kernel_type_set > __swig_getmethods__["kernel_type"] = > _svmc.svm_parameter_kernel_type_get > if _newclass:kernel_type = > _swig_property(_svmc.svm_parameter_kernel_type_get, > _svmc.svm_parameter_kernel_type_set) > __swig_setmethods__["degree"] = _svmc.svm_parameter_degree_set > 
__swig_getmethods__["degree"] = _svmc.svm_parameter_degree_get > if _newclass:degree = _swig_property(_svmc.svm_parameter_degree_get, > _svmc.svm_parameter_degree_set) > __swig_setmethods__["gamma"] = _svmc.svm_parameter_gamma_set > __swig_getmethods__["gamma"] = _svmc.svm_parameter_gamma_get > if _newclass:gamma = _swig_property(_svmc.svm_parameter_gamma_get, > _svmc.svm_parameter_gamma_set) > __swig_setmethods__["coef0"] = _svmc.svm_parameter_coef0_set > __swig_getmethods__["coef0"] = _svmc.svm_parameter_coef0_get > if _newclass:coef0 = _swig_property(_svmc.svm_parameter_coef0_get, > _svmc.svm_parameter_coef0_set) > __swig_setmethods__["cache_size"] = _svmc.svm_parameter_cache_size_set > __swig_getmethods__["cache_size"] = _svmc.svm_parameter_cache_size_get > if _newclass:cache_size = > _swig_property(_svmc.svm_parameter_cache_size_get, > _svmc.svm_parameter_cache_size_set) > __swig_setmethods__["eps"] = _svmc.svm_parameter_eps_set > __swig_getmethods__["eps"] = _svmc.svm_parameter_eps_get > if _newclass:eps = _swig_property(_svmc.svm_parameter_eps_get, > _svmc.svm_parameter_eps_set) > __swig_setmethods__["C"] = _svmc.svm_parameter_C_set > __swig_getmethods__["C"] = _svmc.svm_parameter_C_get > if _newclass:C = _swig_property(_svmc.svm_parameter_C_get, > _svmc.svm_parameter_C_set) > __swig_setmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_set > __swig_getmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_get > if _newclass:nr_weight = > _swig_property(_svmc.svm_parameter_nr_weight_get, > _svmc.svm_parameter_nr_weight_set) > __swig_setmethods__["weight_label"] = > _svmc.svm_parameter_weight_label_set > __swig_getmethods__["weight_label"] = > _svmc.svm_parameter_weight_label_get > if _newclass:weight_label = > _swig_property(_svmc.svm_parameter_weight_label_get, > _svmc.svm_parameter_weight_label_set) > __swig_setmethods__["weight"] = _svmc.svm_parameter_weight_set > __swig_getmethods__["weight"] = _svmc.svm_parameter_weight_get > if _newclass:weight = _swig_property(_svmc.svm_parameter_weight_get, > _svmc.svm_parameter_weight_set) > __swig_setmethods__["nu"] = _svmc.svm_parameter_nu_set > __swig_getmethods__["nu"] = _svmc.svm_parameter_nu_get > if _newclass:nu = _swig_property(_svmc.svm_parameter_nu_get, > _svmc.svm_parameter_nu_set) > __swig_setmethods__["p"] = _svmc.svm_parameter_p_set > __swig_getmethods__["p"] = _svmc.svm_parameter_p_get > if _newclass:p = _swig_property(_svmc.svm_parameter_p_get, > _svmc.svm_parameter_p_set) > __swig_setmethods__["shrinking"] = _svmc.svm_parameter_shrinking_set > __swig_getmethods__["shrinking"] = _svmc.svm_parameter_shrinking_get > if _newclass:shrinking = > _swig_property(_svmc.svm_parameter_shrinking_get, > _svmc.svm_parameter_shrinking_set) > __swig_setmethods__["probability"] = > _svmc.svm_parameter_probability_set > __swig_getmethods__["probability"] = > _svmc.svm_parameter_probability_get > if _newclass:probability = > _swig_property(_svmc.svm_parameter_probability_get, > _svmc.svm_parameter_probability_set) > def __init__(self): > this = _svmc.new_svm_parameter() > try: self.this.append(this) > except: self.this = this > __swig_destroy__ = _svmc.delete_svm_parameter > __del__ = lambda self : None; > svm_parameter_swigregister = _svmc.svm_parameter_swigregister > svm_parameter_swigregister(svm_parameter) > > class svm_problem(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, > svm_problem, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, 
svm_problem, name) > __repr__ = _swig_repr > __swig_setmethods__["l"] = _svmc.svm_problem_l_set > __swig_getmethods__["l"] = _svmc.svm_problem_l_get > if _newclass:l = _swig_property(_svmc.svm_problem_l_get, > _svmc.svm_problem_l_set) > __swig_setmethods__["y"] = _svmc.svm_problem_y_set > __swig_getmethods__["y"] = _svmc.svm_problem_y_get > if _newclass:y = _swig_property(_svmc.svm_problem_y_get, > _svmc.svm_problem_y_set) > __swig_setmethods__["x"] = _svmc.svm_problem_x_set > __swig_getmethods__["x"] = _svmc.svm_problem_x_get > if _newclass:x = _swig_property(_svmc.svm_problem_x_get, > _svmc.svm_problem_x_set) > def __init__(self): > this = _svmc.new_svm_problem() > try: self.this.append(this) > except: self.this = this > __swig_destroy__ = _svmc.delete_svm_problem > __del__ = lambda self : None; > svm_problem_swigregister = _svmc.svm_problem_swigregister > svm_problem_swigregister(svm_problem) > > class svm_model(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, svm_model, > name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_model, name) > __repr__ = _swig_repr > __swig_setmethods__["param"] = _svmc.svm_model_param_set > __swig_getmethods__["param"] = _svmc.svm_model_param_get > if _newclass:param = _swig_property(_svmc.svm_model_param_get, > _svmc.svm_model_param_set) > __swig_setmethods__["nr_class"] = _svmc.svm_model_nr_class_set > __swig_getmethods__["nr_class"] = _svmc.svm_model_nr_class_get > if _newclass:nr_class = _swig_property(_svmc.svm_model_nr_class_get, > _svmc.svm_model_nr_class_set) > __swig_setmethods__["l"] = _svmc.svm_model_l_set > __swig_getmethods__["l"] = _svmc.svm_model_l_get > if _newclass:l = _swig_property(_svmc.svm_model_l_get, > _svmc.svm_model_l_set) > __swig_setmethods__["SV"] = _svmc.svm_model_SV_set > __swig_getmethods__["SV"] = _svmc.svm_model_SV_get > if _newclass:SV = _swig_property(_svmc.svm_model_SV_get, > _svmc.svm_model_SV_set) > __swig_setmethods__["sv_coef"] = _svmc.svm_model_sv_coef_set > __swig_getmethods__["sv_coef"] = _svmc.svm_model_sv_coef_get > if _newclass:sv_coef = _swig_property(_svmc.svm_model_sv_coef_get, > _svmc.svm_model_sv_coef_set) > __swig_setmethods__["rho"] = _svmc.svm_model_rho_set > __swig_getmethods__["rho"] = _svmc.svm_model_rho_get > if _newclass:rho = _swig_property(_svmc.svm_model_rho_get, > _svmc.svm_model_rho_set) > __swig_setmethods__["probA"] = _svmc.svm_model_probA_set > __swig_getmethods__["probA"] = _svmc.svm_model_probA_get > if _newclass:probA = _swig_property(_svmc.svm_model_probA_get, > _svmc.svm_model_probA_set) > __swig_setmethods__["probB"] = _svmc.svm_model_probB_set > __swig_getmethods__["probB"] = _svmc.svm_model_probB_get > if _newclass:probB = _swig_property(_svmc.svm_model_probB_get, > _svmc.svm_model_probB_set) > __swig_setmethods__["label"] = _svmc.svm_model_label_set > __swig_getmethods__["label"] = _svmc.svm_model_label_get > if _newclass:label = _swig_property(_svmc.svm_model_label_get, > _svmc.svm_model_label_set) > __swig_setmethods__["nSV"] = _svmc.svm_model_nSV_set > __swig_getmethods__["nSV"] = _svmc.svm_model_nSV_get > if _newclass:nSV = _swig_property(_svmc.svm_model_nSV_get, > _svmc.svm_model_nSV_set) > __swig_setmethods__["free_sv"] = _svmc.svm_model_free_sv_set > __swig_getmethods__["free_sv"] = _svmc.svm_model_free_sv_get > if _newclass:free_sv = _swig_property(_svmc.svm_model_free_sv_get, > _svmc.svm_model_free_sv_set) > def __init__(self): > this = _svmc.new_svm_model() > try: 
self.this.append(this) > except: self.this = this > __swig_destroy__ = _svmc.delete_svm_model > __del__ = lambda self : None; > svm_model_swigregister = _svmc.svm_model_swigregister > svm_model_swigregister(svm_model) > > > def svm_set_verbosity(*args): > return _svmc.svm_set_verbosity(*args) > svm_set_verbosity = _svmc.svm_set_verbosity > > def svm_train(*args): > return _svmc.svm_train(*args) > svm_train = _svmc.svm_train > > def svm_cross_validation(*args): > return _svmc.svm_cross_validation(*args) > svm_cross_validation = _svmc.svm_cross_validation > > def svm_save_model(*args): > return _svmc.svm_save_model(*args) > svm_save_model = _svmc.svm_save_model > > def svm_load_model(*args): > return _svmc.svm_load_model(*args) > svm_load_model = _svmc.svm_load_model > > def svm_get_svm_type(*args): > return _svmc.svm_get_svm_type(*args) > svm_get_svm_type = _svmc.svm_get_svm_type > > def svm_get_nr_class(*args): > return _svmc.svm_get_nr_class(*args) > svm_get_nr_class = _svmc.svm_get_nr_class > > def svm_get_labels(*args): > return _svmc.svm_get_labels(*args) > svm_get_labels = _svmc.svm_get_labels > > def svm_get_svr_probability(*args): > return _svmc.svm_get_svr_probability(*args) > svm_get_svr_probability = _svmc.svm_get_svr_probability > > def svm_predict_values(*args): > return _svmc.svm_predict_values(*args) > svm_predict_values = _svmc.svm_predict_values > > def svm_predict(*args): > return _svmc.svm_predict(*args) > svm_predict = _svmc.svm_predict > > def svm_predict_probability(*args): > return _svmc.svm_predict_probability(*args) > svm_predict_probability = _svmc.svm_predict_probability > > def svm_check_parameter(*args): > return _svmc.svm_check_parameter(*args) > svm_check_parameter = _svmc.svm_check_parameter > > def svm_check_probability_model(*args): > return _svmc.svm_check_probability_model(*args) > svm_check_probability_model = _svmc.svm_check_probability_model > > def svm_node_matrix2numpy_array(*args): > return _svmc.svm_node_matrix2numpy_array(*args) > svm_node_matrix2numpy_array = _svmc.svm_node_matrix2numpy_array > > def doubleppcarray2numpy_array(*args): > return _svmc.doubleppcarray2numpy_array(*args) > doubleppcarray2numpy_array = _svmc.doubleppcarray2numpy_array > > def new_int(*args): > return _svmc.new_int(*args) > new_int = _svmc.new_int > > def delete_int(*args): > return _svmc.delete_int(*args) > delete_int = _svmc.delete_int > > def int_getitem(*args): > return _svmc.int_getitem(*args) > int_getitem = _svmc.int_getitem > > def int_setitem(*args): > return _svmc.int_setitem(*args) > int_setitem = _svmc.int_setitem > > def new_double(*args): > return _svmc.new_double(*args) > new_double = _svmc.new_double > > def delete_double(*args): > return _svmc.delete_double(*args) > delete_double = _svmc.delete_double > > def double_getitem(*args): > return _svmc.double_getitem(*args) > double_getitem = _svmc.double_getitem > > def double_setitem(*args): > return _svmc.double_setitem(*args) > double_setitem = _svmc.double_setitem > > def svm_node_array(*args): > return _svmc.svm_node_array(*args) > svm_node_array = _svmc.svm_node_array > > def svm_node_array_set(*args): > return _svmc.svm_node_array_set(*args) > svm_node_array_set = _svmc.svm_node_array_set > > def svm_node_array_destroy(*args): > return _svmc.svm_node_array_destroy(*args) > svm_node_array_destroy = _svmc.svm_node_array_destroy > > def svm_node_matrix(*args): > return _svmc.svm_node_matrix(*args) > svm_node_matrix = _svmc.svm_node_matrix > > def svm_node_matrix_set(*args): > return 
_svmc.svm_node_matrix_set(*args) > svm_node_matrix_set = _svmc.svm_node_matrix_set > > def svm_node_matrix_destroy(*args): > return _svmc.svm_node_matrix_destroy(*args) > svm_node_matrix_destroy = _svmc.svm_node_matrix_destroy > > def svm_destroy_model_helper(*args): > return _svmc.svm_destroy_model_helper(*args) > svm_destroy_model_helper = _svmc.svm_destroy_model_helper > # This file is compatible with both classic and new-style classes. > > > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Thu Jul 23 08:41:00 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 23 Jul 2015 10:41:00 +0200 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release (Nick Oosterhof) In-Reply-To: References: Message-ID: Hi Feilong, > On 22 Jul 2015, at 20:45, Feilong Ma wrote: > > Switching to an earlier version of SWIG works for me. I had problems with SWIG 3.0.5, but when I switched to SWIG 3.0.4 the problem was solved. I installed SWIG using Homebrew, which should work in the same way as installing from source. Thanks a lot for trying this out, confirming that the SWIG version seems to be the issue, and narrowing down when the change in SWIG was introduced that breaks compiling the SVM functionality in PyMVPA. > The error message I talked about with CLANG still appears while running `python setup.py build_ext`. I guess it's not related to this issue. The message is: > #### ['clang', '-fno-strict-aliasing', '-fno-common', '-dynamic', '-g', '-O2', '-DNDEBUG', '-g', '-fwrapv', '-O3', '-Wall', '-Wstrict-prototypes'] ####### > Missing compiler_cxx fix for UnixCCompiler I forgot to mention this in my earlier post, but I got the same error message. So for SWIG, I wonder if this is a mac-related issue, or also applies to Linux installations. I may try and find some time to try this out. best, Nick From n.n.oosterhof at googlemail.com Thu Jul 23 10:45:42 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 23 Jul 2015 12:45:42 +0200 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: References: Message-ID: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> > On 22 Jul 2015, at 20:11, John Baublitz wrote: > > I have been battling with a surface searchlight that has been taking 6 to 8 hours for a small dataset. It outputs a usable analysis but the time it takes is concerning given that our lab is looking to use even higher resolution fMRI datasets in the future. I profiled the searchlight call and it looks like approximately 90% of those hours is spent mapping in the function from feature IDs to linear voxel IDs (the function feature_id2linear_voxel_ids). From mvpa2.misc.surfing.queryengine, you are using the SurfaceVoxelsQueryEngine, not the SurfaceVerticesQueryEngine? Only the former should be using the feature_id2linear_voxel_ids function. (When instantiating a query engine through disc_surface_queryengine, the Vertices variant is the default; the Voxels variant is used then output_modality=?volume?). For the typical surface-based analysis, the output is a surface-based dataset, and the SurfaceVerticesQueryEngine is used for that. When using the SurfaceVoxelsQueryEngine, the output is a volumetric dataset. 
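For reference, a node-based (surface output) setup along these lines might look roughly like the sketch below; the file names are hypothetical and the exact argument list of disc_surface_queryengine should be checked against the surface searchlight example in the PyMVPA documentation:

    from mvpa2.suite import (CrossValidation, LinearCSVMC, NFoldPartitioner,
                             Searchlight, mean_sample)
    from mvpa2.misc.surfing.queryengine import disc_surface_queryengine

    # 10 mm radius measured along the cortical surface; file names hypothetical
    qe = disc_surface_queryengine(10.0,              # radius
                                  'epi.nii.gz',      # volume defining the voxel grid
                                  'lh.white.asc',    # inner (white) surface
                                  'lh.pial.asc')     # outer (pial) surface
    # default: node-based output (SurfaceVerticesQueryEngine);
    # output_modality='volume' would select the slower SurfaceVoxelsQueryEngine

    cv = CrossValidation(LinearCSVMC(), NFoldPartitioner())
    sl = Searchlight(cv, queryengine=qe, postproc=mean_sample())
    res = sl(ds)   # ds: volumetric fMRI dataset with targets and chunks assigned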
> I looked into the source code and it appears that it is using the in keyword on a list, which has to search through every element of the list for each iteration of the list comprehension, and then calls that function for each feature. This might account for the slowdown. I'm wondering if there is a way to work around this or speed it up. When using the SurfaceVoxelsQueryEngine, the Euclidean distance between each node (on the surface) and each voxel (in the volume) is computed. My guess is that this is responsible for the slow-down. This could probably be made faster by dividing the 3D space into blocks, assigning nodes and voxels to each block, and then computing distances between nodes and voxels only within each block and across neighbouring ones. (A somewhat similar approach is taken in mvpa2.support.nibabel.Surface.map_to_high_resolution_surf.) But that would take some time to implement and test. How important is this feature for you? Is there a particular reason why you would want the output to be a volumetric, not surface-based, dataset?

From michael.browning at ndcn.ox.ac.uk Thu Jul 23 12:56:37 2015 From: michael.browning at ndcn.ox.ac.uk (Michael Browning) Date: Thu, 23 Jul 2015 12:56:37 +0000 Subject: [pymvpa] Altering the weights of classes in binary SVM classifier In-Reply-To: References: Message-ID: Hi, I have been using a linear SVM in a between-subjects design in which I am trying to classify patients as responders or non-responders to a particular treatment. The inputs to the classifier are beta images (one per patient) from an fMRI task. The target of the classifier is the response status of the patient (coded as 0 or 1). My sample is not balanced (there happen to have been 22 responders and 13 non-responders) and is not particularly large. I would like, if possible, to use all the data and adjust the classifier to the unbalanced set rather than selecting a subset of the responders. I've seen recommendations for SVMs on unbalanced data suggesting that the weights of the classes can be adjusted to reflect the sample size (essentially, the weight of each class can be set to 1/(total number in class)). I've tried to do this in PyMVPA using the following code:

    wts = [1/numnonresp, 1/numresp]
    wts_labels = [0, 1]
    clf = LinearCSVMC(weight=wts, weight_label=wts_labels)

I then embed the classifier in a cross-validation call which includes a feature selector. The code runs without error, but the performance of the classifier does not alter (at all) regardless of the weights I use (e.g. using weights of [0, 100000000000]) or whatever. I'm concerned that I have not set this up correctly, and that the weights are not being incorporated into the SVM. I'd appreciate any advice about what I am doing wrong, or even whether there is any diagnostic approach I can use to assess whether the SVM is using the weights appropriately. Thanks Mike -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jbaub at bu.edu Thu Jul 23 14:47:41 2015 From: jbaub at bu.edu (John Baublitz) Date: Thu, 23 Jul 2015 10:47:41 -0400 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> Message-ID: Thank you for the quick response. I tried outputting a surface file before using both niml.write() and surf.write() as my lab would prefer to visualize the results on the surface.
I mentioned this in a previous email and was told that I should be using niml.write() and visualize using SUMA. I decided against this because not only would it fail to open with our version of SUMA (I can include the error if that would be helpful) but I have found no evidence that .dset files are compatible with FreeSurfer. My lab has a hard requirement that whatever we are outputting from the analysis must be able to be visualized in FreeSurfer. Is there any way to output a FreeSurfer-compatible surface file using PyMVPA? If not, is there a utility to convert from SUMA surface files to FreeSurfer surface files included in PyMVPA? On Thu, Jul 23, 2015 at 6:45 AM, Nick Oosterhof < n.n.oosterhof at googlemail.com> wrote: > > > On 22 Jul 2015, at 20:11, John Baublitz wrote: > > > > I have been battling with a surface searchlight that has been taking 6 > to 8 hours for a small dataset. It outputs a usable analysis but the time > it takes is concerning given that our lab is looking to use even higher > resolution fMRI datasets in the future. I profiled the searchlight call and > it looks like approximately 90% of those hours is spent mapping in the > function from feature IDs to linear voxel IDs (the function > feature_id2linear_voxel_ids). > > From mvpa2.misc.surfing.queryengine, you are using the > SurfaceVoxelsQueryEngine, not the SurfaceVerticesQueryEngine? Only the > former should be using the feature_id2linear_voxel_ids function. > > (When instantiating a query engine through disc_surface_queryengine, the > Vertices variant is the default; the Voxels variant is used then > output_modality=?volume?). > > For the typical surface-based analysis, the output is a surface-based > dataset, and the SurfaceVerticesQueryEngine is used for that. When using > the SurfaceVoxelsQueryEngine, the output is a volumetric dataset. > > > I looked into the source code and it appears that it is using the in > keyword on a list which has to search through every element of the list for > each iteration of the list comprehension and then calls that function for > each feature. This might account for the slowdown. I'm wondering if there > is a way to work around this or speed it up. > > When using the SurfaceVoxelsQueryEngine, the euclidean distance between > each node (on the surface) and each voxel (in the volume) is computed. My > guess is that this is responsible for the slow-down. This could probably be > made faster by dividing the 3D space into blocks and assigning nodes and > vertices to each block, and then compute distances between nodes and voxels > only within each block and across neighbouring ones. (a somewhat similar > approach is taken in > mvpa2.support.nibabel.Surface.map_to_high_resolution_surf). But that would > take some time to implement and test. How important is this feature for > you? Is there a particular reason why you would want the output to be a > volumetric, not surface-based, dataset? > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From effigies at bu.edu Thu Jul 23 14:46:46 2015 From: effigies at bu.edu (Christopher J Markiewicz) Date: Thu, 23 Jul 2015 10:46:46 -0400 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> Message-ID: <55B0FE56.30101@bu.edu> On 07/23/2015 06:45 AM, Nick Oosterhof wrote: > >> On 22 Jul 2015, at 20:11, John Baublitz wrote: >> >> I have been battling with a surface searchlight that has been taking 6 to 8 hours for a small dataset. It outputs a usable analysis but the time it takes is concerning given that our lab is looking to use even higher resolution fMRI datasets in the future. I profiled the searchlight call and it looks like approximately 90% of those hours is spent mapping in the function from feature IDs to linear voxel IDs (the function feature_id2linear_voxel_ids). > > From mvpa2.misc.surfing.queryengine, you are using the SurfaceVoxelsQueryEngine, not the SurfaceVerticesQueryEngine? Only the former should be using the feature_id2linear_voxel_ids function. > > (When instantiating a query engine through disc_surface_queryengine, the Vertices variant is the default; the Voxels variant is used then output_modality=?volume?). > > For the typical surface-based analysis, the output is a surface-based dataset, and the SurfaceVerticesQueryEngine is used for that. When using the SurfaceVoxelsQueryEngine, the output is a volumetric dataset. > >> I looked into the source code and it appears that it is using the in keyword on a list which has to search through every element of the list for each iteration of the list comprehension and then calls that function for each feature. This might account for the slowdown. I'm wondering if there is a way to work around this or speed it up. > > When using the SurfaceVoxelsQueryEngine, the euclidean distance between each node (on the surface) and each voxel (in the volume) is computed. My guess is that this is responsible for the slow-down. This could probably be made faster by dividing the 3D space into blocks and assigning nodes and vertices to each block, and then compute distances between nodes and voxels only within each block and across neighbouring ones. (a somewhat similar approach is taken in mvpa2.support.nibabel.Surface.map_to_high_resolution_surf). But that would take some time to implement and test. How important is this feature for you? Is there a particular reason why you would want the output to be a volumetric, not surface-based, dataset? > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > Nick, To clarify, are you saying that using SurfaceVerticesQueryEngine runs the classifiers (or other measure) on sets of vertices, not sets of voxels? I'm not familiar enough with AFNI surfaces, but the ratio of vertices to intersecting voxels in FreeSurfer is about 6:1. If a searchlight is a set of vertices, how is the implicit resampling accounted for? Sorry if this is explained in documentation. I have my own FreeSurfer-based implementation that I've been using that uses the surface only to generate sets of voxels, so I haven't been keeping close tabs on how PyMVPA's AFNI-based one works. 
Also, if mapping vertices to voxel IDs is a serious bottleneck, you can have a look at my query engine (https://github.com/effigies/PyMVPA/blob/qnl_surf_searchlight/mvpa2/misc/neighborhood.py#L383). It uses FreeSurfer vertex map volumes (see: mri_surf2vol --vtxvol), where each voxel contains the ID of the vertex nearest its center. Maybe AFNI has something similar? -- Christopher J Markiewicz Ph.D. Candidate, Quantitative Neuroscience Laboratory Boston University -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From n.n.oosterhof at googlemail.com Thu Jul 23 15:38:43 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 23 Jul 2015 17:38:43 +0200 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> Message-ID: <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> > On 23 Jul 2015, at 16:47, John Baublitz wrote: > > Thank you for the quick response. I tried outputting a surface file file before using both niml.write() and surf.write() as my lab would prefer to visualize the results on the surface. I mentioned this in a previous email and was told that I should be using niml.write() and visualize using SUMA. I decided against this because not only would it fail to open with our version of SUMA (I can include the error if that would be helpful) Indeed, that would be helpful. > but I have found no evidence that .dset files are compatible with FreeSurfer. My lab has a hard requirement that whatever we are outputting from the analysis must be able to be visualized in FreeSurfer. Is there any way to output a FreeSurfer-compatible surface file using PyMVPA? Not at the moment, but it would be nice to support GIFTI. Currently surface anatomy can be exported as GIFTI, but there is no support currently for functional files. I?ve added an issue [1], so it may be added in the future. > If not, is there a utility to convert from SUMA surface files to FreeSurfer surface files included in PyMVPA? ConvertDset (included with AFNI) can convert between NIML and GIFTI, and mris_convert (included with Freesufer) can convert between GIFTI and a variety of other file formats used in FreeSurfer. [1] https://github.com/PyMVPA/PyMVPA/issues/347 From jbaub at bu.edu Thu Jul 23 15:43:47 2015 From: jbaub at bu.edu (John Baublitz) Date: Thu, 23 Jul 2015 11:43:47 -0400 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> Message-ID: The error upon executing "suma -spec sl.dset" is: Error SUMA_Read_SpecFile: Your spec file contains uncommented gibberish: wrote: > > > On 23 Jul 2015, at 16:47, John Baublitz wrote: > > > > Thank you for the quick response. I tried outputting a surface file file > before using both niml.write() and surf.write() as my lab would prefer to > visualize the results on the surface. I mentioned this in a previous email > and was told that I should be using niml.write() and visualize using SUMA. > I decided against this because not only would it fail to open with our > version of SUMA (I can include the error if that would be helpful) > > Indeed, that would be helpful. > > > but I have found no evidence that .dset files are compatible with > FreeSurfer. 
My lab has a hard requirement that whatever we are outputting > from the analysis must be able to be visualized in FreeSurfer. Is there any > way to output a FreeSurfer-compatible surface file using PyMVPA? > > Not at the moment, but it would be nice to support GIFTI. Currently > surface anatomy can be exported as GIFTI, but there is no support currently > for functional files. I?ve added an issue [1], so it may be added in the > future. > > > If not, is there a utility to convert from SUMA surface files to > FreeSurfer surface files included in PyMVPA? > > ConvertDset (included with AFNI) can convert between NIML and GIFTI, and > mris_convert (included with Freesufer) can convert between GIFTI and a > variety of other file formats used in FreeSurfer. > > > [1] https://github.com/PyMVPA/PyMVPA/issues/347 > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Thu Jul 23 15:48:52 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 23 Jul 2015 17:48:52 +0200 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: <55B0FE56.30101@bu.edu> References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <55B0FE56.30101@bu.edu> Message-ID: <536838DD-5554-4556-9AE1-255EC5A00B20@googlemail.com> > On 23 Jul 2015, at 16:46, Christopher J Markiewicz wrote: > > To clarify, are you saying that using SurfaceVerticesQueryEngine runs > the classifiers (or other measure) on sets of vertices, not sets of > voxels? No, the *input* for classification (or other measure) is from voxels (without interpolation); the output (such as classification accuracy) is assigned to nodes. Distances are measured along the cortical surface, meaning that the shape of each searchlight region (in voxel space) resembles that of a curved cylinder with the top and bottom part lying on the pial and white surfaces, and the side connecting those two surfaces. > I'm not familiar enough with AFNI surfaces, but the ratio of > vertices to intersecting voxels in FreeSurfer is about 6:1. If a > searchlight is a set of vertices, how is the implicit resampling > accounted for? As above, there is no resampling of data. All unique voxels contained in the ?curved cylinder? searchlight are used for classification. > > Also, if mapping vertices to voxel IDs is a serious bottleneck, you can > have a look at my query engine > (https://github.com/effigies/PyMVPA/blob/qnl_surf_searchlight/mvpa2/misc/neighborhood.py#L383). > It uses FreeSurfer vertex map volumes (see: mri_surf2vol --vtxvol), > where each voxel contains the ID of the vertex nearest its center. Maybe > AFNI has something similar? Thanks for the reference. It is possible that AFNI has something similar, but in PyMVPA we try to be independent from AFNI is possible (the pymvpa2-prep-afni-surf script is a clear exception). But a similar approach could possible be used to speed up the mapping between voxels and nearest nodes. 
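To make the vertex-map idea concrete, here is a rough sketch of the lookup it enables. This is not PyMVPA's implementation; the file name is a placeholder, and it assumes the volume written by mri_surf2vol --vtxvol marks voxels outside the ribbon with a negative value:

    import numpy as np
    import nibabel as nib

    # each voxel of the vertex-map volume holds the ID of the nearest surface vertex
    vtxvol = nib.load('lh.vtxvol.nii.gz')
    vtx = np.asarray(vtxvol.dataobj).astype(int).ravel()   # linear voxel ID -> vertex ID

    # invert the mapping once, up front
    vertex2voxels = {}
    for lin_id, v in enumerate(vtx):
        if v >= 0:                      # skip voxels that map to no vertex
            vertex2voxels.setdefault(v, []).append(lin_id)

    def vertices_to_linear_voxel_ids(vertex_ids):
        # a searchlight centred on a set of vertices becomes a cheap dict lookup
        voxel_ids = set()
        for v in vertex_ids:
            voxel_ids.update(vertex2voxels.get(v, ()))
        return sorted(voxel_ids)
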
From n.n.oosterhof at googlemail.com Thu Jul 23 15:54:13 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 23 Jul 2015 17:54:13 +0200 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> Message-ID: > On 23 Jul 2015, at 17:43, John Baublitz wrote: > > The error upon executing "suma -spec sl.dset" is: > > Error SUMA_Read_SpecFile: Your spec file contains uncommented gibberish: > Please deal with it. > Error SUMA_Engine: Error in SUMA_Read_SpecFile. You cannot use NIML .dset files as SUMA .spec files. If the anatomical surface file (with node coordinates and face indices) is stored in a file my_surface.asc (some other extensions are supported, including GIFTI), you can view that surface in SUMA using: suma -i my_surface.asc and then, in the SUMA viewer object-controller window (ctrl+s), click ?load set? to select a NIML .dset file. A .spec file defines file names and other properties for a set of anatomical surfaces that can be shown in SUMA. An example of how a .spec file is organised is lh_ico16_al.spec included in the tutorial_data_surf*.gz [1]. [1] http://data.pymvpa.org/datasets/tutorial_data/ From n.n.oosterhof at googlemail.com Tue Jul 28 12:00:22 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Tue, 28 Jul 2015 14:00:22 +0200 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> Message-ID: > On 23 Jul 2015, at 17:38, Nick Oosterhof wrote: > > >> is there a utility to convert from SUMA surface files to FreeSurfer surface files included in PyMVPA? > > ConvertDset (included with AFNI) can convert between NIML and GIFTI, and mris_convert (included with Freesufer) can convert between GIFTI and a variety of other file formats used in FreeSurfer. With the latest code on github [1] there is now basic support for GIFTI datasets in PyMVPA [2]. [1] https://github.com/PyMVPA/PyMVPA [2] https://github.com/PyMVPA/PyMVPA/commit/05ebdda025401148425a7894b3a14ea73b932dfc From jbaub at bu.edu Wed Jul 29 18:57:33 2015 From: jbaub at bu.edu (John Baublitz) Date: Wed, 29 Jul 2015 14:57:33 -0400 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> Message-ID: Thank you very much for the support. Unfortunately I have tried using this GIFTI file that it outputs with FreeSurfer as an overlay and surface and it throws errors for all FreeSurfer utils and even AFNI utils. FreeSurfer mris_convert outputs: mriseadGIFTIfile: mris is NULL! found when parsing file f_mvpa_rh.func.gii This seems to indicate that it is not saving it as a surface file. Likewise AFNI's gifti_tool outputs: ** failed to find coordinate/triangle structs How exactly is the data being stored in the GIFTI file? It seems that it is not saving it as triangles and coordinates even based on the code you linked to in the github commit given that the NIFTI intent codes are neither NIFTI_INTENT_POINTSET nor NIFTI_INTENT_TRIANGLE by default. I've also run into a problem where the dataset that I've loaded has no intent codes and unfortunately it appears that this means that the NIFTI intent code is set to NIFTI_INTENT_NONE. 
Is there any way to work around these problems? On Jul 28, 2015 8:01 AM, "Nick Oosterhof" wrote: > > > On 23 Jul 2015, at 17:38, Nick Oosterhof > wrote: > > > > > >> is there a utility to convert from SUMA surface files to FreeSurfer > surface files included in PyMVPA? > > > > ConvertDset (included with AFNI) can convert between NIML and GIFTI, and > mris_convert (included with Freesufer) can convert between GIFTI and a > variety of other file formats used in FreeSurfer. > > With the latest code on github [1] there is now basic support for GIFTI > datasets in PyMVPA [2]. > > [1] https://github.com/PyMVPA/PyMVPA > [2] > https://github.com/PyMVPA/PyMVPA/commit/05ebdda025401148425a7894b3a14ea73b932dfc > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Thu Jul 30 09:27:27 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 30 Jul 2015 11:27:27 +0200 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> Message-ID: > On 29 Jul 2015, at 20:57, John Baublitz wrote: > > Thank you very much for the support. Unfortunately I have tried using this GIFTI file that it outputs with FreeSurfer as an overlay and surface Both at the same time? > and it throws errors for all FreeSurfer utils and even AFNI utils. FreeSurfer mris_convert outputs: > > mriseadGIFTIfile: mris is NULL! found when parsing file f_mvpa_rh.func.gii > > This seems to indicate that it is not saving it as a surface file. Likewise AFNI's gifti_tool outputs: > > ** failed to find coordinate/triangle structs > > How exactly is the data being stored in the GIFTI file? It seems that it is not saving it as triangles and coordinates even based on the code you linked to in the github commit given that the NIFTI intent codes are neither NIFTI_INTENT_POINTSET nor NIFTI_INTENT_TRIANGLE by default. For your current purposes (visualizing surface-based data), consider there are two types of "surface" GIFTI files: 1) "functional" node-data, where each node is associated with the same number of values. Examples are time series data or statistical maps. Typical extensions are .func.gii or .time.gii. 2) "anatomical" surfaces, that have coordinates in 3D space (with NIFTI_INTENT_POINTSET) and node indices in face information (with NIFTI_INTENT_TRIANGLE). The typical extension is surf.gii. In PyMVPA: (1) "functional" surface data is handled through mvpa2.datasets.gifti. Data is stored in a Dataset instance. (2) "anatomical" surfaces are handled through mvpa2.support.nibabel.surf (for GIFTI, mvpa2.support.nibabel.surf_gifti). Vertex coordinates and face indices are stored in a Surface instance (from mvpa2.support.nibabel.surf) (I'm aware that documentation about this distinction can be improved in PyMVPA). > I've also run into a problem where the dataset that I've loaded has no intent codes and unfortunately it appears that this means that the NIFTI intent code is set to NIFTI_INTENT_NONE. Why is that a problem? What are you trying to achieve? If the dataset has no intent, then NIFTI_INTENT_NONE seems valid to me, as the GIFTI standard describes this as "Data intent not specified". 
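A quick way to tell the two kinds of GIFTI file apart is to inspect the intents of the data arrays with nibabel (sketch; the label names follow nibabel's intent-code table):

    import nibabel as nib
    from nibabel.nifti1 import intent_codes

    img = nib.load('f_mvpa_rh.func.gii')
    print([intent_codes.label[da.intent] for da in img.darrays])
    # an anatomical surf.gii shows 'pointset' and 'triangle'; a functional
    # file like this one typically shows 'none' (or 'time series') and cannot
    # be used where mris_convert or SUMA expect surface geometry
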
From boly.melanie at gmail.com Thu Jul 30 15:15:55 2015 From: boly.melanie at gmail.com (Melanie Boly) Date: Thu, 30 Jul 2015 10:15:55 -0500 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: References: Message-ID: Dear Nick, Thank you so much for your comments; trying to apply your recommendation, but missing one simple step: what is the 'PyMVPA root directory'? Thanks so much, Melanie On Sun, Jul 19, 2015 at 8:44 AM, Nick Oosterhof wrote: > >> On 15 Jul 2015, at 18:46, Feilong Ma wrote: >> >> I had a similar problem while installing PyMVPA on Mac OS (10.10.4). I think the problem is related to this line: >> https://github.com/PyMVPA/PyMVPA/blob/master/mvpa2/clfs/libsvmc/_svm.py#L22 >> >> When I tried to run this line in ipython >> from mvpa2.clfs.libsvmc._svmc import C_SVC, NU_SVC, ONE_CLASS, EPSILON_SVR >> What I got is: >> ImportError: cannot import name C_SVC > > I get the same error. Briefly (see below for details), it seems due to a change in SWIG, with later versions giving issues. > > When running "python setup.py build_ext? and copying over the .o and .so files from the build directory to PyMVPA?s root directory (across the corresponding subdirectories), the following reproduces the error directly: > > python -c "from mvpa2.clfs.libsvmc._svmc import C_SVC? > > Strangely enough, the following works for the failing PyMVPA installation (but not for the working one): > > python -c "from mvpa2.clfs.libsvmc._svmc import C_SVC_swigconstant? > > Digging a bit further, the mvpa2/clfs/libsvmc/svmc.py file differs between my ?working? (generated using SWIG 3.0.2) and ?failing? (SWIG 3.0.6) PyMVPA setup. One difference is that the working version has contents such as > > C_SVC = _svmc.C_SVC > > whereas the failing version has extra lines that includes ?swigconstant? > > _svmc.C_SVC_swigconstant(_svmc) > C_SVC = _svmc.C_SVC > > (For completeness I?m including the full content of both versions below. ) > > Tracing this back further, I compiled swig from source, both for the latest version on github and for version 3.0.0 (version 3.0.2 gave an error when compiling). When using 3.0.0, the import works fine; with 3.0.6 or the latest (3.0.7 development) it breaks. > >> >> I guess the problem might be related to compiling LibSVM. I vaguely remember there was some error messages with CLANG blah blah. > > I installed GCC 5.1 and get the same problem as when using CLANG. > > To summarize, the following worked for me to get libsvm to work on OS X Yosemite: > > - clone swig from https://github.com/swig/swig, then ?git checkout -tag tags/tags/rel-3.0.0? > - in the swig directory, run ?autoconf && ./configure && make && sudo make install? (although it gives an error when installing the man-pages due to missing yodl2man, the binaries are installed fine). This requires autoconf, automake and libconf. > - in the PyMVPA directory, run "python setup.py build_ext? > - copy the .so and .o files from the build directory to the PyMVPA root directory, for example in the PyMVPA root directory do "for ext in .so .o; do for i in `find build -iname "*${ext}"`; do j=`echo $i | cut -f3- -d/`; cp $i $j; done; done? > > If anyone can confirm that using an earlier version of SWIG fixes the problem, that would be great. In that case I can also raise the issue with the developers. 
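A quick check of which SWIG version produced the wrapper that is actually installed, and whether the import works, could look something like this (sketch; it only parses the "# Version ..." comment that SWIG writes at the top of the generated svmc.py):

    import os, re
    import mvpa2

    wrapper = os.path.join(os.path.dirname(mvpa2.__file__), 'clfs', 'libsvmc', 'svmc.py')
    with open(wrapper) as f:
        match = re.search(r'# Version (\S+)', f.read())
    print('svmc.py was generated by SWIG %s' % (match.group(1) if match else 'unknown'))

    try:
        from mvpa2.clfs.libsvmc._svmc import C_SVC
        print('libsvm wrapper imports fine (C_SVC = %d)' % C_SVC)
    except ImportError as e:
        print('libsvm wrapper is broken: %s' % e)
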
> > > > (Below: contents of mvpa2/clfs/libsvmc/svmc.py for working and failing libsvm in PyMVPA) > > ################ > # *Failing* mvpa2/clfs/libsvmc/svmc.py > ################ > > # This file was automatically generated by SWIG (http://www.swig.org). > # Version 3.0.6 > # > # Do not make changes to this file unless you know what you are doing--modify > # the SWIG interface file instead. > > > > > > from sys import version_info > if version_info >= (2, 6, 0): > def swig_import_helper(): > from os.path import dirname > import imp > fp = None > try: > fp, pathname, description = imp.find_module('_svmc', [dirname(__file__)]) > except ImportError: > import _svmc > return _svmc > if fp is not None: > try: > _mod = imp.load_module('_svmc', fp, pathname, description) > finally: > fp.close() > return _mod > _svmc = swig_import_helper() > del swig_import_helper > else: > import _svmc > del version_info > try: > _swig_property = property > except NameError: > pass # Python < 2.2 doesn't have 'property'. > > > def _swig_setattr_nondynamic(self, class_type, name, value, static=1): > if (name == "thisown"): > return self.this.own(value) > if (name == "this"): > if type(value).__name__ == 'SwigPyObject': > self.__dict__[name] = value > return > method = class_type.__swig_setmethods__.get(name, None) > if method: > return method(self, value) > if (not static): > if _newclass: > object.__setattr__(self, name, value) > else: > self.__dict__[name] = value > else: > raise AttributeError("You cannot add attributes to %s" % self) > > > def _swig_setattr(self, class_type, name, value): > return _swig_setattr_nondynamic(self, class_type, name, value, 0) > > > def _swig_getattr_nondynamic(self, class_type, name, static=1): > if (name == "thisown"): > return self.this.own() > method = class_type.__swig_getmethods__.get(name, None) > if method: > return method(self) > if (not static): > return object.__getattr__(self, name) > else: > raise AttributeError(name) > > def _swig_getattr(self, class_type, name): > return _swig_getattr_nondynamic(self, class_type, name, 0) > > > def _swig_repr(self): > try: > strthis = "proxy of " + self.this.__repr__() > except: > strthis = "" > return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) > > try: > _object = object > _newclass = 1 > except AttributeError: > class _object: > pass > _newclass = 0 > > > > _svmc.__version___swigconstant(_svmc) > __version__ = _svmc.__version__ > > _svmc.C_SVC_swigconstant(_svmc) > C_SVC = _svmc.C_SVC > > _svmc.NU_SVC_swigconstant(_svmc) > NU_SVC = _svmc.NU_SVC > > _svmc.ONE_CLASS_swigconstant(_svmc) > ONE_CLASS = _svmc.ONE_CLASS > > _svmc.EPSILON_SVR_swigconstant(_svmc) > EPSILON_SVR = _svmc.EPSILON_SVR > > _svmc.NU_SVR_swigconstant(_svmc) > NU_SVR = _svmc.NU_SVR > > _svmc.LINEAR_swigconstant(_svmc) > LINEAR = _svmc.LINEAR > > _svmc.POLY_swigconstant(_svmc) > POLY = _svmc.POLY > > _svmc.RBF_swigconstant(_svmc) > RBF = _svmc.RBF > > _svmc.SIGMOID_swigconstant(_svmc) > SIGMOID = _svmc.SIGMOID > > _svmc.PRECOMPUTED_swigconstant(_svmc) > PRECOMPUTED = _svmc.PRECOMPUTED > class svm_parameter(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, svm_parameter, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_parameter, name) > __repr__ = _swig_repr > __swig_setmethods__["svm_type"] = _svmc.svm_parameter_svm_type_set > __swig_getmethods__["svm_type"] = _svmc.svm_parameter_svm_type_get > if _newclass: > svm_type = 
_swig_property(_svmc.svm_parameter_svm_type_get, _svmc.svm_parameter_svm_type_set) > __swig_setmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_set > __swig_getmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_get > if _newclass: > kernel_type = _swig_property(_svmc.svm_parameter_kernel_type_get, _svmc.svm_parameter_kernel_type_set) > __swig_setmethods__["degree"] = _svmc.svm_parameter_degree_set > __swig_getmethods__["degree"] = _svmc.svm_parameter_degree_get > if _newclass: > degree = _swig_property(_svmc.svm_parameter_degree_get, _svmc.svm_parameter_degree_set) > __swig_setmethods__["gamma"] = _svmc.svm_parameter_gamma_set > __swig_getmethods__["gamma"] = _svmc.svm_parameter_gamma_get > if _newclass: > gamma = _swig_property(_svmc.svm_parameter_gamma_get, _svmc.svm_parameter_gamma_set) > __swig_setmethods__["coef0"] = _svmc.svm_parameter_coef0_set > __swig_getmethods__["coef0"] = _svmc.svm_parameter_coef0_get > if _newclass: > coef0 = _swig_property(_svmc.svm_parameter_coef0_get, _svmc.svm_parameter_coef0_set) > __swig_setmethods__["cache_size"] = _svmc.svm_parameter_cache_size_set > __swig_getmethods__["cache_size"] = _svmc.svm_parameter_cache_size_get > if _newclass: > cache_size = _swig_property(_svmc.svm_parameter_cache_size_get, _svmc.svm_parameter_cache_size_set) > __swig_setmethods__["eps"] = _svmc.svm_parameter_eps_set > __swig_getmethods__["eps"] = _svmc.svm_parameter_eps_get > if _newclass: > eps = _swig_property(_svmc.svm_parameter_eps_get, _svmc.svm_parameter_eps_set) > __swig_setmethods__["C"] = _svmc.svm_parameter_C_set > __swig_getmethods__["C"] = _svmc.svm_parameter_C_get > if _newclass: > C = _swig_property(_svmc.svm_parameter_C_get, _svmc.svm_parameter_C_set) > __swig_setmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_set > __swig_getmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_get > if _newclass: > nr_weight = _swig_property(_svmc.svm_parameter_nr_weight_get, _svmc.svm_parameter_nr_weight_set) > __swig_setmethods__["weight_label"] = _svmc.svm_parameter_weight_label_set > __swig_getmethods__["weight_label"] = _svmc.svm_parameter_weight_label_get > if _newclass: > weight_label = _swig_property(_svmc.svm_parameter_weight_label_get, _svmc.svm_parameter_weight_label_set) > __swig_setmethods__["weight"] = _svmc.svm_parameter_weight_set > __swig_getmethods__["weight"] = _svmc.svm_parameter_weight_get > if _newclass: > weight = _swig_property(_svmc.svm_parameter_weight_get, _svmc.svm_parameter_weight_set) > __swig_setmethods__["nu"] = _svmc.svm_parameter_nu_set > __swig_getmethods__["nu"] = _svmc.svm_parameter_nu_get > if _newclass: > nu = _swig_property(_svmc.svm_parameter_nu_get, _svmc.svm_parameter_nu_set) > __swig_setmethods__["p"] = _svmc.svm_parameter_p_set > __swig_getmethods__["p"] = _svmc.svm_parameter_p_get > if _newclass: > p = _swig_property(_svmc.svm_parameter_p_get, _svmc.svm_parameter_p_set) > __swig_setmethods__["shrinking"] = _svmc.svm_parameter_shrinking_set > __swig_getmethods__["shrinking"] = _svmc.svm_parameter_shrinking_get > if _newclass: > shrinking = _swig_property(_svmc.svm_parameter_shrinking_get, _svmc.svm_parameter_shrinking_set) > __swig_setmethods__["probability"] = _svmc.svm_parameter_probability_set > __swig_getmethods__["probability"] = _svmc.svm_parameter_probability_get > if _newclass: > probability = _swig_property(_svmc.svm_parameter_probability_get, _svmc.svm_parameter_probability_set) > > def __init__(self): > this = _svmc.new_svm_parameter() > try: > self.this.append(this) > except: > self.this = 
this > __swig_destroy__ = _svmc.delete_svm_parameter > __del__ = lambda self: None > svm_parameter_swigregister = _svmc.svm_parameter_swigregister > svm_parameter_swigregister(svm_parameter) > > class svm_problem(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, svm_problem, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_problem, name) > __repr__ = _swig_repr > __swig_setmethods__["l"] = _svmc.svm_problem_l_set > __swig_getmethods__["l"] = _svmc.svm_problem_l_get > if _newclass: > l = _swig_property(_svmc.svm_problem_l_get, _svmc.svm_problem_l_set) > __swig_setmethods__["y"] = _svmc.svm_problem_y_set > __swig_getmethods__["y"] = _svmc.svm_problem_y_get > if _newclass: > y = _swig_property(_svmc.svm_problem_y_get, _svmc.svm_problem_y_set) > __swig_setmethods__["x"] = _svmc.svm_problem_x_set > __swig_getmethods__["x"] = _svmc.svm_problem_x_get > if _newclass: > x = _swig_property(_svmc.svm_problem_x_get, _svmc.svm_problem_x_set) > > def __init__(self): > this = _svmc.new_svm_problem() > try: > self.this.append(this) > except: > self.this = this > __swig_destroy__ = _svmc.delete_svm_problem > __del__ = lambda self: None > svm_problem_swigregister = _svmc.svm_problem_swigregister > svm_problem_swigregister(svm_problem) > > class svm_model(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, svm_model, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_model, name) > __repr__ = _swig_repr > __swig_setmethods__["param"] = _svmc.svm_model_param_set > __swig_getmethods__["param"] = _svmc.svm_model_param_get > if _newclass: > param = _swig_property(_svmc.svm_model_param_get, _svmc.svm_model_param_set) > __swig_setmethods__["nr_class"] = _svmc.svm_model_nr_class_set > __swig_getmethods__["nr_class"] = _svmc.svm_model_nr_class_get > if _newclass: > nr_class = _swig_property(_svmc.svm_model_nr_class_get, _svmc.svm_model_nr_class_set) > __swig_setmethods__["l"] = _svmc.svm_model_l_set > __swig_getmethods__["l"] = _svmc.svm_model_l_get > if _newclass: > l = _swig_property(_svmc.svm_model_l_get, _svmc.svm_model_l_set) > __swig_setmethods__["SV"] = _svmc.svm_model_SV_set > __swig_getmethods__["SV"] = _svmc.svm_model_SV_get > if _newclass: > SV = _swig_property(_svmc.svm_model_SV_get, _svmc.svm_model_SV_set) > __swig_setmethods__["sv_coef"] = _svmc.svm_model_sv_coef_set > __swig_getmethods__["sv_coef"] = _svmc.svm_model_sv_coef_get > if _newclass: > sv_coef = _swig_property(_svmc.svm_model_sv_coef_get, _svmc.svm_model_sv_coef_set) > __swig_setmethods__["rho"] = _svmc.svm_model_rho_set > __swig_getmethods__["rho"] = _svmc.svm_model_rho_get > if _newclass: > rho = _swig_property(_svmc.svm_model_rho_get, _svmc.svm_model_rho_set) > __swig_setmethods__["probA"] = _svmc.svm_model_probA_set > __swig_getmethods__["probA"] = _svmc.svm_model_probA_get > if _newclass: > probA = _swig_property(_svmc.svm_model_probA_get, _svmc.svm_model_probA_set) > __swig_setmethods__["probB"] = _svmc.svm_model_probB_set > __swig_getmethods__["probB"] = _svmc.svm_model_probB_get > if _newclass: > probB = _swig_property(_svmc.svm_model_probB_get, _svmc.svm_model_probB_set) > __swig_setmethods__["label"] = _svmc.svm_model_label_set > __swig_getmethods__["label"] = _svmc.svm_model_label_get > if _newclass: > label = _swig_property(_svmc.svm_model_label_get, _svmc.svm_model_label_set) > __swig_setmethods__["nSV"] = _svmc.svm_model_nSV_set > 
__swig_getmethods__["nSV"] = _svmc.svm_model_nSV_get > if _newclass: > nSV = _swig_property(_svmc.svm_model_nSV_get, _svmc.svm_model_nSV_set) > __swig_setmethods__["free_sv"] = _svmc.svm_model_free_sv_set > __swig_getmethods__["free_sv"] = _svmc.svm_model_free_sv_get > if _newclass: > free_sv = _swig_property(_svmc.svm_model_free_sv_get, _svmc.svm_model_free_sv_set) > > def __init__(self): > this = _svmc.new_svm_model() > try: > self.this.append(this) > except: > self.this = this > __swig_destroy__ = _svmc.delete_svm_model > __del__ = lambda self: None > svm_model_swigregister = _svmc.svm_model_swigregister > svm_model_swigregister(svm_model) > > > def svm_set_verbosity(verbosity_flag): > return _svmc.svm_set_verbosity(verbosity_flag) > svm_set_verbosity = _svmc.svm_set_verbosity > > def svm_train(prob, param): > return _svmc.svm_train(prob, param) > svm_train = _svmc.svm_train > > def svm_cross_validation(prob, param, nr_fold, target): > return _svmc.svm_cross_validation(prob, param, nr_fold, target) > svm_cross_validation = _svmc.svm_cross_validation > > def svm_save_model(model_file_name, model): > return _svmc.svm_save_model(model_file_name, model) > svm_save_model = _svmc.svm_save_model > > def svm_load_model(model_file_name): > return _svmc.svm_load_model(model_file_name) > svm_load_model = _svmc.svm_load_model > > def svm_get_svm_type(model): > return _svmc.svm_get_svm_type(model) > svm_get_svm_type = _svmc.svm_get_svm_type > > def svm_get_nr_class(model): > return _svmc.svm_get_nr_class(model) > svm_get_nr_class = _svmc.svm_get_nr_class > > def svm_get_labels(model, label): > return _svmc.svm_get_labels(model, label) > svm_get_labels = _svmc.svm_get_labels > > def svm_get_svr_probability(model): > return _svmc.svm_get_svr_probability(model) > svm_get_svr_probability = _svmc.svm_get_svr_probability > > def svm_predict_values(model, x, decvalue): > return _svmc.svm_predict_values(model, x, decvalue) > svm_predict_values = _svmc.svm_predict_values > > def svm_predict(model, x): > return _svmc.svm_predict(model, x) > svm_predict = _svmc.svm_predict > > def svm_predict_probability(model, x, prob_estimates): > return _svmc.svm_predict_probability(model, x, prob_estimates) > svm_predict_probability = _svmc.svm_predict_probability > > def svm_check_parameter(prob, param): > return _svmc.svm_check_parameter(prob, param) > svm_check_parameter = _svmc.svm_check_parameter > > def svm_check_probability_model(model): > return _svmc.svm_check_probability_model(model) > svm_check_probability_model = _svmc.svm_check_probability_model > > def svm_node_matrix2numpy_array(matrix, rows, cols): > return _svmc.svm_node_matrix2numpy_array(matrix, rows, cols) > svm_node_matrix2numpy_array = _svmc.svm_node_matrix2numpy_array > > def doubleppcarray2numpy_array(data, rows, cols): > return _svmc.doubleppcarray2numpy_array(data, rows, cols) > doubleppcarray2numpy_array = _svmc.doubleppcarray2numpy_array > > def new_int(nelements): > return _svmc.new_int(nelements) > new_int = _svmc.new_int > > def delete_int(ary): > return _svmc.delete_int(ary) > delete_int = _svmc.delete_int > > def int_getitem(ary, index): > return _svmc.int_getitem(ary, index) > int_getitem = _svmc.int_getitem > > def int_setitem(ary, index, value): > return _svmc.int_setitem(ary, index, value) > int_setitem = _svmc.int_setitem > > def new_double(nelements): > return _svmc.new_double(nelements) > new_double = _svmc.new_double > > def delete_double(ary): > return _svmc.delete_double(ary) > delete_double = _svmc.delete_double > > def 
double_getitem(ary, index): > return _svmc.double_getitem(ary, index) > double_getitem = _svmc.double_getitem > > def double_setitem(ary, index, value): > return _svmc.double_setitem(ary, index, value) > double_setitem = _svmc.double_setitem > > def svm_node_array(size): > return _svmc.svm_node_array(size) > svm_node_array = _svmc.svm_node_array > > def svm_node_array_set(*args): > return _svmc.svm_node_array_set(*args) > svm_node_array_set = _svmc.svm_node_array_set > > def svm_node_array_destroy(array): > return _svmc.svm_node_array_destroy(array) > svm_node_array_destroy = _svmc.svm_node_array_destroy > > def svm_node_matrix(size): > return _svmc.svm_node_matrix(size) > svm_node_matrix = _svmc.svm_node_matrix > > def svm_node_matrix_set(matrix, i, array): > return _svmc.svm_node_matrix_set(matrix, i, array) > svm_node_matrix_set = _svmc.svm_node_matrix_set > > def svm_node_matrix_destroy(matrix): > return _svmc.svm_node_matrix_destroy(matrix) > svm_node_matrix_destroy = _svmc.svm_node_matrix_destroy > > def svm_destroy_model_helper(model_ptr): > return _svmc.svm_destroy_model_helper(model_ptr) > svm_destroy_model_helper = _svmc.svm_destroy_model_helper > # This file is compatible with both classic and new-style classes. > > > > > > ################ > # *Working* mvpa2/clfs/libsvmc/svmc.py > ################ > > > # This file was automatically generated by SWIG (http://www.swig.org). > # Version 3.0.2 > # > # Do not make changes to this file unless you know what you are doing--modify > # the SWIG interface file instead. > > > > > > from sys import version_info > if version_info >= (2,6,0): > def swig_import_helper(): > from os.path import dirname > import imp > fp = None > try: > fp, pathname, description = imp.find_module('_svmc', [dirname(__file__)]) > except ImportError: > import _svmc > return _svmc > if fp is not None: > try: > _mod = imp.load_module('_svmc', fp, pathname, description) > finally: > fp.close() > return _mod > _svmc = swig_import_helper() > del swig_import_helper > else: > import _svmc > del version_info > try: > _swig_property = property > except NameError: > pass # Python < 2.2 doesn't have 'property'. 
> def _swig_setattr_nondynamic(self,class_type,name,value,static=1): > if (name == "thisown"): return self.this.own(value) > if (name == "this"): > if type(value).__name__ == 'SwigPyObject': > self.__dict__[name] = value > return > method = class_type.__swig_setmethods__.get(name,None) > if method: return method(self,value) > if (not static): > self.__dict__[name] = value > else: > raise AttributeError("You cannot add attributes to %s" % self) > > def _swig_setattr(self,class_type,name,value): > return _swig_setattr_nondynamic(self,class_type,name,value,0) > > def _swig_getattr(self,class_type,name): > if (name == "thisown"): return self.this.own() > method = class_type.__swig_getmethods__.get(name,None) > if method: return method(self) > raise AttributeError(name) > > def _swig_repr(self): > try: strthis = "proxy of " + self.this.__repr__() > except: strthis = "" > return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) > > try: > _object = object > _newclass = 1 > except AttributeError: > class _object : pass > _newclass = 0 > > > __version__ = _svmc.__version__ > C_SVC = _svmc.C_SVC > NU_SVC = _svmc.NU_SVC > ONE_CLASS = _svmc.ONE_CLASS > EPSILON_SVR = _svmc.EPSILON_SVR > NU_SVR = _svmc.NU_SVR > LINEAR = _svmc.LINEAR > POLY = _svmc.POLY > RBF = _svmc.RBF > SIGMOID = _svmc.SIGMOID > PRECOMPUTED = _svmc.PRECOMPUTED > class svm_parameter(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, svm_parameter, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_parameter, name) > __repr__ = _swig_repr > __swig_setmethods__["svm_type"] = _svmc.svm_parameter_svm_type_set > __swig_getmethods__["svm_type"] = _svmc.svm_parameter_svm_type_get > if _newclass:svm_type = _swig_property(_svmc.svm_parameter_svm_type_get, _svmc.svm_parameter_svm_type_set) > __swig_setmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_set > __swig_getmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_get > if _newclass:kernel_type = _swig_property(_svmc.svm_parameter_kernel_type_get, _svmc.svm_parameter_kernel_type_set) > __swig_setmethods__["degree"] = _svmc.svm_parameter_degree_set > __swig_getmethods__["degree"] = _svmc.svm_parameter_degree_get > if _newclass:degree = _swig_property(_svmc.svm_parameter_degree_get, _svmc.svm_parameter_degree_set) > __swig_setmethods__["gamma"] = _svmc.svm_parameter_gamma_set > __swig_getmethods__["gamma"] = _svmc.svm_parameter_gamma_get > if _newclass:gamma = _swig_property(_svmc.svm_parameter_gamma_get, _svmc.svm_parameter_gamma_set) > __swig_setmethods__["coef0"] = _svmc.svm_parameter_coef0_set > __swig_getmethods__["coef0"] = _svmc.svm_parameter_coef0_get > if _newclass:coef0 = _swig_property(_svmc.svm_parameter_coef0_get, _svmc.svm_parameter_coef0_set) > __swig_setmethods__["cache_size"] = _svmc.svm_parameter_cache_size_set > __swig_getmethods__["cache_size"] = _svmc.svm_parameter_cache_size_get > if _newclass:cache_size = _swig_property(_svmc.svm_parameter_cache_size_get, _svmc.svm_parameter_cache_size_set) > __swig_setmethods__["eps"] = _svmc.svm_parameter_eps_set > __swig_getmethods__["eps"] = _svmc.svm_parameter_eps_get > if _newclass:eps = _swig_property(_svmc.svm_parameter_eps_get, _svmc.svm_parameter_eps_set) > __swig_setmethods__["C"] = _svmc.svm_parameter_C_set > __swig_getmethods__["C"] = _svmc.svm_parameter_C_get > if _newclass:C = _swig_property(_svmc.svm_parameter_C_get, _svmc.svm_parameter_C_set) > __swig_setmethods__["nr_weight"] = 
_svmc.svm_parameter_nr_weight_set > __swig_getmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_get > if _newclass:nr_weight = _swig_property(_svmc.svm_parameter_nr_weight_get, _svmc.svm_parameter_nr_weight_set) > __swig_setmethods__["weight_label"] = _svmc.svm_parameter_weight_label_set > __swig_getmethods__["weight_label"] = _svmc.svm_parameter_weight_label_get > if _newclass:weight_label = _swig_property(_svmc.svm_parameter_weight_label_get, _svmc.svm_parameter_weight_label_set) > __swig_setmethods__["weight"] = _svmc.svm_parameter_weight_set > __swig_getmethods__["weight"] = _svmc.svm_parameter_weight_get > if _newclass:weight = _swig_property(_svmc.svm_parameter_weight_get, _svmc.svm_parameter_weight_set) > __swig_setmethods__["nu"] = _svmc.svm_parameter_nu_set > __swig_getmethods__["nu"] = _svmc.svm_parameter_nu_get > if _newclass:nu = _swig_property(_svmc.svm_parameter_nu_get, _svmc.svm_parameter_nu_set) > __swig_setmethods__["p"] = _svmc.svm_parameter_p_set > __swig_getmethods__["p"] = _svmc.svm_parameter_p_get > if _newclass:p = _swig_property(_svmc.svm_parameter_p_get, _svmc.svm_parameter_p_set) > __swig_setmethods__["shrinking"] = _svmc.svm_parameter_shrinking_set > __swig_getmethods__["shrinking"] = _svmc.svm_parameter_shrinking_get > if _newclass:shrinking = _swig_property(_svmc.svm_parameter_shrinking_get, _svmc.svm_parameter_shrinking_set) > __swig_setmethods__["probability"] = _svmc.svm_parameter_probability_set > __swig_getmethods__["probability"] = _svmc.svm_parameter_probability_get > if _newclass:probability = _swig_property(_svmc.svm_parameter_probability_get, _svmc.svm_parameter_probability_set) > def __init__(self): > this = _svmc.new_svm_parameter() > try: self.this.append(this) > except: self.this = this > __swig_destroy__ = _svmc.delete_svm_parameter > __del__ = lambda self : None; > svm_parameter_swigregister = _svmc.svm_parameter_swigregister > svm_parameter_swigregister(svm_parameter) > > class svm_problem(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, svm_problem, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_problem, name) > __repr__ = _swig_repr > __swig_setmethods__["l"] = _svmc.svm_problem_l_set > __swig_getmethods__["l"] = _svmc.svm_problem_l_get > if _newclass:l = _swig_property(_svmc.svm_problem_l_get, _svmc.svm_problem_l_set) > __swig_setmethods__["y"] = _svmc.svm_problem_y_set > __swig_getmethods__["y"] = _svmc.svm_problem_y_get > if _newclass:y = _swig_property(_svmc.svm_problem_y_get, _svmc.svm_problem_y_set) > __swig_setmethods__["x"] = _svmc.svm_problem_x_set > __swig_getmethods__["x"] = _svmc.svm_problem_x_get > if _newclass:x = _swig_property(_svmc.svm_problem_x_get, _svmc.svm_problem_x_set) > def __init__(self): > this = _svmc.new_svm_problem() > try: self.this.append(this) > except: self.this = this > __swig_destroy__ = _svmc.delete_svm_problem > __del__ = lambda self : None; > svm_problem_swigregister = _svmc.svm_problem_swigregister > svm_problem_swigregister(svm_problem) > > class svm_model(_object): > __swig_setmethods__ = {} > __setattr__ = lambda self, name, value: _swig_setattr(self, svm_model, name, value) > __swig_getmethods__ = {} > __getattr__ = lambda self, name: _swig_getattr(self, svm_model, name) > __repr__ = _swig_repr > __swig_setmethods__["param"] = _svmc.svm_model_param_set > __swig_getmethods__["param"] = _svmc.svm_model_param_get > if _newclass:param = _swig_property(_svmc.svm_model_param_get, 
_svmc.svm_model_param_set) > __swig_setmethods__["nr_class"] = _svmc.svm_model_nr_class_set > __swig_getmethods__["nr_class"] = _svmc.svm_model_nr_class_get > if _newclass:nr_class = _swig_property(_svmc.svm_model_nr_class_get, _svmc.svm_model_nr_class_set) > __swig_setmethods__["l"] = _svmc.svm_model_l_set > __swig_getmethods__["l"] = _svmc.svm_model_l_get > if _newclass:l = _swig_property(_svmc.svm_model_l_get, _svmc.svm_model_l_set) > __swig_setmethods__["SV"] = _svmc.svm_model_SV_set > __swig_getmethods__["SV"] = _svmc.svm_model_SV_get > if _newclass:SV = _swig_property(_svmc.svm_model_SV_get, _svmc.svm_model_SV_set) > __swig_setmethods__["sv_coef"] = _svmc.svm_model_sv_coef_set > __swig_getmethods__["sv_coef"] = _svmc.svm_model_sv_coef_get > if _newclass:sv_coef = _swig_property(_svmc.svm_model_sv_coef_get, _svmc.svm_model_sv_coef_set) > __swig_setmethods__["rho"] = _svmc.svm_model_rho_set > __swig_getmethods__["rho"] = _svmc.svm_model_rho_get > if _newclass:rho = _swig_property(_svmc.svm_model_rho_get, _svmc.svm_model_rho_set) > __swig_setmethods__["probA"] = _svmc.svm_model_probA_set > __swig_getmethods__["probA"] = _svmc.svm_model_probA_get > if _newclass:probA = _swig_property(_svmc.svm_model_probA_get, _svmc.svm_model_probA_set) > __swig_setmethods__["probB"] = _svmc.svm_model_probB_set > __swig_getmethods__["probB"] = _svmc.svm_model_probB_get > if _newclass:probB = _swig_property(_svmc.svm_model_probB_get, _svmc.svm_model_probB_set) > __swig_setmethods__["label"] = _svmc.svm_model_label_set > __swig_getmethods__["label"] = _svmc.svm_model_label_get > if _newclass:label = _swig_property(_svmc.svm_model_label_get, _svmc.svm_model_label_set) > __swig_setmethods__["nSV"] = _svmc.svm_model_nSV_set > __swig_getmethods__["nSV"] = _svmc.svm_model_nSV_get > if _newclass:nSV = _swig_property(_svmc.svm_model_nSV_get, _svmc.svm_model_nSV_set) > __swig_setmethods__["free_sv"] = _svmc.svm_model_free_sv_set > __swig_getmethods__["free_sv"] = _svmc.svm_model_free_sv_get > if _newclass:free_sv = _swig_property(_svmc.svm_model_free_sv_get, _svmc.svm_model_free_sv_set) > def __init__(self): > this = _svmc.new_svm_model() > try: self.this.append(this) > except: self.this = this > __swig_destroy__ = _svmc.delete_svm_model > __del__ = lambda self : None; > svm_model_swigregister = _svmc.svm_model_swigregister > svm_model_swigregister(svm_model) > > > def svm_set_verbosity(*args): > return _svmc.svm_set_verbosity(*args) > svm_set_verbosity = _svmc.svm_set_verbosity > > def svm_train(*args): > return _svmc.svm_train(*args) > svm_train = _svmc.svm_train > > def svm_cross_validation(*args): > return _svmc.svm_cross_validation(*args) > svm_cross_validation = _svmc.svm_cross_validation > > def svm_save_model(*args): > return _svmc.svm_save_model(*args) > svm_save_model = _svmc.svm_save_model > > def svm_load_model(*args): > return _svmc.svm_load_model(*args) > svm_load_model = _svmc.svm_load_model > > def svm_get_svm_type(*args): > return _svmc.svm_get_svm_type(*args) > svm_get_svm_type = _svmc.svm_get_svm_type > > def svm_get_nr_class(*args): > return _svmc.svm_get_nr_class(*args) > svm_get_nr_class = _svmc.svm_get_nr_class > > def svm_get_labels(*args): > return _svmc.svm_get_labels(*args) > svm_get_labels = _svmc.svm_get_labels > > def svm_get_svr_probability(*args): > return _svmc.svm_get_svr_probability(*args) > svm_get_svr_probability = _svmc.svm_get_svr_probability > > def svm_predict_values(*args): > return _svmc.svm_predict_values(*args) > svm_predict_values = _svmc.svm_predict_values > > def 
svm_predict(*args): > return _svmc.svm_predict(*args) > svm_predict = _svmc.svm_predict > > def svm_predict_probability(*args): > return _svmc.svm_predict_probability(*args) > svm_predict_probability = _svmc.svm_predict_probability > > def svm_check_parameter(*args): > return _svmc.svm_check_parameter(*args) > svm_check_parameter = _svmc.svm_check_parameter > > def svm_check_probability_model(*args): > return _svmc.svm_check_probability_model(*args) > svm_check_probability_model = _svmc.svm_check_probability_model > > def svm_node_matrix2numpy_array(*args): > return _svmc.svm_node_matrix2numpy_array(*args) > svm_node_matrix2numpy_array = _svmc.svm_node_matrix2numpy_array > > def doubleppcarray2numpy_array(*args): > return _svmc.doubleppcarray2numpy_array(*args) > doubleppcarray2numpy_array = _svmc.doubleppcarray2numpy_array > > def new_int(*args): > return _svmc.new_int(*args) > new_int = _svmc.new_int > > def delete_int(*args): > return _svmc.delete_int(*args) > delete_int = _svmc.delete_int > > def int_getitem(*args): > return _svmc.int_getitem(*args) > int_getitem = _svmc.int_getitem > > def int_setitem(*args): > return _svmc.int_setitem(*args) > int_setitem = _svmc.int_setitem > > def new_double(*args): > return _svmc.new_double(*args) > new_double = _svmc.new_double > > def delete_double(*args): > return _svmc.delete_double(*args) > delete_double = _svmc.delete_double > > def double_getitem(*args): > return _svmc.double_getitem(*args) > double_getitem = _svmc.double_getitem > > def double_setitem(*args): > return _svmc.double_setitem(*args) > double_setitem = _svmc.double_setitem > > def svm_node_array(*args): > return _svmc.svm_node_array(*args) > svm_node_array = _svmc.svm_node_array > > def svm_node_array_set(*args): > return _svmc.svm_node_array_set(*args) > svm_node_array_set = _svmc.svm_node_array_set > > def svm_node_array_destroy(*args): > return _svmc.svm_node_array_destroy(*args) > svm_node_array_destroy = _svmc.svm_node_array_destroy > > def svm_node_matrix(*args): > return _svmc.svm_node_matrix(*args) > svm_node_matrix = _svmc.svm_node_matrix > > def svm_node_matrix_set(*args): > return _svmc.svm_node_matrix_set(*args) > svm_node_matrix_set = _svmc.svm_node_matrix_set > > def svm_node_matrix_destroy(*args): > return _svmc.svm_node_matrix_destroy(*args) > svm_node_matrix_destroy = _svmc.svm_node_matrix_destroy > > def svm_destroy_model_helper(*args): > return _svmc.svm_destroy_model_helper(*args) > svm_destroy_model_helper = _svmc.svm_destroy_model_helper > # This file is compatible with both classic and new-style classes. > > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa From boly.melanie at gmail.com Thu Jul 30 15:33:27 2015 From: boly.melanie at gmail.com (Melanie Boly) Date: Thu, 30 Jul 2015 10:33:27 -0500 Subject: [pymvpa] LinearCSVMC not found in PyMVPA-upstream-2.4.0 release In-Reply-To: References: Message-ID: <3A49E857-BFB9-4839-B800-8CC6C9EE598B@gmail.com> sok found it and with Swig3.0.4 now it works! thanks so much to everybody, Melanie > On Jul 30, 2015, at 10:15 AM, Melanie Boly wrote: > > Dear Nick, > Thank you so much for your comments; trying to apply your > recommendation, but missing one simple step: what is the 'PyMVPA root > directory'? 
> Thanks so much, > Melanie > > On Sun, Jul 19, 2015 at 8:44 AM, Nick Oosterhof > wrote: >> >>> On 15 Jul 2015, at 18:46, Feilong Ma wrote: >>> >>> I had a similar problem while installing PyMVPA on Mac OS (10.10.4). I think the problem is related to this line: >>> https://github.com/PyMVPA/PyMVPA/blob/master/mvpa2/clfs/libsvmc/_svm.py#L22 >>> >>> When I tried to run this line in ipython >>> from mvpa2.clfs.libsvmc._svmc import C_SVC, NU_SVC, ONE_CLASS, EPSILON_SVR >>> What I got is: >>> ImportError: cannot import name C_SVC >> >> I get the same error. Briefly (see below for details), it seems due to a change in SWIG, with later versions giving issues. >> >> When running "python setup.py build_ext? and copying over the .o and .so files from the build directory to PyMVPA?s root directory (across the corresponding subdirectories), the following reproduces the error directly: >> >> python -c "from mvpa2.clfs.libsvmc._svmc import C_SVC? >> >> Strangely enough, the following works for the failing PyMVPA installation (but not for the working one): >> >> python -c "from mvpa2.clfs.libsvmc._svmc import C_SVC_swigconstant? >> >> Digging a bit further, the mvpa2/clfs/libsvmc/svmc.py file differs between my ?working? (generated using SWIG 3.0.2) and ?failing? (SWIG 3.0.6) PyMVPA setup. One difference is that the working version has contents such as >> >> C_SVC = _svmc.C_SVC >> >> whereas the failing version has extra lines that includes ?swigconstant? >> >> _svmc.C_SVC_swigconstant(_svmc) >> C_SVC = _svmc.C_SVC >> >> (For completeness I?m including the full content of both versions below. ) >> >> Tracing this back further, I compiled swig from source, both for the latest version on github and for version 3.0.0 (version 3.0.2 gave an error when compiling). When using 3.0.0, the import works fine; with 3.0.6 or the latest (3.0.7 development) it breaks. >> >>> >>> I guess the problem might be related to compiling LibSVM. I vaguely remember there was some error messages with CLANG blah blah. >> >> I installed GCC 5.1 and get the same problem as when using CLANG. >> >> To summarize, the following worked for me to get libsvm to work on OS X Yosemite: >> >> - clone swig from https://github.com/swig/swig, then ?git checkout -tag tags/tags/rel-3.0.0? >> - in the swig directory, run ?autoconf && ./configure && make && sudo make install? (although it gives an error when installing the man-pages due to missing yodl2man, the binaries are installed fine). This requires autoconf, automake and libconf. >> - in the PyMVPA directory, run "python setup.py build_ext? >> - copy the .so and .o files from the build directory to the PyMVPA root directory, for example in the PyMVPA root directory do "for ext in .so .o; do for i in `find build -iname "*${ext}"`; do j=`echo $i | cut -f3- -d/`; cp $i $j; done; done? >> >> If anyone can confirm that using an earlier version of SWIG fixes the problem, that would be great. In that case I can also raise the issue with the developers. >> >> >> >> (Below: contents of mvpa2/clfs/libsvmc/svmc.py for working and failing libsvm in PyMVPA) >> >> ################ >> # *Failing* mvpa2/clfs/libsvmc/svmc.py >> ################ >> >> # This file was automatically generated by SWIG (http://www.swig.org). >> # Version 3.0.6 >> # >> # Do not make changes to this file unless you know what you are doing--modify >> # the SWIG interface file instead. 
>> >> >> >> >> >> from sys import version_info >> if version_info >= (2, 6, 0): >> def swig_import_helper(): >> from os.path import dirname >> import imp >> fp = None >> try: >> fp, pathname, description = imp.find_module('_svmc', [dirname(__file__)]) >> except ImportError: >> import _svmc >> return _svmc >> if fp is not None: >> try: >> _mod = imp.load_module('_svmc', fp, pathname, description) >> finally: >> fp.close() >> return _mod >> _svmc = swig_import_helper() >> del swig_import_helper >> else: >> import _svmc >> del version_info >> try: >> _swig_property = property >> except NameError: >> pass # Python < 2.2 doesn't have 'property'. >> >> >> def _swig_setattr_nondynamic(self, class_type, name, value, static=1): >> if (name == "thisown"): >> return self.this.own(value) >> if (name == "this"): >> if type(value).__name__ == 'SwigPyObject': >> self.__dict__[name] = value >> return >> method = class_type.__swig_setmethods__.get(name, None) >> if method: >> return method(self, value) >> if (not static): >> if _newclass: >> object.__setattr__(self, name, value) >> else: >> self.__dict__[name] = value >> else: >> raise AttributeError("You cannot add attributes to %s" % self) >> >> >> def _swig_setattr(self, class_type, name, value): >> return _swig_setattr_nondynamic(self, class_type, name, value, 0) >> >> >> def _swig_getattr_nondynamic(self, class_type, name, static=1): >> if (name == "thisown"): >> return self.this.own() >> method = class_type.__swig_getmethods__.get(name, None) >> if method: >> return method(self) >> if (not static): >> return object.__getattr__(self, name) >> else: >> raise AttributeError(name) >> >> def _swig_getattr(self, class_type, name): >> return _swig_getattr_nondynamic(self, class_type, name, 0) >> >> >> def _swig_repr(self): >> try: >> strthis = "proxy of " + self.this.__repr__() >> except: >> strthis = "" >> return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) >> >> try: >> _object = object >> _newclass = 1 >> except AttributeError: >> class _object: >> pass >> _newclass = 0 >> >> >> >> _svmc.__version___swigconstant(_svmc) >> __version__ = _svmc.__version__ >> >> _svmc.C_SVC_swigconstant(_svmc) >> C_SVC = _svmc.C_SVC >> >> _svmc.NU_SVC_swigconstant(_svmc) >> NU_SVC = _svmc.NU_SVC >> >> _svmc.ONE_CLASS_swigconstant(_svmc) >> ONE_CLASS = _svmc.ONE_CLASS >> >> _svmc.EPSILON_SVR_swigconstant(_svmc) >> EPSILON_SVR = _svmc.EPSILON_SVR >> >> _svmc.NU_SVR_swigconstant(_svmc) >> NU_SVR = _svmc.NU_SVR >> >> _svmc.LINEAR_swigconstant(_svmc) >> LINEAR = _svmc.LINEAR >> >> _svmc.POLY_swigconstant(_svmc) >> POLY = _svmc.POLY >> >> _svmc.RBF_swigconstant(_svmc) >> RBF = _svmc.RBF >> >> _svmc.SIGMOID_swigconstant(_svmc) >> SIGMOID = _svmc.SIGMOID >> >> _svmc.PRECOMPUTED_swigconstant(_svmc) >> PRECOMPUTED = _svmc.PRECOMPUTED >> class svm_parameter(_object): >> __swig_setmethods__ = {} >> __setattr__ = lambda self, name, value: _swig_setattr(self, svm_parameter, name, value) >> __swig_getmethods__ = {} >> __getattr__ = lambda self, name: _swig_getattr(self, svm_parameter, name) >> __repr__ = _swig_repr >> __swig_setmethods__["svm_type"] = _svmc.svm_parameter_svm_type_set >> __swig_getmethods__["svm_type"] = _svmc.svm_parameter_svm_type_get >> if _newclass: >> svm_type = _swig_property(_svmc.svm_parameter_svm_type_get, _svmc.svm_parameter_svm_type_set) >> __swig_setmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_set >> __swig_getmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_get >> if _newclass: >> kernel_type = 
_swig_property(_svmc.svm_parameter_kernel_type_get, _svmc.svm_parameter_kernel_type_set) >> __swig_setmethods__["degree"] = _svmc.svm_parameter_degree_set >> __swig_getmethods__["degree"] = _svmc.svm_parameter_degree_get >> if _newclass: >> degree = _swig_property(_svmc.svm_parameter_degree_get, _svmc.svm_parameter_degree_set) >> __swig_setmethods__["gamma"] = _svmc.svm_parameter_gamma_set >> __swig_getmethods__["gamma"] = _svmc.svm_parameter_gamma_get >> if _newclass: >> gamma = _swig_property(_svmc.svm_parameter_gamma_get, _svmc.svm_parameter_gamma_set) >> __swig_setmethods__["coef0"] = _svmc.svm_parameter_coef0_set >> __swig_getmethods__["coef0"] = _svmc.svm_parameter_coef0_get >> if _newclass: >> coef0 = _swig_property(_svmc.svm_parameter_coef0_get, _svmc.svm_parameter_coef0_set) >> __swig_setmethods__["cache_size"] = _svmc.svm_parameter_cache_size_set >> __swig_getmethods__["cache_size"] = _svmc.svm_parameter_cache_size_get >> if _newclass: >> cache_size = _swig_property(_svmc.svm_parameter_cache_size_get, _svmc.svm_parameter_cache_size_set) >> __swig_setmethods__["eps"] = _svmc.svm_parameter_eps_set >> __swig_getmethods__["eps"] = _svmc.svm_parameter_eps_get >> if _newclass: >> eps = _swig_property(_svmc.svm_parameter_eps_get, _svmc.svm_parameter_eps_set) >> __swig_setmethods__["C"] = _svmc.svm_parameter_C_set >> __swig_getmethods__["C"] = _svmc.svm_parameter_C_get >> if _newclass: >> C = _swig_property(_svmc.svm_parameter_C_get, _svmc.svm_parameter_C_set) >> __swig_setmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_set >> __swig_getmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_get >> if _newclass: >> nr_weight = _swig_property(_svmc.svm_parameter_nr_weight_get, _svmc.svm_parameter_nr_weight_set) >> __swig_setmethods__["weight_label"] = _svmc.svm_parameter_weight_label_set >> __swig_getmethods__["weight_label"] = _svmc.svm_parameter_weight_label_get >> if _newclass: >> weight_label = _swig_property(_svmc.svm_parameter_weight_label_get, _svmc.svm_parameter_weight_label_set) >> __swig_setmethods__["weight"] = _svmc.svm_parameter_weight_set >> __swig_getmethods__["weight"] = _svmc.svm_parameter_weight_get >> if _newclass: >> weight = _swig_property(_svmc.svm_parameter_weight_get, _svmc.svm_parameter_weight_set) >> __swig_setmethods__["nu"] = _svmc.svm_parameter_nu_set >> __swig_getmethods__["nu"] = _svmc.svm_parameter_nu_get >> if _newclass: >> nu = _swig_property(_svmc.svm_parameter_nu_get, _svmc.svm_parameter_nu_set) >> __swig_setmethods__["p"] = _svmc.svm_parameter_p_set >> __swig_getmethods__["p"] = _svmc.svm_parameter_p_get >> if _newclass: >> p = _swig_property(_svmc.svm_parameter_p_get, _svmc.svm_parameter_p_set) >> __swig_setmethods__["shrinking"] = _svmc.svm_parameter_shrinking_set >> __swig_getmethods__["shrinking"] = _svmc.svm_parameter_shrinking_get >> if _newclass: >> shrinking = _swig_property(_svmc.svm_parameter_shrinking_get, _svmc.svm_parameter_shrinking_set) >> __swig_setmethods__["probability"] = _svmc.svm_parameter_probability_set >> __swig_getmethods__["probability"] = _svmc.svm_parameter_probability_get >> if _newclass: >> probability = _swig_property(_svmc.svm_parameter_probability_get, _svmc.svm_parameter_probability_set) >> >> def __init__(self): >> this = _svmc.new_svm_parameter() >> try: >> self.this.append(this) >> except: >> self.this = this >> __swig_destroy__ = _svmc.delete_svm_parameter >> __del__ = lambda self: None >> svm_parameter_swigregister = _svmc.svm_parameter_swigregister >> svm_parameter_swigregister(svm_parameter) >> >> class 
svm_problem(_object): >> __swig_setmethods__ = {} >> __setattr__ = lambda self, name, value: _swig_setattr(self, svm_problem, name, value) >> __swig_getmethods__ = {} >> __getattr__ = lambda self, name: _swig_getattr(self, svm_problem, name) >> __repr__ = _swig_repr >> __swig_setmethods__["l"] = _svmc.svm_problem_l_set >> __swig_getmethods__["l"] = _svmc.svm_problem_l_get >> if _newclass: >> l = _swig_property(_svmc.svm_problem_l_get, _svmc.svm_problem_l_set) >> __swig_setmethods__["y"] = _svmc.svm_problem_y_set >> __swig_getmethods__["y"] = _svmc.svm_problem_y_get >> if _newclass: >> y = _swig_property(_svmc.svm_problem_y_get, _svmc.svm_problem_y_set) >> __swig_setmethods__["x"] = _svmc.svm_problem_x_set >> __swig_getmethods__["x"] = _svmc.svm_problem_x_get >> if _newclass: >> x = _swig_property(_svmc.svm_problem_x_get, _svmc.svm_problem_x_set) >> >> def __init__(self): >> this = _svmc.new_svm_problem() >> try: >> self.this.append(this) >> except: >> self.this = this >> __swig_destroy__ = _svmc.delete_svm_problem >> __del__ = lambda self: None >> svm_problem_swigregister = _svmc.svm_problem_swigregister >> svm_problem_swigregister(svm_problem) >> >> class svm_model(_object): >> __swig_setmethods__ = {} >> __setattr__ = lambda self, name, value: _swig_setattr(self, svm_model, name, value) >> __swig_getmethods__ = {} >> __getattr__ = lambda self, name: _swig_getattr(self, svm_model, name) >> __repr__ = _swig_repr >> __swig_setmethods__["param"] = _svmc.svm_model_param_set >> __swig_getmethods__["param"] = _svmc.svm_model_param_get >> if _newclass: >> param = _swig_property(_svmc.svm_model_param_get, _svmc.svm_model_param_set) >> __swig_setmethods__["nr_class"] = _svmc.svm_model_nr_class_set >> __swig_getmethods__["nr_class"] = _svmc.svm_model_nr_class_get >> if _newclass: >> nr_class = _swig_property(_svmc.svm_model_nr_class_get, _svmc.svm_model_nr_class_set) >> __swig_setmethods__["l"] = _svmc.svm_model_l_set >> __swig_getmethods__["l"] = _svmc.svm_model_l_get >> if _newclass: >> l = _swig_property(_svmc.svm_model_l_get, _svmc.svm_model_l_set) >> __swig_setmethods__["SV"] = _svmc.svm_model_SV_set >> __swig_getmethods__["SV"] = _svmc.svm_model_SV_get >> if _newclass: >> SV = _swig_property(_svmc.svm_model_SV_get, _svmc.svm_model_SV_set) >> __swig_setmethods__["sv_coef"] = _svmc.svm_model_sv_coef_set >> __swig_getmethods__["sv_coef"] = _svmc.svm_model_sv_coef_get >> if _newclass: >> sv_coef = _swig_property(_svmc.svm_model_sv_coef_get, _svmc.svm_model_sv_coef_set) >> __swig_setmethods__["rho"] = _svmc.svm_model_rho_set >> __swig_getmethods__["rho"] = _svmc.svm_model_rho_get >> if _newclass: >> rho = _swig_property(_svmc.svm_model_rho_get, _svmc.svm_model_rho_set) >> __swig_setmethods__["probA"] = _svmc.svm_model_probA_set >> __swig_getmethods__["probA"] = _svmc.svm_model_probA_get >> if _newclass: >> probA = _swig_property(_svmc.svm_model_probA_get, _svmc.svm_model_probA_set) >> __swig_setmethods__["probB"] = _svmc.svm_model_probB_set >> __swig_getmethods__["probB"] = _svmc.svm_model_probB_get >> if _newclass: >> probB = _swig_property(_svmc.svm_model_probB_get, _svmc.svm_model_probB_set) >> __swig_setmethods__["label"] = _svmc.svm_model_label_set >> __swig_getmethods__["label"] = _svmc.svm_model_label_get >> if _newclass: >> label = _swig_property(_svmc.svm_model_label_get, _svmc.svm_model_label_set) >> __swig_setmethods__["nSV"] = _svmc.svm_model_nSV_set >> __swig_getmethods__["nSV"] = _svmc.svm_model_nSV_get >> if _newclass: >> nSV = _swig_property(_svmc.svm_model_nSV_get, 
_svmc.svm_model_nSV_set) >> __swig_setmethods__["free_sv"] = _svmc.svm_model_free_sv_set >> __swig_getmethods__["free_sv"] = _svmc.svm_model_free_sv_get >> if _newclass: >> free_sv = _swig_property(_svmc.svm_model_free_sv_get, _svmc.svm_model_free_sv_set) >> >> def __init__(self): >> this = _svmc.new_svm_model() >> try: >> self.this.append(this) >> except: >> self.this = this >> __swig_destroy__ = _svmc.delete_svm_model >> __del__ = lambda self: None >> svm_model_swigregister = _svmc.svm_model_swigregister >> svm_model_swigregister(svm_model) >> >> >> def svm_set_verbosity(verbosity_flag): >> return _svmc.svm_set_verbosity(verbosity_flag) >> svm_set_verbosity = _svmc.svm_set_verbosity >> >> def svm_train(prob, param): >> return _svmc.svm_train(prob, param) >> svm_train = _svmc.svm_train >> >> def svm_cross_validation(prob, param, nr_fold, target): >> return _svmc.svm_cross_validation(prob, param, nr_fold, target) >> svm_cross_validation = _svmc.svm_cross_validation >> >> def svm_save_model(model_file_name, model): >> return _svmc.svm_save_model(model_file_name, model) >> svm_save_model = _svmc.svm_save_model >> >> def svm_load_model(model_file_name): >> return _svmc.svm_load_model(model_file_name) >> svm_load_model = _svmc.svm_load_model >> >> def svm_get_svm_type(model): >> return _svmc.svm_get_svm_type(model) >> svm_get_svm_type = _svmc.svm_get_svm_type >> >> def svm_get_nr_class(model): >> return _svmc.svm_get_nr_class(model) >> svm_get_nr_class = _svmc.svm_get_nr_class >> >> def svm_get_labels(model, label): >> return _svmc.svm_get_labels(model, label) >> svm_get_labels = _svmc.svm_get_labels >> >> def svm_get_svr_probability(model): >> return _svmc.svm_get_svr_probability(model) >> svm_get_svr_probability = _svmc.svm_get_svr_probability >> >> def svm_predict_values(model, x, decvalue): >> return _svmc.svm_predict_values(model, x, decvalue) >> svm_predict_values = _svmc.svm_predict_values >> >> def svm_predict(model, x): >> return _svmc.svm_predict(model, x) >> svm_predict = _svmc.svm_predict >> >> def svm_predict_probability(model, x, prob_estimates): >> return _svmc.svm_predict_probability(model, x, prob_estimates) >> svm_predict_probability = _svmc.svm_predict_probability >> >> def svm_check_parameter(prob, param): >> return _svmc.svm_check_parameter(prob, param) >> svm_check_parameter = _svmc.svm_check_parameter >> >> def svm_check_probability_model(model): >> return _svmc.svm_check_probability_model(model) >> svm_check_probability_model = _svmc.svm_check_probability_model >> >> def svm_node_matrix2numpy_array(matrix, rows, cols): >> return _svmc.svm_node_matrix2numpy_array(matrix, rows, cols) >> svm_node_matrix2numpy_array = _svmc.svm_node_matrix2numpy_array >> >> def doubleppcarray2numpy_array(data, rows, cols): >> return _svmc.doubleppcarray2numpy_array(data, rows, cols) >> doubleppcarray2numpy_array = _svmc.doubleppcarray2numpy_array >> >> def new_int(nelements): >> return _svmc.new_int(nelements) >> new_int = _svmc.new_int >> >> def delete_int(ary): >> return _svmc.delete_int(ary) >> delete_int = _svmc.delete_int >> >> def int_getitem(ary, index): >> return _svmc.int_getitem(ary, index) >> int_getitem = _svmc.int_getitem >> >> def int_setitem(ary, index, value): >> return _svmc.int_setitem(ary, index, value) >> int_setitem = _svmc.int_setitem >> >> def new_double(nelements): >> return _svmc.new_double(nelements) >> new_double = _svmc.new_double >> >> def delete_double(ary): >> return _svmc.delete_double(ary) >> delete_double = _svmc.delete_double >> >> def double_getitem(ary, 
index): >> return _svmc.double_getitem(ary, index) >> double_getitem = _svmc.double_getitem >> >> def double_setitem(ary, index, value): >> return _svmc.double_setitem(ary, index, value) >> double_setitem = _svmc.double_setitem >> >> def svm_node_array(size): >> return _svmc.svm_node_array(size) >> svm_node_array = _svmc.svm_node_array >> >> def svm_node_array_set(*args): >> return _svmc.svm_node_array_set(*args) >> svm_node_array_set = _svmc.svm_node_array_set >> >> def svm_node_array_destroy(array): >> return _svmc.svm_node_array_destroy(array) >> svm_node_array_destroy = _svmc.svm_node_array_destroy >> >> def svm_node_matrix(size): >> return _svmc.svm_node_matrix(size) >> svm_node_matrix = _svmc.svm_node_matrix >> >> def svm_node_matrix_set(matrix, i, array): >> return _svmc.svm_node_matrix_set(matrix, i, array) >> svm_node_matrix_set = _svmc.svm_node_matrix_set >> >> def svm_node_matrix_destroy(matrix): >> return _svmc.svm_node_matrix_destroy(matrix) >> svm_node_matrix_destroy = _svmc.svm_node_matrix_destroy >> >> def svm_destroy_model_helper(model_ptr): >> return _svmc.svm_destroy_model_helper(model_ptr) >> svm_destroy_model_helper = _svmc.svm_destroy_model_helper >> # This file is compatible with both classic and new-style classes. >> >> >> >> >> >> ################ >> # *Working* mvpa2/clfs/libsvmc/svmc.py >> ################ >> >> >> # This file was automatically generated by SWIG (http://www.swig.org). >> # Version 3.0.2 >> # >> # Do not make changes to this file unless you know what you are doing--modify >> # the SWIG interface file instead. >> >> >> >> >> >> from sys import version_info >> if version_info >= (2,6,0): >> def swig_import_helper(): >> from os.path import dirname >> import imp >> fp = None >> try: >> fp, pathname, description = imp.find_module('_svmc', [dirname(__file__)]) >> except ImportError: >> import _svmc >> return _svmc >> if fp is not None: >> try: >> _mod = imp.load_module('_svmc', fp, pathname, description) >> finally: >> fp.close() >> return _mod >> _svmc = swig_import_helper() >> del swig_import_helper >> else: >> import _svmc >> del version_info >> try: >> _swig_property = property >> except NameError: >> pass # Python < 2.2 doesn't have 'property'. 
>> def _swig_setattr_nondynamic(self,class_type,name,value,static=1): >> if (name == "thisown"): return self.this.own(value) >> if (name == "this"): >> if type(value).__name__ == 'SwigPyObject': >> self.__dict__[name] = value >> return >> method = class_type.__swig_setmethods__.get(name,None) >> if method: return method(self,value) >> if (not static): >> self.__dict__[name] = value >> else: >> raise AttributeError("You cannot add attributes to %s" % self) >> >> def _swig_setattr(self,class_type,name,value): >> return _swig_setattr_nondynamic(self,class_type,name,value,0) >> >> def _swig_getattr(self,class_type,name): >> if (name == "thisown"): return self.this.own() >> method = class_type.__swig_getmethods__.get(name,None) >> if method: return method(self) >> raise AttributeError(name) >> >> def _swig_repr(self): >> try: strthis = "proxy of " + self.this.__repr__() >> except: strthis = "" >> return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,) >> >> try: >> _object = object >> _newclass = 1 >> except AttributeError: >> class _object : pass >> _newclass = 0 >> >> >> __version__ = _svmc.__version__ >> C_SVC = _svmc.C_SVC >> NU_SVC = _svmc.NU_SVC >> ONE_CLASS = _svmc.ONE_CLASS >> EPSILON_SVR = _svmc.EPSILON_SVR >> NU_SVR = _svmc.NU_SVR >> LINEAR = _svmc.LINEAR >> POLY = _svmc.POLY >> RBF = _svmc.RBF >> SIGMOID = _svmc.SIGMOID >> PRECOMPUTED = _svmc.PRECOMPUTED >> class svm_parameter(_object): >> __swig_setmethods__ = {} >> __setattr__ = lambda self, name, value: _swig_setattr(self, svm_parameter, name, value) >> __swig_getmethods__ = {} >> __getattr__ = lambda self, name: _swig_getattr(self, svm_parameter, name) >> __repr__ = _swig_repr >> __swig_setmethods__["svm_type"] = _svmc.svm_parameter_svm_type_set >> __swig_getmethods__["svm_type"] = _svmc.svm_parameter_svm_type_get >> if _newclass:svm_type = _swig_property(_svmc.svm_parameter_svm_type_get, _svmc.svm_parameter_svm_type_set) >> __swig_setmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_set >> __swig_getmethods__["kernel_type"] = _svmc.svm_parameter_kernel_type_get >> if _newclass:kernel_type = _swig_property(_svmc.svm_parameter_kernel_type_get, _svmc.svm_parameter_kernel_type_set) >> __swig_setmethods__["degree"] = _svmc.svm_parameter_degree_set >> __swig_getmethods__["degree"] = _svmc.svm_parameter_degree_get >> if _newclass:degree = _swig_property(_svmc.svm_parameter_degree_get, _svmc.svm_parameter_degree_set) >> __swig_setmethods__["gamma"] = _svmc.svm_parameter_gamma_set >> __swig_getmethods__["gamma"] = _svmc.svm_parameter_gamma_get >> if _newclass:gamma = _swig_property(_svmc.svm_parameter_gamma_get, _svmc.svm_parameter_gamma_set) >> __swig_setmethods__["coef0"] = _svmc.svm_parameter_coef0_set >> __swig_getmethods__["coef0"] = _svmc.svm_parameter_coef0_get >> if _newclass:coef0 = _swig_property(_svmc.svm_parameter_coef0_get, _svmc.svm_parameter_coef0_set) >> __swig_setmethods__["cache_size"] = _svmc.svm_parameter_cache_size_set >> __swig_getmethods__["cache_size"] = _svmc.svm_parameter_cache_size_get >> if _newclass:cache_size = _swig_property(_svmc.svm_parameter_cache_size_get, _svmc.svm_parameter_cache_size_set) >> __swig_setmethods__["eps"] = _svmc.svm_parameter_eps_set >> __swig_getmethods__["eps"] = _svmc.svm_parameter_eps_get >> if _newclass:eps = _swig_property(_svmc.svm_parameter_eps_get, _svmc.svm_parameter_eps_set) >> __swig_setmethods__["C"] = _svmc.svm_parameter_C_set >> __swig_getmethods__["C"] = _svmc.svm_parameter_C_get >> if _newclass:C = 
_swig_property(_svmc.svm_parameter_C_get, _svmc.svm_parameter_C_set) >> __swig_setmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_set >> __swig_getmethods__["nr_weight"] = _svmc.svm_parameter_nr_weight_get >> if _newclass:nr_weight = _swig_property(_svmc.svm_parameter_nr_weight_get, _svmc.svm_parameter_nr_weight_set) >> __swig_setmethods__["weight_label"] = _svmc.svm_parameter_weight_label_set >> __swig_getmethods__["weight_label"] = _svmc.svm_parameter_weight_label_get >> if _newclass:weight_label = _swig_property(_svmc.svm_parameter_weight_label_get, _svmc.svm_parameter_weight_label_set) >> __swig_setmethods__["weight"] = _svmc.svm_parameter_weight_set >> __swig_getmethods__["weight"] = _svmc.svm_parameter_weight_get >> if _newclass:weight = _swig_property(_svmc.svm_parameter_weight_get, _svmc.svm_parameter_weight_set) >> __swig_setmethods__["nu"] = _svmc.svm_parameter_nu_set >> __swig_getmethods__["nu"] = _svmc.svm_parameter_nu_get >> if _newclass:nu = _swig_property(_svmc.svm_parameter_nu_get, _svmc.svm_parameter_nu_set) >> __swig_setmethods__["p"] = _svmc.svm_parameter_p_set >> __swig_getmethods__["p"] = _svmc.svm_parameter_p_get >> if _newclass:p = _swig_property(_svmc.svm_parameter_p_get, _svmc.svm_parameter_p_set) >> __swig_setmethods__["shrinking"] = _svmc.svm_parameter_shrinking_set >> __swig_getmethods__["shrinking"] = _svmc.svm_parameter_shrinking_get >> if _newclass:shrinking = _swig_property(_svmc.svm_parameter_shrinking_get, _svmc.svm_parameter_shrinking_set) >> __swig_setmethods__["probability"] = _svmc.svm_parameter_probability_set >> __swig_getmethods__["probability"] = _svmc.svm_parameter_probability_get >> if _newclass:probability = _swig_property(_svmc.svm_parameter_probability_get, _svmc.svm_parameter_probability_set) >> def __init__(self): >> this = _svmc.new_svm_parameter() >> try: self.this.append(this) >> except: self.this = this >> __swig_destroy__ = _svmc.delete_svm_parameter >> __del__ = lambda self : None; >> svm_parameter_swigregister = _svmc.svm_parameter_swigregister >> svm_parameter_swigregister(svm_parameter) >> >> class svm_problem(_object): >> __swig_setmethods__ = {} >> __setattr__ = lambda self, name, value: _swig_setattr(self, svm_problem, name, value) >> __swig_getmethods__ = {} >> __getattr__ = lambda self, name: _swig_getattr(self, svm_problem, name) >> __repr__ = _swig_repr >> __swig_setmethods__["l"] = _svmc.svm_problem_l_set >> __swig_getmethods__["l"] = _svmc.svm_problem_l_get >> if _newclass:l = _swig_property(_svmc.svm_problem_l_get, _svmc.svm_problem_l_set) >> __swig_setmethods__["y"] = _svmc.svm_problem_y_set >> __swig_getmethods__["y"] = _svmc.svm_problem_y_get >> if _newclass:y = _swig_property(_svmc.svm_problem_y_get, _svmc.svm_problem_y_set) >> __swig_setmethods__["x"] = _svmc.svm_problem_x_set >> __swig_getmethods__["x"] = _svmc.svm_problem_x_get >> if _newclass:x = _swig_property(_svmc.svm_problem_x_get, _svmc.svm_problem_x_set) >> def __init__(self): >> this = _svmc.new_svm_problem() >> try: self.this.append(this) >> except: self.this = this >> __swig_destroy__ = _svmc.delete_svm_problem >> __del__ = lambda self : None; >> svm_problem_swigregister = _svmc.svm_problem_swigregister >> svm_problem_swigregister(svm_problem) >> >> class svm_model(_object): >> __swig_setmethods__ = {} >> __setattr__ = lambda self, name, value: _swig_setattr(self, svm_model, name, value) >> __swig_getmethods__ = {} >> __getattr__ = lambda self, name: _swig_getattr(self, svm_model, name) >> __repr__ = _swig_repr >> __swig_setmethods__["param"] = 
_svmc.svm_model_param_set >> __swig_getmethods__["param"] = _svmc.svm_model_param_get >> if _newclass:param = _swig_property(_svmc.svm_model_param_get, _svmc.svm_model_param_set) >> __swig_setmethods__["nr_class"] = _svmc.svm_model_nr_class_set >> __swig_getmethods__["nr_class"] = _svmc.svm_model_nr_class_get >> if _newclass:nr_class = _swig_property(_svmc.svm_model_nr_class_get, _svmc.svm_model_nr_class_set) >> __swig_setmethods__["l"] = _svmc.svm_model_l_set >> __swig_getmethods__["l"] = _svmc.svm_model_l_get >> if _newclass:l = _swig_property(_svmc.svm_model_l_get, _svmc.svm_model_l_set) >> __swig_setmethods__["SV"] = _svmc.svm_model_SV_set >> __swig_getmethods__["SV"] = _svmc.svm_model_SV_get >> if _newclass:SV = _swig_property(_svmc.svm_model_SV_get, _svmc.svm_model_SV_set) >> __swig_setmethods__["sv_coef"] = _svmc.svm_model_sv_coef_set >> __swig_getmethods__["sv_coef"] = _svmc.svm_model_sv_coef_get >> if _newclass:sv_coef = _swig_property(_svmc.svm_model_sv_coef_get, _svmc.svm_model_sv_coef_set) >> __swig_setmethods__["rho"] = _svmc.svm_model_rho_set >> __swig_getmethods__["rho"] = _svmc.svm_model_rho_get >> if _newclass:rho = _swig_property(_svmc.svm_model_rho_get, _svmc.svm_model_rho_set) >> __swig_setmethods__["probA"] = _svmc.svm_model_probA_set >> __swig_getmethods__["probA"] = _svmc.svm_model_probA_get >> if _newclass:probA = _swig_property(_svmc.svm_model_probA_get, _svmc.svm_model_probA_set) >> __swig_setmethods__["probB"] = _svmc.svm_model_probB_set >> __swig_getmethods__["probB"] = _svmc.svm_model_probB_get >> if _newclass:probB = _swig_property(_svmc.svm_model_probB_get, _svmc.svm_model_probB_set) >> __swig_setmethods__["label"] = _svmc.svm_model_label_set >> __swig_getmethods__["label"] = _svmc.svm_model_label_get >> if _newclass:label = _swig_property(_svmc.svm_model_label_get, _svmc.svm_model_label_set) >> __swig_setmethods__["nSV"] = _svmc.svm_model_nSV_set >> __swig_getmethods__["nSV"] = _svmc.svm_model_nSV_get >> if _newclass:nSV = _swig_property(_svmc.svm_model_nSV_get, _svmc.svm_model_nSV_set) >> __swig_setmethods__["free_sv"] = _svmc.svm_model_free_sv_set >> __swig_getmethods__["free_sv"] = _svmc.svm_model_free_sv_get >> if _newclass:free_sv = _swig_property(_svmc.svm_model_free_sv_get, _svmc.svm_model_free_sv_set) >> def __init__(self): >> this = _svmc.new_svm_model() >> try: self.this.append(this) >> except: self.this = this >> __swig_destroy__ = _svmc.delete_svm_model >> __del__ = lambda self : None; >> svm_model_swigregister = _svmc.svm_model_swigregister >> svm_model_swigregister(svm_model) >> >> >> def svm_set_verbosity(*args): >> return _svmc.svm_set_verbosity(*args) >> svm_set_verbosity = _svmc.svm_set_verbosity >> >> def svm_train(*args): >> return _svmc.svm_train(*args) >> svm_train = _svmc.svm_train >> >> def svm_cross_validation(*args): >> return _svmc.svm_cross_validation(*args) >> svm_cross_validation = _svmc.svm_cross_validation >> >> def svm_save_model(*args): >> return _svmc.svm_save_model(*args) >> svm_save_model = _svmc.svm_save_model >> >> def svm_load_model(*args): >> return _svmc.svm_load_model(*args) >> svm_load_model = _svmc.svm_load_model >> >> def svm_get_svm_type(*args): >> return _svmc.svm_get_svm_type(*args) >> svm_get_svm_type = _svmc.svm_get_svm_type >> >> def svm_get_nr_class(*args): >> return _svmc.svm_get_nr_class(*args) >> svm_get_nr_class = _svmc.svm_get_nr_class >> >> def svm_get_labels(*args): >> return _svmc.svm_get_labels(*args) >> svm_get_labels = _svmc.svm_get_labels >> >> def svm_get_svr_probability(*args): >> return 
_svmc.svm_get_svr_probability(*args) >> svm_get_svr_probability = _svmc.svm_get_svr_probability >> >> def svm_predict_values(*args): >> return _svmc.svm_predict_values(*args) >> svm_predict_values = _svmc.svm_predict_values >> >> def svm_predict(*args): >> return _svmc.svm_predict(*args) >> svm_predict = _svmc.svm_predict >> >> def svm_predict_probability(*args): >> return _svmc.svm_predict_probability(*args) >> svm_predict_probability = _svmc.svm_predict_probability >> >> def svm_check_parameter(*args): >> return _svmc.svm_check_parameter(*args) >> svm_check_parameter = _svmc.svm_check_parameter >> >> def svm_check_probability_model(*args): >> return _svmc.svm_check_probability_model(*args) >> svm_check_probability_model = _svmc.svm_check_probability_model >> >> def svm_node_matrix2numpy_array(*args): >> return _svmc.svm_node_matrix2numpy_array(*args) >> svm_node_matrix2numpy_array = _svmc.svm_node_matrix2numpy_array >> >> def doubleppcarray2numpy_array(*args): >> return _svmc.doubleppcarray2numpy_array(*args) >> doubleppcarray2numpy_array = _svmc.doubleppcarray2numpy_array >> >> def new_int(*args): >> return _svmc.new_int(*args) >> new_int = _svmc.new_int >> >> def delete_int(*args): >> return _svmc.delete_int(*args) >> delete_int = _svmc.delete_int >> >> def int_getitem(*args): >> return _svmc.int_getitem(*args) >> int_getitem = _svmc.int_getitem >> >> def int_setitem(*args): >> return _svmc.int_setitem(*args) >> int_setitem = _svmc.int_setitem >> >> def new_double(*args): >> return _svmc.new_double(*args) >> new_double = _svmc.new_double >> >> def delete_double(*args): >> return _svmc.delete_double(*args) >> delete_double = _svmc.delete_double >> >> def double_getitem(*args): >> return _svmc.double_getitem(*args) >> double_getitem = _svmc.double_getitem >> >> def double_setitem(*args): >> return _svmc.double_setitem(*args) >> double_setitem = _svmc.double_setitem >> >> def svm_node_array(*args): >> return _svmc.svm_node_array(*args) >> svm_node_array = _svmc.svm_node_array >> >> def svm_node_array_set(*args): >> return _svmc.svm_node_array_set(*args) >> svm_node_array_set = _svmc.svm_node_array_set >> >> def svm_node_array_destroy(*args): >> return _svmc.svm_node_array_destroy(*args) >> svm_node_array_destroy = _svmc.svm_node_array_destroy >> >> def svm_node_matrix(*args): >> return _svmc.svm_node_matrix(*args) >> svm_node_matrix = _svmc.svm_node_matrix >> >> def svm_node_matrix_set(*args): >> return _svmc.svm_node_matrix_set(*args) >> svm_node_matrix_set = _svmc.svm_node_matrix_set >> >> def svm_node_matrix_destroy(*args): >> return _svmc.svm_node_matrix_destroy(*args) >> svm_node_matrix_destroy = _svmc.svm_node_matrix_destroy >> >> def svm_destroy_model_helper(*args): >> return _svmc.svm_destroy_model_helper(*args) >> svm_destroy_model_helper = _svmc.svm_destroy_model_helper >> # This file is compatible with both classic and new-style classes. >> >> >> _______________________________________________ >> Pkg-ExpPsy-PyMVPA mailing list >> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa From billbrod at gmail.com Mon Aug 10 20:51:33 2015 From: billbrod at gmail.com (Bill Broderick) Date: Mon, 10 Aug 2015 16:51:33 -0400 Subject: [pymvpa] Permutation testing and Nipype Message-ID: Hi all, I was wondering if anyone on this list has used PyMVPA with Nipype for permutation testing. I'm attempting to do so now, but am running into timing issues (which I'm asking the Nipype folks about here ). 
Has anyone had any luck getting results in a reasonable time combining the two? If so, how?

Thanks,
Bill
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From debian at onerussian.com Mon Aug 10 21:33:20 2015
From: debian at onerussian.com (Yaroslav Halchenko)
Date: Mon, 10 Aug 2015 17:33:20 -0400
Subject: [pymvpa] Permutation testing and Nipype
In-Reply-To:
References:
Message-ID: <20150810213320.GT28964@onerussian.com>

On Mon, 10 Aug 2015, Bill Broderick wrote:

> I was wondering if anyone on this list has used PyMVPA with Nipype for
> permutation testing. I'm attempting to do so now, but am running into
> timing issues (which I'm asking the Nipype folks about here).
> Has anyone had any luck getting results in a reasonable time combining the
> two? If so, how?

it would help to know what/at what level you are permuting etc, and what is that timing issue (does nipype kill tasks if they run "too" long, unlikely)?

--
Yaroslav O. Halchenko, Ph.D.
http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
Research Scientist, Psychological and Brain Sciences Dept.
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
WWW: http://www.linkedin.com/in/yarik

From billbrod at gmail.com Tue Aug 11 16:00:38 2015
From: billbrod at gmail.com (Bill Broderick)
Date: Tue, 11 Aug 2015 12:00:38 -0400
Subject: [pymvpa] Permutation testing and Nipype
In-Reply-To: <20150810213320.GT28964@onerussian.com>
References: <20150810213320.GT28964@onerussian.com>
Message-ID:

On Mon, Aug 10, 2015 at 5:33 PM, Yaroslav Halchenko wrote:
> it would help to know what/at what level you are permuting etc,
> and what is that timing issue (does nipype kill tasks if they run "too"
> long, unlikely)?

I'm running my analysis with leave-one-subject-out cross-validation (so combining all runs for each subject), permuting the labels in the training set in two categories 100 times. I originally was running the whole brain in one job, but found that took too long (didn't get killed by nipype or our SGE cluster, but it was taking too long to be feasible), so I'm using sphere_searchlight's center_ids option to split permutation testing into a bunch of smaller jobs, each with about 5 searchlights. Here's what my function looks like:

clf = LinearCSVMC()
repeater = Repeater(count=100)
permutator = AttributePermutator('targets',limit={'partitions':1},count=1)
nf = NFoldPartitioner(attr='subject')
null_cv = CrossValidation(clf,ChainNode([nf,permutator],space=nf.get_space()),errorfx=mean_mismatch_error)
distr_est = MCNullDist(repeater,tail='left',measure=null_cv,enable_ca=['dist_samples'])
cv = CrossValidation(clf,nf,null_dist=distr_est,pass_attr=[('ca.null_prob','fa',1)],errorfx=mean_mismatch_error)
sl = sphere_searchlight(cv,radius=3,center_ids=range(sl_range[0],sl_range[1]),enable_ca='roi_sizes',pass_attr=[('ca.roi_sizes','fa')])
sl_res = sl(ds)
null_dist = cv.null_dist.ca.dist_samples

where sl_range is a tuple, passed to the function, defining which searchlights to run.
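For reference, a minimal sketch (the driver code and file names here are hypothetical, not part of the function above) of how such sl_range tuples could be generated and the per-job outputs stitched back into a whole-brain map once all jobs are done:

# hypothetical driver: build (start, stop) center-id ranges covering all features
step = 5                                   # roughly 5 searchlight centers per job
n_centers = ds.nfeatures
sl_ranges = [(i, min(i + step, n_centers)) for i in range(0, n_centers, step)]

# each cluster job runs the function above on one sl_range and saves its result, e.g.
#   h5save('sl_res_%d_%d.hdf5' % sl_range, sl_res)

# once every job has finished, concatenate the per-center maps feature-wise
from mvpa2.base.dataset import hstack
from mvpa2.suite import h5load
per_job = [h5load('sl_res_%d_%d.hdf5' % r) for r in sl_ranges]   # keep center-id order
full_map = hstack(per_job)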
When I split this into about 5000 jobs, I ran into an issue with Nipype where each of these jobs would finish running (in about 1.5 hours) but the Nipype master job that spawned them would take a very long time to realize they were done (as in, it would find one an hour), so it never finished and moved on. If I split this into fewer jobs, it doesn't run into this issue, but each job takes a lot longer. So either I could figure out what's going on with Nipype or could just not take as long for permutations. Is that clear? Has anyone run into similar issues or found a way to run the permutation faster? Thanks, Bill From ronimaimon at gmail.com Tue Aug 11 17:18:13 2015 From: ronimaimon at gmail.com (Roni Maimon) Date: Tue, 11 Aug 2015 20:18:13 +0300 Subject: [pymvpa] Searchlight statistical inference Message-ID: Hi all, I'm rather new to pyMVPA and I would love to get your help and feedback. I'm trying do understand the different procedures of statistical inference, I can achieve for whole brain searchlight analysis, using pyMVPA. I started by implementing the inference at the subject level (attaching the code). Is this how I'm supposed to evaluate the p values of the classifications for a single subject? What is the differences between adding the null_dist to the sl level and the cross validation level? My code: clf = LinearCSVMC() splt = NFoldPartitioner(attr='chunks') repeater = Repeater(count=100) permutator = AttributePermutator('targets', limit={'partitions': 1}, count=1) null_cv = CrossValidation(clf, ChainNode([splt, permutator],space=splt.get_space()), postproc=mean_sample()) null_sl = sphere_searchlight(null_cv, radius=3, space='voxel_indices', enable_ca=['roi_sizes']) distr_est = MCNullDist(repeater,tail='left', measure=null_sl, enable_ca=['dist_samples']) cv = CrossValidation(clf,splt, enable_ca=['stats'], postproc=mean_sample() ) sl = sphere_searchlight(cv, radius=3, space='voxel_indices', null_dist=distr_est, enable_ca=['roi_sizes']) ds = glm_dataset.copy(deep=False, sa=['targets','chunks'], fa=['voxel_indices'], a=['mapper']) sl_map = sl(ds) p_values = distr_est.cdf(sl_map.samples) # IS THIS THE RIGHT WAY?? Is there a way to make sure the permutations are exhaustive? In order to make an inference on the group level I understand I can use GroupClusterThreshold. Does anyone have a code sample for that? Do I use the MCNullDist's created at the subject level? Thanks, Roni. -------------- next part -------------- An HTML attachment was scrubbed... URL: From billbrod at gmail.com Tue Aug 11 20:20:47 2015 From: billbrod at gmail.com (Bill Broderick) Date: Tue, 11 Aug 2015 16:20:47 -0400 Subject: [pymvpa] Permutation testing and Nipype In-Reply-To: References: <20150810213320.GT28964@onerussian.com> Message-ID: Okay, so I did a little more investigating of this and I cannot replicate my original problem. Now it's looking like it's taking a long time just because the permutation testing is taking a long time. At the bottom of this message is the script I used for testing the timing. Using python 2.7.6 and PyMVPA version 2.4.0, I time the script as follows: python2.7 -O -m timeit -n 1 -r 1 'import test' 'test.main()' The dataset I'm loading in has 3504 trials that we're using and 29462 voxels. 
I get the following times: perm_num=1, ids=(0,1) : 161sec perm_num=1, ids=(0,2) : 316sec perm_num=1, ids=(0,3) : 531sec perm_num=1, ids=(0,4) : 687sec perm_num=5, ids=(0,1) : 435sec Which makes me realize that there's no way I can get 100 permutations and 5 searchlights (which is about what I was looking at earlier) in 1.5 hours. I don't know what changed -- going back through my commits I haven't changed any of the relevant code since then; it's possible I made a mistake and accidentally did 10 permutations or something like that. Regardless, this is still taking way too long. Does anyone have any idea how to speed it up? It looks like it's a good idea to have jobs run a bunch of permutations in one function, but split up the searchlights, which is what I'm doing at the moment, but I still need to do something else to speed it up. Thanks, Bill test.py script: def main(perm_num=5,ids=(0,1)): from mvpa2.suite import h5load,LinearCSVMC,Repeater,AttributePermutator,NFoldPartitioner,CrossValidation,ChainNode,MCNullDist,sphere_searchlight ds=h5load('dataset.hdf5') clf=LinearCSVMC() repeater=Repeater(count=perm_num) permutator = AttributePermutator('targets',limit={'partitions':1},count=1) nf = NFoldPartitioner(attr='subject',cvtype=1,count=None,selection_strategy='random') null_cv = CrossValidation(clf,ChainNode([nf,permutator],space=nf.get_space())) distr_est = MCNullDist(repeater,tail='left',measure=null_cv,enable_ca=['dist_samples']) cv = CrossValidation(clf,nf,null_dist=distr_est,pass_attr=[('ca.null_prob','fa',1)]) print 'running...' sl = sphere_searchlight(cv,radius=3,center_ids=range(ids[0],ids[1]),enable_ca='roi_sizes',pass_attr=[('ca.roi_sizes','fa')]) res=sl(ds) On Tue, Aug 11, 2015 at 12:00 PM, Bill Broderick wrote: > On Mon, Aug 10, 2015 at 5:33 PM, Yaroslav Halchenko > wrote: >> it would help to know what/at what level you are permutting etc, >> and what is that timing issue (does nipype kills tasks if they run "too" >> long, unlikely)? > > I'm running my analysis with leave-one-subject-out cross-validation > (so combining all runs for each subject), permuting the labels in the > training set in two categories 100 times. I originally was running the > whole brain in one job, but found that took too long (didn't get > killed by nipype or our SGE cluster, but it was taking too long to be > feasible), so I'm using sphere_searchlight's center_ids option to > split permutation testing into a a bunch of smaller jobs, each with > about 5 searchlights. Here's what my function looks like: > > clf = LinearCSVMC() > repeater = Repeater(count=100) > permutator = AttributePermutator('targets',limit={'partitions':1},count=1) > nf = NFoldPartitioner(attr='subject') > null_cv = CrossValidation(clf,ChainNode([nf,permutator],space=nf.get_space()),errorfx=mean_mismatch_error) > distr_est = > MCNullDist(repeater,tail='left',measure=null_cv,enable_ca=['dist_samples']) > cv = CrossValidation(clf,nf,null_dist=distr_est,pass_attr=[('ca.null_prob','fa',1)],errorfx=mean_mismatch_error) > sl = sphere_searchlight(cv,radius=3,center_ids=range(sl_range[0],sl_range[1]),enable_ca='roi_sizes',pass_attr=[('ca.roi_sizes','fa')]) > sl_res = sl(ds) > null_dist = cv.null_dist.ca.dist_samples > > where sl_range is a tuple, passed to the function, defining which > searchlights to run. 
In my current set up, the above function is a > Nipype MapNode, iterating on sl_range, such that when it reaches this > function it creates many versions of this job (currently I'm working > with about 5000), each running permutation testing on different > searchlights. These are all submitted in parallel to the SGE cluster, > which allows users to submit as many jobs as they want but limits them > to running jobs at 200-some nodes at a time. > > When I split this into about 5000 jobs, I ran into an issue with > Nipype where each of these jobs would finish running (in about 1.5 > hours) but the Nipype master job that spawned them would take a very > long time to realize they were done (as in, it would find one an > hour), so it never finished and moved on. If I split this into fewer > jobs, it doesn't run into this issue, but each job takes a lot longer. > So either I could figure out what's going on with Nipype or could just > not take as long for permutations. > > Is that clear? Has anyone run into similar issues or found a way to > run the permutation faster? > > Thanks, > Bill From debian at onerussian.com Tue Aug 11 21:13:59 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Tue, 11 Aug 2015 17:13:59 -0400 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: References: Message-ID: <20150811211359.GI21055@onerussian.com> On Tue, 11 Aug 2015, Roni Maimon wrote: > Hi all, Hi Roni, > I started by implementing the inference at the subject level (attaching the > code). Is this how I'm supposed to evaluate the p values of the > classifications for a single subject? What is the differences between > adding the null_dist to the sl level and the cross validation level? if you add it at the CV level (which is also legit) you would need some way to "collect" all of the stats across all the searchlights, e.g. replace output of CV accuracy/error with the p value, or make two samples (if you care to see also effects -- i.e. CV error/accuracy), one of which would be accuracy, another p-value. Also it might be "noisier" since different searchlights then might permute differently. Also I think it will take a bit longer So the cleaner way -- at the searchlight level, where the entire searchlight then estimated with the same permutation at a time. > My code: > clf = LinearCSVMC() > splt = NFoldPartitioner(attr='chunks') > repeater = Repeater(count=100) > permutator = AttributePermutator('targets', limit={'partitions': 1}, > count=1) > null_cv = CrossValidation(clf, ChainNode([splt, > permutator],space=splt.get_space()), > postproc=mean_sample()) > null_sl = sphere_searchlight(null_cv, radius=3, space='voxel_indices', > enable_ca=['roi_sizes']) > distr_est = MCNullDist(repeater,tail='left', measure=null_sl, > enable_ca=['dist_samples']) I see.. you are trying to maintain the same assignment in testing dataset... IMHO (especially since you seems to just use betas from GLM, so I guess a beta per run) it is not necessary. 
Just make a straightforward permutator in all chunks (but within each chunk) at the level of the searchlight without trying to do that fancy dance (I know -- the one our tutorial tutors atm): permutator = AttributePermutator('targets', count=100, limit='chunks') distr_est = MCNullDist(permutator, tail='left', enable_ca=['dist_samples']) > cv = CrossValidation(clf,splt, > enable_ca=['stats'], postproc=mean_sample() ) > sl = sphere_searchlight(cv, radius=3, space='voxel_indices', > null_dist=distr_est, > enable_ca=['roi_sizes']) > ds = glm_dataset.copy(deep=False, > sa=['targets','chunks'], > fa=['voxel_indices'], > a=['mapper']) > sl_map = sl(ds) > p_values = distr_est.cdf(sl_map.samples) # IS THIS THE RIGHT WAY?? just access sl.ca.null_prob for the p value (you could also use .ca.null_t (if you enable it for searchlight) to get a corresponding t ... actually z-score (i.e. it is not t-score since assumption of the distribution is normal), which would be easier to visualize and comprehend (2.3 must correspond to p=0.01 at a one-sided test ;-)) > Is there a way to make sure the permutations are exhaustive? if they are exhaustive -- you might need more data ;) especially with the searchlight, 100 permutations would just give you p no smaller than ~0.01 (well 1/101 to be precise). with any correction for multiple comparisons (you have many searchlights) you would need "stronger" p's , thus more permutations. > In order to make an inference on the group level I understand I can > use GroupClusterThreshold. Correct > Does anyone have a code sample for that? Do I use the MCNullDist's created > at the subject level? Unfortunately we don't have a full nice example yet, and Michael whom we could blame (d'oh -- thank!) for this functionality is on vacation (CCing though the very original author -- Richard, please correct me if I am wrong anywhere) BUT your code above, with my addition (note me enabling 'dist_samples') is pretty much ready. 1. Just run your searchlights per subject e.g. 20-50 times, collect per each subject distr_est.ca.dist_samples, and place them all into a single dataset where each permutation will be a "sample", while you set sa.chunks to group subjects (i.e. all permutations from subject 1 should have chunks == 1). Call it e.g. perms 2. Create bootstrapping of those permutation results: e.g. clthr = gct.GroupClusterThreshold() # defaults might be adequate ;), otherwise adjust 3. Train it on your collection of clthr.train(perms) which will do all the bootstrapping (would take awhile) 4. Estimate significance/treshold your original map (mean of errors across subjects without permutations) res = clthr(mean_map) 5. look into res.a for all kinds of stats (# of clusters, their locations, significance etc) and then res.fa.clusters_fwe_thresh will contain actual map of clusters indices which passed fwe thresholding. Hope this helps! -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. 
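A compact sketch pulling together steps 1-5 above. subject_datasets, n_perms and mean_map are placeholders, and the handling of dist_samples (and of the mapper/feature attributes needed to map clusters back into the volume) is an assumption, not code from the message:

from mvpa2.base.dataset import vstack
from mvpa2.datasets import Dataset
from mvpa2.algorithms import group_clusterthr as gct
import numpy as np

n_perms = 20                                   # 20-50 permutations per subject, as suggested
perm_maps = []
for subj, ds in enumerate(subject_datasets):   # placeholder: list of per-subject datasets
    sl_map = sl(ds)                            # sl configured with null_dist=distr_est as above
    # assumption: dist_samples holds one searchlight map per permutation; wrap it into a
    # Dataset in case this PyMVPA version hands back a plain array
    perms = Dataset(np.asarray(distr_est.ca.dist_samples).reshape(n_perms, -1))
    perms.sa['chunks'] = [subj] * n_perms      # group permutations by subject
    perm_maps.append(perms)
perms_all = vstack(perm_maps)

clthr = gct.GroupClusterThreshold()            # defaults are often adequate, adjust if needed
clthr.train(perms_all)                         # bootstraps the group-level null (takes a while)
res = clthr(mean_map)                          # mean_map: subject-average unpermuted map
# res.a holds cluster statistics; res.fa.clusters_fwe_thresh marks clusters surviving FWE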
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From debian at onerussian.com Tue Aug 11 21:18:27 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Tue, 11 Aug 2015 17:18:27 -0400 Subject: [pymvpa] Permutation testing and Nipype In-Reply-To: References: <20150810213320.GT28964@onerussian.com> Message-ID: <20150811211827.GJ21055@onerussian.com> On Tue, 11 Aug 2015, Bill Broderick wrote: > Okay, so I did a little more investigating of this and I cannot > replicate my original problem. Now it's looking like it's taking a > long time just because the permutation testing is taking a long time. it does! > At the bottom of this message is the script I used for testing the > timing. Using python 2.7.6 and PyMVPA version 2.4.0, I time the script > as follows: > python2.7 -O -m timeit -n 1 -r 1 'import test' 'test.main()' > The dataset I'm loading in has 3504 trials that we're using and 29462 voxels. > I get the following times: > perm_num=1, ids=(0,1) : 161sec > perm_num=1, ids=(0,2) : 316sec > perm_num=1, ids=(0,3) : 531sec > perm_num=1, ids=(0,4) : 687sec > perm_num=5, ids=(0,1) : 435sec > Which makes me realize that there's no way I can get 100 permutations > and 5 searchlights (which is about what I was looking at earlier) in > 1.5 hours. Depends on classifier/searchlight size/# of chunks etc. But indeed -- unlikely ;) > I don't know what changed -- going back through my commits > I haven't changed any of the relevant code since then; it's possible I > made a mistake and accidentally did 10 permutations or something like > that. > Regardless, this is still taking way too long. Does anyone have any > idea how to speed it up? If you are to do statistical assessment though permutation (not e.g. sign flipping technique ;) ), then you would need to wait a bit > It looks like it's a good idea to have jobs > run a bunch of permutations in one function, but split up the > searchlights, which is what I'm doing at the moment, but I still need > to do something else to speed it up. > Thanks, > Bill > test.py script: > def main(perm_num=5,ids=(0,1)): > from mvpa2.suite import > h5load,LinearCSVMC,Repeater,AttributePermutator,NFoldPartitioner,CrossValidation,ChainNode,MCNullDist,sphere_searchlight > ds=h5load('dataset.hdf5') > clf=LinearCSVMC() > repeater=Repeater(count=perm_num) > permutator = AttributePermutator('targets',limit={'partitions':1},count=1) > nf = NFoldPartitioner(attr='subject',cvtype=1,count=None,selection_strategy='random') > null_cv = CrossValidation(clf,ChainNode([nf,permutator],space=nf.get_space())) > distr_est = > MCNullDist(repeater,tail='left',measure=null_cv,enable_ca=['dist_samples']) > cv = CrossValidation(clf,nf,null_dist=distr_est,pass_attr=[('ca.null_prob','fa',1)]) > print 'running...' > sl = sphere_searchlight(cv,radius=3,center_ids=range(ids[0],ids[1]),enable_ca='roi_sizes',pass_attr=[('ca.roi_sizes','fa')]) > res=sl(ds) please see my response to Roni few minutes ago, so just collect up to 50 permutations per subject and then use GroupClusterThreshold to do bootstrapping across subjects' permutation results. -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. 
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From dinga92 at gmail.com Tue Aug 11 22:11:06 2015 From: dinga92 at gmail.com (Richard Dinga) Date: Wed, 12 Aug 2015 00:11:06 +0200 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: <20150811211359.GI21055@onerussian.com> References: <20150811211359.GI21055@onerussian.com> Message-ID: Hello, I cannot help you with the inference on subject level, however we do have nice example for the group level inference. It is what we used in pandora data paper http://f1000research.com/articles/4-174/v1 (review pending) to produce figure 4 and table3. Code for the replication of the analysis is available at https://github.com/psychoinformatics-de/paper-f1000_pandora_data Best wishes, Richard On Tue, Aug 11, 2015 at 11:13 PM, Yaroslav Halchenko wrote: > > On Tue, 11 Aug 2015, Roni Maimon wrote: > > > Hi all, > > Hi Roni, > > > I started by implementing the inference at the subject level (attaching > the > > code). Is this how I'm supposed to evaluate the p values of the > > classifications for a single subject? What is the differences between > > adding the null_dist to the sl level and the cross validation level? > > if you add it at the CV level (which is also legit) you would need some > way to "collect" all of the stats across all the searchlights, e.g. > replace output of CV accuracy/error with the p value, or make two > samples (if you care to see also effects -- i.e. CV > error/accuracy), one of which would be accuracy, another p-value. Also > it might be "noisier" since different searchlights then might permute > differently. Also I think it will take a bit longer > > So the cleaner way -- at the searchlight level, where the entire > searchlight then estimated with the same permutation at a time. > > > My code: > > clf = LinearCSVMC() > > splt = NFoldPartitioner(attr='chunks') > > > repeater = Repeater(count=100) > > permutator = AttributePermutator('targets', limit={'partitions': 1}, > > count=1) > > null_cv = CrossValidation(clf, ChainNode([splt, > > permutator],space=splt.get_space()), > > postproc=mean_sample()) > > null_sl = sphere_searchlight(null_cv, radius=3, space='voxel_indices', > > enable_ca=['roi_sizes']) > > distr_est = MCNullDist(repeater,tail='left', measure=null_sl, > > enable_ca=['dist_samples']) > > I see.. you are trying to maintain the same assignment in testing > dataset... > IMHO (especially since you seems to just use betas from GLM, so I guess a > beta > per run) it is not necessary. Just make a straightforward permutator in > all > chunks (but within each chunk) at the level of the searchlight without > trying > to do that fancy dance (I know -- the one our tutorial tutors atm): > > permutator = AttributePermutator('targets', count=100, > limit='chunks') > distr_est = MCNullDist(permutator, tail='left', > enable_ca=['dist_samples']) > > > cv = CrossValidation(clf,splt, > > enable_ca=['stats'], postproc=mean_sample() ) > > sl = sphere_searchlight(cv, radius=3, space='voxel_indices', > > null_dist=distr_est, > > enable_ca=['roi_sizes']) > > ds = glm_dataset.copy(deep=False, > > sa=['targets','chunks'], > > fa=['voxel_indices'], > > a=['mapper']) > > sl_map = sl(ds) > > p_values = distr_est.cdf(sl_map.samples) # IS THIS THE RIGHT WAY?? > > just access sl.ca.null_prob for the p value (you could also use > .ca.null_t > (if you enable it for searchlight) to get a corresponding t ... actually > z-score (i.e. 
it is not t-score since assumption of the distribution is > normal), which would be easier to visualize and comprehend (2.3 must > correspond > to p=0.01 at a one-sided test ;-)) > > > > Is there a way to make sure the permutations are exhaustive? > > if they are exhaustive -- you might need more data ;) especially with > the searchlight, 100 permutations would just give you p no smaller than > ~0.01 > (well 1/101 to be precise). with any correction for multiple comparisons > (you > have many searchlights) you would need "stronger" p's , thus more > permutations. > > > In order to make an inference on the group level I understand I can > > use GroupClusterThreshold. > > Correct > > > Does anyone have a code sample for that? Do I use the MCNullDist's > created > > at the subject level? > > Unfortunately we don't have a full nice example yet, and Michael whom we > could > blame (d'oh -- thank!) for this functionality is on vacation (CCing though > the > very original author -- Richard, please correct me if I am wrong > anywhere) BUT your code above, with my addition (note me enabling > 'dist_samples') is pretty much ready. > > 1. Just run your searchlights per subject e.g. 20-50 times, collect per > each > subject distr_est.ca.dist_samples, and place them all into a single > dataset > where each permutation will be a "sample", while you set sa.chunks to > group > subjects (i.e. all permutations from subject 1 should have chunks == 1). > Call it e.g. perms > > 2. Create bootstrapping of those permutation results: e.g. > > clthr = gct.GroupClusterThreshold() # defaults might be adequate ;), > otherwise adjust > > 3. Train it on your collection of > > clthr.train(perms) > > which will do all the bootstrapping (would take awhile) > > 4. Estimate significance/treshold your original map (mean of errors across > subjects without permutations) > > res = clthr(mean_map) > > > 5. look into res.a for all kinds of stats (# of clusters, their > locations, significance etc) > and then > > res.fa.clusters_fwe_thresh > > will contain actual map of clusters indices which passed fwe thresholding. > > > Hope this helps! > -- > Yaroslav O. Halchenko, Ph.D. > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronimaimon at gmail.com Tue Aug 11 23:14:30 2015 From: ronimaimon at gmail.com (Roni Maimon) Date: Tue, 11 Aug 2015 23:14:30 +0000 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: References: Message-ID: Hi, Yaroslav and Richard, thank you so much for the quick and very helpful reply! Though I only received it through the daily summary, so I am sure this is the wrong way to reply. Yaroslav, regarding the permutator "dance", is it necessary in cases where I have several betas in each run? Thanks again for all the help. On Tue, Aug 11, 2015 at 8:18 PM, Roni Maimon wrote: > Hi all, > I'm rather new to pyMVPA and I would love to get your help and feedback. > I'm trying do understand the different procedures of statistical > inference, I can achieve for whole brain searchlight analysis, using pyMVPA. > > I started by implementing the inference at the subject level (attaching > the code). 
Is this how I'm supposed to evaluate the p values of the > classifications for a single subject? What is the differences between > adding the null_dist to the sl level and the cross validation level? > My code: > clf = LinearCSVMC() > splt = NFoldPartitioner(attr='chunks') > > repeater = Repeater(count=100) > permutator = AttributePermutator('targets', limit={'partitions': 1}, > count=1) > null_cv = CrossValidation(clf, ChainNode([splt, > permutator],space=splt.get_space()), > postproc=mean_sample()) > null_sl = sphere_searchlight(null_cv, radius=3, space='voxel_indices', > enable_ca=['roi_sizes']) > distr_est = MCNullDist(repeater,tail='left', measure=null_sl, > enable_ca=['dist_samples']) > > cv = CrossValidation(clf,splt, > enable_ca=['stats'], postproc=mean_sample() ) > sl = sphere_searchlight(cv, radius=3, space='voxel_indices', > null_dist=distr_est, > enable_ca=['roi_sizes']) > ds = glm_dataset.copy(deep=False, > sa=['targets','chunks'], > fa=['voxel_indices'], > a=['mapper']) > sl_map = sl(ds) > p_values = distr_est.cdf(sl_map.samples) # IS THIS THE RIGHT WAY?? > > Is there a way to make sure the permutations are exhaustive? > In order to make an inference on the group level I understand I can > use GroupClusterThreshold. > Does anyone have a code sample for that? Do I use the MCNullDist's created > at the subject level? > > Thanks, > Roni. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Tue Aug 11 23:39:11 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Tue, 11 Aug 2015 19:39:11 -0400 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: References: Message-ID: <20150811233911.GP21055@onerussian.com> On Tue, 11 Aug 2015, Roni Maimon wrote: > Hi,A > Yaroslav A andA Richard, thank you so much for the quick and very helpful > reply! > Though I only received it through the daily summary, so I am sure this is > the wrong way to reply. > Yaroslav, regarding the permutator "dance", is it necessary in cases where > I have several betas in each run? I would say that the answer is " it is an active area of the research " ;) how many runs/conditions per run do you have? may be you could then use even more aggressive/conservative way of permutation -- swap complete order of trials (I hope they were randomized/different across runs) across runs. -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From ronimaimon at gmail.com Wed Aug 12 00:05:03 2015 From: ronimaimon at gmail.com (Roni Maimon) Date: Wed, 12 Aug 2015 03:05:03 +0300 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: References: Message-ID: So the full design is I have 4 conditions in 8 runs. 5 blocks of each condition in each run. All runs have all the conditions but I'm interested only in two classifications and the differences between these classifications. The order of trials is different across runs. Some recommend I only permute the labels within runs, is this what you're referring to? Is there a quick way to do that in pyMVPA? On Wed, Aug 12, 2015 at 2:14 AM, Roni Maimon wrote: > Hi, > > Yaroslav and Richard, thank you so much for the quick and very helpful > reply! 
> > Though I only received it through the daily summary, so I am sure this is > the wrong way to reply. > > Yaroslav, regarding the permutator "dance", is it necessary in cases where > I have several betas in each run? > > Thanks again for all the help. > > On Tue, Aug 11, 2015 at 8:18 PM, Roni Maimon wrote: > >> Hi all, >> I'm rather new to pyMVPA and I would love to get your help and feedback. >> I'm trying do understand the different procedures of statistical >> inference, I can achieve for whole brain searchlight analysis, using pyMVPA. >> >> I started by implementing the inference at the subject level (attaching >> the code). Is this how I'm supposed to evaluate the p values of the >> classifications for a single subject? What is the differences between >> adding the null_dist to the sl level and the cross validation level? >> My code: >> clf = LinearCSVMC() >> splt = NFoldPartitioner(attr='chunks') >> >> repeater = Repeater(count=100) >> permutator = AttributePermutator('targets', limit={'partitions': 1}, >> count=1) >> null_cv = CrossValidation(clf, ChainNode([splt, >> permutator],space=splt.get_space()), >> postproc=mean_sample()) >> null_sl = sphere_searchlight(null_cv, radius=3, space='voxel_indices', >> enable_ca=['roi_sizes']) >> distr_est = MCNullDist(repeater,tail='left', measure=null_sl, >> enable_ca=['dist_samples']) >> >> cv = CrossValidation(clf,splt, >> enable_ca=['stats'], postproc=mean_sample() ) >> sl = sphere_searchlight(cv, radius=3, space='voxel_indices', >> null_dist=distr_est, >> enable_ca=['roi_sizes']) >> ds = glm_dataset.copy(deep=False, >> sa=['targets','chunks'], >> fa=['voxel_indices'], >> a=['mapper']) >> sl_map = sl(ds) >> p_values = distr_est.cdf(sl_map.samples) # IS THIS THE RIGHT WAY?? >> >> Is there a way to make sure the permutations are exhaustive? >> In order to make an inference on the group level I understand I can >> use GroupClusterThreshold. >> Does anyone have a code sample for that? Do I use the MCNullDist's >> created at the subject level? >> >> Thanks, >> Roni. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Wed Aug 12 00:41:25 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Tue, 11 Aug 2015 20:41:25 -0400 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: References: Message-ID: <20150812004125.GR21055@onerussian.com> On Wed, 12 Aug 2015, Roni Maimon wrote: > So the full design is I have 4 conditions in 8 runs. 5 blocks of each > condition in each run. > All runs have all the conditions but I'm interested only in two > classifications and the differences between these classifications. > The order of trials is different across runs. > Some recommend I only permute the labels within runs, is this what you're > referring to? it depends on the meaning of "permute the labels": - completely randomly shuffle all the labels, e.g. ababab could become anything of aaabbb, ababab, aabbab, .. that is the default behavior of AttributePermutator -- strategy='simple' - or "reassign" the labels i.e. ababab can become only bababa or ababab in that run (since we have only two labels) that is the strategy='uattrs' in both of above cases, permutations should happen strictly within each chunk/run (that is a must) -- use limit='chunks' . But as you might have noted, complete permutation (strategy='simple') might not pin point deficient designs (i.e. ababab). 
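For concreteness, a toy illustration of the two strategies just described, applied within chunks (the 'chunks' strategy is covered next); the tiny dataset here is made up purely for the example:

import numpy as np
from mvpa2.datasets import Dataset
from mvpa2.generators.permutation import AttributePermutator

toy = Dataset(np.zeros((8, 1)),
              sa={'targets': ['a', 'b', 'a', 'b', 'a', 'b', 'a', 'b'],
                  'chunks':  [1, 1, 1, 1, 2, 2, 2, 2]})

# 'simple': shuffle labels freely within each chunk, e.g. abab -> aabb, abba, ...
perm_simple = AttributePermutator('targets', limit='chunks', strategy='simple')
print perm_simple(toy).sa.targets

# 'uattrs': remap the unique labels within each chunk, so abab can only stay abab
# or flip to baba
perm_uattrs = AttributePermutator('targets', limit='chunks', strategy='uattrs')
print perm_uattrs(toy).sa.targets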
And then the strategy='chunks' is when you take complete sequence of trials from one run and assign to another. If your trials lost original order when you e.g. extracted betas -- this strategy is not applicable. > Is there a quick way to do that in pyMVPA? see above ;) AttributePermutator is the one doing it 'quickly' ;) -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From ronimaimon at gmail.com Wed Aug 12 14:36:44 2015 From: ronimaimon at gmail.com (Roni Maimon) Date: Wed, 12 Aug 2015 17:36:44 +0300 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: References: Message-ID: Yaroslav, Thank you very much for the input. Richard, in the code you referred to it is stated: "The values mapped onto each voxel represent the mean accuracy across all classification (spheres) a voxel was included in." How is this achieved? I scanned the code and nothing popped out but I must be missing something. Thanks! On Wed, Aug 12, 2015 at 3:05 AM, Roni Maimon wrote: > So the full design is I have 4 conditions in 8 runs. 5 blocks of each > condition in each run. > All runs have all the conditions but I'm interested only in two > classifications and the differences between these classifications. > The order of trials is different across runs. > Some recommend I only permute the labels within runs, is this what you're > referring to? Is there a quick way to do that in pyMVPA? > > On Wed, Aug 12, 2015 at 2:14 AM, Roni Maimon wrote: > >> Hi, >> >> Yaroslav and Richard, thank you so much for the quick and very helpful >> reply! >> >> Though I only received it through the daily summary, so I am sure this is >> the wrong way to reply. >> >> Yaroslav, regarding the permutator "dance", is it necessary in cases >> where I have several betas in each run? >> >> Thanks again for all the help. >> >> On Tue, Aug 11, 2015 at 8:18 PM, Roni Maimon >> wrote: >> >>> Hi all, >>> I'm rather new to pyMVPA and I would love to get your help and feedback. >>> I'm trying do understand the different procedures of statistical >>> inference, I can achieve for whole brain searchlight analysis, using pyMVPA. >>> >>> I started by implementing the inference at the subject level (attaching >>> the code). Is this how I'm supposed to evaluate the p values of the >>> classifications for a single subject? What is the differences between >>> adding the null_dist to the sl level and the cross validation level? 
>>> My code: >>> clf = LinearCSVMC() >>> splt = NFoldPartitioner(attr='chunks') >>> >>> repeater = Repeater(count=100) >>> permutator = AttributePermutator('targets', limit={'partitions': 1}, >>> count=1) >>> null_cv = CrossValidation(clf, ChainNode([splt, >>> permutator],space=splt.get_space()), >>> postproc=mean_sample()) >>> null_sl = sphere_searchlight(null_cv, radius=3, space='voxel_indices', >>> enable_ca=['roi_sizes']) >>> distr_est = MCNullDist(repeater,tail='left', measure=null_sl, >>> enable_ca=['dist_samples']) >>> >>> cv = CrossValidation(clf,splt, >>> enable_ca=['stats'], postproc=mean_sample() ) >>> sl = sphere_searchlight(cv, radius=3, space='voxel_indices', >>> null_dist=distr_est, >>> enable_ca=['roi_sizes']) >>> ds = glm_dataset.copy(deep=False, >>> sa=['targets','chunks'], >>> fa=['voxel_indices'], >>> a=['mapper']) >>> sl_map = sl(ds) >>> p_values = distr_est.cdf(sl_map.samples) # IS THIS THE RIGHT WAY?? >>> >>> Is there a way to make sure the permutations are exhaustive? >>> In order to make an inference on the group level I understand I can >>> use GroupClusterThreshold. >>> Does anyone have a code sample for that? Do I use the MCNullDist's >>> created at the subject level? >>> >>> Thanks, >>> Roni. >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From billbrod at gmail.com Wed Aug 12 15:49:29 2015 From: billbrod at gmail.com (Bill Broderick) Date: Wed, 12 Aug 2015 11:49:29 -0400 Subject: [pymvpa] Permutation testing and Nipype In-Reply-To: <20150811211827.GJ21055@onerussian.com> References: <20150810213320.GT28964@onerussian.com> <20150811211827.GJ21055@onerussian.com> Message-ID: Hi, Thanks for the response! > please see my response to Roni few minutes ago, so just collect up to 50 > permutations per subject and then use GroupClusterThreshold to do > bootstrapping across subjects' permutation results. I've read over that thread and I like the idea, but I've got a couple quick questions. One, we're doing leave-one-subject-out cross-validation, combining the four runs each subject has, instead of leave-one-run-out (due to balance issues). Would this change anything in your recommendations? I.e., can we still use GroupClusterThreshold the way you recommended for Roni? Two, we're doing regression in addition to classification (using EpsilonSVR); that shouldn't change anything either, right? Finally, you recommend permuting all labels, not just the training set ones. It's unclear to me why that works. Don't you need to train with permuted and test with actual labels to get a null distribution. Or is it okay because Roni's data has one beta per run (whereas we have one value per trial that we're regressing to, so it's not)? Thanks, Bill From dinga92 at gmail.com Wed Aug 12 23:20:03 2015 From: dinga92 at gmail.com (Richard Dinga) Date: Thu, 13 Aug 2015 01:20:03 +0200 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: References: <20150811211359.GI21055@onerussian.com> Message-ID: This is achieved through a 'searchlight' command from command line interface (http://www.pymvpa.org/generated/cmd_searchlight.html), which is a different thing as 'sphere_searchlight' you are calling from python. In the code it is done in dosl.sh script. You can see that preprocessing and cv setup are python files passed as arguments into the command (those files are also included in the repository). 
If/else clause is there just so we save some space in outputs, since feature and dataset attributes are same for all permutations, they are saved only once for original map and maps created by permutation are saved only as numpy arrays. Those maps are then loaded and combined in the dogrpstats.py. I don't know how to do it outside of cmd, I guess you should use scatter_neighbourhood function somehow and then use a center_ids parameter in the sphere_searchlight It's worth to note that for the analysis it doesn't matter what kind of sl you use, you can use 'sphere_searchlight' without any problems. Sparse SL just saved us weeks of CPU time. > Richard, in the code you referred to it is stated: > "The values mapped onto each voxel represent the mean accuracy across all > classification (spheres) > a voxel was included in." > How is this achieved? I scanned the code and nothing popped out but I must > be missing something. > Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.browning at ndcn.ox.ac.uk Thu Aug 13 08:30:53 2015 From: michael.browning at ndcn.ox.ac.uk (Michael Browning) Date: Thu, 13 Aug 2015 08:30:53 +0000 Subject: [pymvpa] Altering the weights of classes in binary SVM classifier In-Reply-To: References: Message-ID: Hi, Any advice on this would be really appreciated, Thanks Mike From: Pkg-ExpPsy-PyMVPA [mailto:pkg-exppsy-pymvpa-bounces+michael.browning=ndcn.ox.ac.uk at lists.alioth.debian.org] On Behalf Of Michael Browning Sent: 23 July 2015 13:57 To: pkg-exppsy-pymvpa at lists.alioth.debian.org Subject: Re: [pymvpa] Altering the weights of classes in binary SVM classifier Hi, I have been using a linear SVM in a between subject design in which I am trying to classify patients as responders or non-responders to a particular treatment. The input to the classifier are beta images (one per patient) from an fMRI task. The target of the classifier is response status of the patient (coded as 0 or 1). My sample is not balanced (there happens to have been 22 responders and 13 non-responders) and is not particularly large. I would like, if possible, to use all the data and adjust the classifier to the unbalanced set rather than selecting a subset of the responders. I've seen recommendations for SVMs in unbalanced data suggesting that the weights of the outcome can be adjusted to reflect the sample size (essentially the weights of each class can be set as 1/(total number in class)). I've tried to do this in pyMVPA using the following code: wts=[ 1/numnonresp, 1/numresp] wts_labels=[0,1] clf = LinearCSVMC(weight=wts, weight_label=wts_labels) I then embed the classifier in a crossvalidation call which includes a feature selector. The code runs without error but the performance of the classifier does not alter (at all) regardless of the weights I use (e.g. using weights of [0 100000000000] or whatever. I'm concerned that I have not set this up correctly, and that the weights are not being incorporated into the SVM. I'd appreciate any advice about what I am doing wrong, or even if there is any diagnostic approach I can use to assess whether the SVM is using the weight appropriately. Thanks Mike -------------- next part -------------- An HTML attachment was scrubbed... 
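Two common ways around such an unbalanced two-class design are repeated balanced subsampling and per-class weighting. A minimal, untested sketch of both in PyMVPA/scikit-learn follows, with a hypothetical random dataset standing in for the real beta images; all names and parameter values are illustrative assumptions, and this is not a verified fix for the LinearCSVMC weight issue:

import numpy as np
from sklearn.svm import SVC
from mvpa2.datasets.base import Dataset
from mvpa2.generators.resampling import Balancer
from mvpa2.clfs.skl.base import SKLLearnerAdapter

# hypothetical stand-in for the real data: 35 subjects (13 non-responders, 22 responders)
ds = Dataset(np.random.randn(35, 100),
             sa={'targets': np.repeat([0, 1], [13, 22])})

# Option 1: repeated balanced subsampling -- each draw keeps all 13 samples of the
# smaller class plus an equally sized random subset of the larger class
balancer = Balancer(attr='targets', count=10, limit=None, apply_selection=True)
for balanced_ds in balancer.generate(ds):
    pass  # run the usual cross-validation on balanced_ds and average the accuracies

# Option 2: per-class weights through the scikit-learn adaptor; class_weight='auto'
# weighs classes inversely proportional to their frequencies
clf = SKLLearnerAdapter(SVC(kernel='linear', class_weight='auto'))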
URL: From dinga92 at gmail.com Thu Aug 13 14:31:01 2015 From: dinga92 at gmail.com (Richard Dinga) Date: Thu, 13 Aug 2015 14:31:01 +0000 Subject: [pymvpa] Altering the weights of classes in binary SVM classifier Message-ID: > My sample is not balanced (there happens to have been 22 responders > and 13 non-responders) and is not particularly large. I would like, > if possible, to use all the data and adjust the classifier to the > unbalanced set rather than selecting a subset of the responders. You don't have to downsample, you can upsample by repeating balanced sampling n times, therefore all your data would be used. There is a balancer feature for it in pymvpa. > I've seen recommendations for SVMs in unbalanced data suggesting that > the weights of the outcome can be adjusted to reflect the sample size > (essentially the weights of each class can be set as 1/(total number > in class)). Yes, you can also move a decision threshold for some classifiers that outputs probability, so you will not predict class A if the prob of A is > 0.5 but if the prob of A > is number of A / number of total. I know you can do this with gaussian process clf, but i don't know about others > I've tried to do this in pyMVPA using the following code: > wts=[ 1/numnonresp, 1/numresp] > wts_labels=[0,1] > clf = LinearCSVMC(weight=wts, weight_label=wts_labels) > I then embed the classifier in a crossvalidation call which includes > a feature selector. > The code runs without error but the performance of the classifier > does not alter (at all) regardless of the weights I use (e.g. using > weights of [0 100000000000] or whatever. I'm concerned that I have > not set this up correctly, and that the weights are not being > incorporated into the SVM. It didn't work for me either. You can try implementation from scikit-learn with pymvpa sklearn adaptor. There you can just put class_weight to auto and it should adjust them proportionally to class frequencies automatically Best wishes, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.browning at ndcn.ox.ac.uk Thu Aug 13 15:48:14 2015 From: michael.browning at ndcn.ox.ac.uk (Michael Browning) Date: Thu, 13 Aug 2015 15:48:14 +0000 Subject: [pymvpa] Altering the weights of classes in binary SVM classifier In-Reply-To: References: Message-ID: Super?thanks for the advice, Mike From: Pkg-ExpPsy-PyMVPA [mailto:pkg-exppsy-pymvpa-bounces+michael.browning=psych.ox.ac.uk at lists.alioth.debian.org] On Behalf Of Richard Dinga Sent: 13 August 2015 15:31 To: pkg-exppsy-pymvpa at lists.alioth.debian.org Subject: Re: [pymvpa] Altering the weights of classes in binary SVM classifier > My sample is not balanced (there happens to have been 22 responders > and 13 non-responders) and is not particularly large. I would like, > if possible, to use all the data and adjust the classifier to the > unbalanced set rather than selecting a subset of the responders. You don't have to downsample, you can upsample by repeating balanced sampling n times, therefore all your data would be used. There is a balancer feature for it in pymvpa. > I've seen recommendations for SVMs in unbalanced data suggesting that > the weights of the outcome can be adjusted to reflect the sample size > (essentially the weights of each class can be set as 1/(total number > in class)). 
Yes, you can also move a decision threshold for some classifiers that outputs probability, so you will not predict class A if the prob of A is > 0.5 but if the prob of A > is number of A / number of total. I know you can do this with gaussian process clf, but i don't know about others > I've tried to do this in pyMVPA using the following code: > wts=[ 1/numnonresp, 1/numresp] > wts_labels=[0,1] > clf = LinearCSVMC(weight=wts, weight_label=wts_labels) > I then embed the classifier in a crossvalidation call which includes > a feature selector. > The code runs without error but the performance of the classifier > does not alter (at all) regardless of the weights I use (e.g. using > weights of [0 100000000000] or whatever. I'm concerned that I have > not set this up correctly, and that the weights are not being > incorporated into the SVM. It didn't work for me either. You can try implementation from scikit-learn with pymvpa sklearn adaptor. There you can just put class_weight to auto and it should adjust them proportionally to class frequencies automatically Best wishes, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronimaimon at gmail.com Thu Aug 13 17:09:14 2015 From: ronimaimon at gmail.com (Roni Maimon) Date: Thu, 13 Aug 2015 20:09:14 +0300 Subject: [pymvpa] Searchlight statistical inference In-Reply-To: References: Message-ID: Thank you so much Richard! This was super helpful! One last question, do you know if the averaging can be done using the command line without sparse ROI's? Maybe by using --scatter-rois 0? or is it the default regardless to the input of scatter-rois? And just to make sure I understand the scatter option: by using the same value here and in the neighborhood size the value of a centroid in the original map is simply the accuracy of it's neighborhood since a centroid of a calculated neighborhood can never(?) be a part of a different neighborhood? On Wed, Aug 12, 2015 at 5:36 PM, Roni Maimon wrote: > > Yaroslav, Thank you very much for the input. > > Richard, in the code you referred to it is stated: > "The values mapped onto each voxel represent the mean accuracy across all classification (spheres) > > a voxel was included in." > > > How is this achieved? I scanned the code and nothing popped out but I must be missing something. > Thanks! > > > > On Wed, Aug 12, 2015 at 3:05 AM, Roni Maimon wrote: >> >> So the full design is I have 4 conditions in 8 runs. 5 blocks of each condition in each run. >> All runs have all the conditions but I'm interested only in two classifications and the differences between these classifications. >> The order of trials is different across runs. >> Some recommend I only permute the labels within runs, is this what you're referring to? Is there a quick way to do that in pyMVPA? >> >> On Wed, Aug 12, 2015 at 2:14 AM, Roni Maimon wrote: >>> >>> Hi, >>> >>> Yaroslav and Richard, thank you so much for the quick and very helpful reply! >>> >>> Though I only received it through the daily summary, so I am sure this is the wrong way to reply. >>> >>> Yaroslav, regarding the permutator "dance", is it necessary in cases where I have several betas in each run? >>> >>> Thanks again for all the help. >>> >>> >>> On Tue, Aug 11, 2015 at 8:18 PM, Roni Maimon wrote: >>>> >>>> Hi all, >>>> I'm rather new to pyMVPA and I would love to get your help and feedback. >>>> I'm trying do understand the different procedures of statistical inference, I can achieve for whole brain searchlight analysis, using pyMVPA. 
>>>> >>>> I started by implementing the inference at the subject level (attaching the code). Is this how I'm supposed to evaluate the p values of the classifications for a single subject? What is the differences between adding the null_dist to the sl level and the cross validation level? >>>> My code: >>>> clf = LinearCSVMC() >>>> splt = NFoldPartitioner(attr='chunks') >>>> >>>> repeater = Repeater(count=100) >>>> permutator = AttributePermutator('targets', limit={'partitions': 1}, count=1) >>>> null_cv = CrossValidation(clf, ChainNode([splt, permutator],space=splt.get_space()), >>>> postproc=mean_sample()) >>>> null_sl = sphere_searchlight(null_cv, radius=3, space='voxel_indices', >>>> enable_ca=['roi_sizes']) >>>> distr_est = MCNullDist(repeater,tail='left', measure=null_sl, >>>> enable_ca=['dist_samples']) >>>> >>>> cv = CrossValidation(clf,splt, >>>> enable_ca=['stats'], postproc=mean_sample() ) >>>> sl = sphere_searchlight(cv, radius=3, space='voxel_indices', >>>> null_dist=distr_est, >>>> enable_ca=['roi_sizes']) >>>> ds = glm_dataset.copy(deep=False, >>>> sa=['targets','chunks'], >>>> fa=['voxel_indices'], >>>> a=['mapper']) >>>> sl_map = sl(ds) >>>> p_values = distr_est.cdf(sl_map.samples) # IS THIS THE RIGHT WAY?? >>>> >>>> Is there a way to make sure the permutations are exhaustive? >>>> In order to make an inference on the group level I understand I can use GroupClusterThreshold. >>>> Does anyone have a code sample for that? Do I use the MCNullDist's created at the subject level? >>>> >>>> Thanks, >>>> Roni. >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dinga92 at gmail.com Thu Aug 13 23:46:51 2015 From: dinga92 at gmail.com (Richard Dinga) Date: Fri, 14 Aug 2015 01:46:51 +0200 Subject: [pymvpa] Searchlight statistical inference Message-ID: > Thank you so much Richard! This was super helpful! > One last question, do you know if the averaging can be done using the > command line without sparse ROI's? Maybe by using --scatter-rois 0? or is > it the default regardless to the input of scatter-rois? I am sorry, but I don't know. The feature is in a dark area of a codebase :) > And just to make sure I understand the scatter option: by using the same > value here and in the neighborhood size the value of a centroid in the > original map is simply the accuracy of it's neighborhood since a centroid > of a calculated neighborhood can never(?) be a part of a different > neighborhood? That is a good question. I don't know, it is possible that the centroids are different between CV folds, so what was centroid in fold 1, will not be centroid in fold 2. But if it really works that way... :) Best, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From dinga92 at gmail.com Fri Aug 14 15:31:30 2015 From: dinga92 at gmail.com (Richard Dinga) Date: Fri, 14 Aug 2015 17:31:30 +0200 Subject: [pymvpa] within chunk permutation only for training set Message-ID: Hello, I was playing with a permutation schemes. There is one case I don't know how to do If I want to shuffle targets within chunks, I would use limit='chunks' in my permutator, If I want to shuffle targets only in test set, I would use limit={'partitions': 1}, How can I shuffle targets within chunks, but only for the training set? 
limit={'chunks':[0,1,2,3], 'partitions': 1} is not limiting shuffling within chunks, and limit=['chunks', {'partitions': 1}] doesn't work - RuntimeError: Unhandle condition error (also my spellcheck is shouting, shouldn't it be unhandled instead?) Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From dinga92 at gmail.com Fri Aug 14 23:20:17 2015 From: dinga92 at gmail.com (Richard Dinga) Date: Sat, 15 Aug 2015 01:20:17 +0200 Subject: [pymvpa] Labels from permutation testing Message-ID: If you don't want to do the "fancy dance", you can simply do: permutator = AttributePermutator(attr='targets') permuted_dataset = permutator(original_dataset) print "Here are your new labels: ", permuted_dataset.sa.targets then you don't need to put the permutator into your CV, just use permuted_dataset instead of the original one or for the fancy part: permutator = AttributePermutator('targets', limit={'partitions': 1}, count=1) partitioner_permutator = ChainNode([partitioner, permutator]) dataset_would_be_used_during_the_first_cv_iteration = next(partitioner_permutator.generate(ds)) but in this case, you would need to get creative if you would like to use the same labels also for CV. Regarding the importance of keeping the test labels intact, I think the state-of-the-art knowledge is that no one really knows what should be done anyway. Best wishes, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Sat Aug 15 03:18:13 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Fri, 14 Aug 2015 23:18:13 -0400 Subject: [pymvpa] Labels from permutation testing In-Reply-To: References: Message-ID: <20150815031813.GH21055@onerussian.com> On Mon, 20 Jul 2015, Bill Broderick wrote: > Hi all, > I feel like this should be relatively simple, but I can't figure out how to > do it. Is it possible to get at the labels generated by > AttributePermutator? I would like to see what the individual permutations > look like, to make sure it's doing what I think it is, but other than > saving the whole dataset generated by CrossValidation, I can't see a way to > do it. > I'm trying to build a null distribution like the following, so I can save > each permutation, each searchlight separately (with how long the > permutation testing has been taking, I want to make sure there's constant > output in case something crashes and so I can monitor its progress, so I'm > not using MCNullDist). if you just want to check "in general" what permutations the permutator generates, and didn't have limit={'partitions':1}, count=1 you could just [x.targets for x in permutator.generate(ds)] then if you preseeded the RNG the same way before testing the permutator (e.g. mvpa2.seed(index_of_subject)) you could thus collect all those generations without running the actual analysis pipeline > for i in searchlights: > for j in permutations: > permutator = > AttributePermutator('targets',limit={'partitions':1},count=1) > nf = > NFoldPartitioner(attr=partition_attr,cvtype=leave_x_out,count=fold_num,selection_strategy=fold_select_strategy) > null_cv = > CrossValidation(clf,ChainNode([nf,permutator],space=nf.get_space()),enable_ca='datasets',pass_attr=[('ca.datasets','fa')]) > sl_null = sphere_searchlight(null_cv,radius=3,center_ids=[i]) > null_dist.append(sl_null(ds)) > null_dist=hstack(null_dist) why do you want separate permutations per each center_id? null_dist here then mixes random results across all the searchlights if I see it correctly.
IMHO it is better to estimate that distribution per each searchlight center (what would happen anyways if you didn't do this manual "for i in searchlights") once again I feel that trying to keep target labeling in test split might be complicating things for your more than being of any value (do you get notably different significance results in comparison to simple permutations in all runs?). -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From mrctttmnt at gmail.com Mon Aug 17 14:49:11 2015 From: mrctttmnt at gmail.com (marco tettamanti) Date: Mon, 17 Aug 2015 16:49:11 +0200 Subject: [pymvpa] postproc=mean_sample(), is there a bug? In-Reply-To: <51E70999.2050609@gmail.com> References: <51E6559F.5070007@gmail.com> <51E70999.2050609@gmail.com> Message-ID: <55D1F467.9000501@gmail.com> Dear all, I have encountered a strange behaviour in running a Monte Carlo permutation analysis, though this may just be my lack of understanding. Running the analysis without "postproc=mean_sample()" to collect p-values for each fold seems to produce sensible results: ----------------- repeats=100 repeater = Repeater(count=repeats) permutator = AttributePermutator('targets', limit={'partitions': 1}, count=1) null_cv = CrossValidation(clf, ChainNode([partitioner, permutator], space=partitioner.get_space())) distr_est = MCNullDist(repeater, tail='left', measure=null_cv, enable_ca=['dist_samples']) cv_mc = CrossValidation(clf, partitioner, null_dist=distr_est, enable_ca=['confusion', 'stats']) mcerr = cv_mc(fds) In [72]: print cv_mc.ca.stats.stats['ACC'] 0.537037037037 In [73]: p_mc = cv_mc.ca.null_prob In [74]: print np.ravel(p_mc) [ 0.66 0.34 0.59 0.42 0.28 0.14 0.3 0.06 0.64 0.03 0.34 0.26 0.59 0.12 0.34 0.09 0.27 0.1 ] In [75]: print np.mean(np.ravel(p_mc)) 0.309444444444 ----------------- However, the same snippet, but with the addition of "postproc=mean_sample()", yields a quite strange p-value, if compared to the mean p-value across folds. What got me suspicious, is that this p-value always remains constant, independently of whether I repeat the analysis n-times (as if no random permutation was actually occurring), and independently of whether I use a different permutator (e.g. AttributePermutator('targets', count=1, limit='chunks')): ----------------- repeats=100 repeater = Repeater(count=repeats) permutator = AttributePermutator('targets', limit={'partitions': 1}, count=1) null_cv = CrossValidation(clf, ChainNode([partitioner, permutator], space=partitioner.get_space()), postproc=mean_sample()) distr_est = MCNullDist(repeater, tail='left', measure=null_cv, enable_ca=['dist_samples']) cv_mc = CrossValidation(clf, partitioner, postproc=mean_sample(), null_dist=distr_est, enable_ca=['confusion', 'stats']) mcerr = cv_mc(fds) In [50]: print cv_mc.ca.stats.stats['ACC'] 0.537037037037 In [51]: p_mc = cv_mc.ca.null_prob In [52]: print np.ravel(p_mc) [ 0.00980392] In [53]: print np.mean(np.ravel(p_mc)) 0.00980392156863 ----------------- Can anybody reproduce this behaviour or explain what I am doing wrong? Below you find some info on my (toy-)dataset and on my installation. Thank you and all the best! Marco -- Marco Tettamanti, Ph.D. 
Nuclear Medicine Department & Division of Neuroscience San Raffaele Scientific Institute Via Olgettina 58 I-20132 Milano, Italy Phone ++39-02-26434888 Fax ++39-02-26434892 Email: tettamanti.marco at hsr.it Skype: mtettamanti http://scholar.google.it/citations?user=x4qQl4AAAAAJ In [98]: print fds.summary() Dataset: 108x100 at float32, , , stats: mean=-0.0109682 std=0.992155 var=0.984372 min=-3.62952 max=4.32006 Counts of targets in each chunk: chunks\targets Whatd Whend Whetd --- --- --- 0.0 2 2 2 1.0 2 2 2 2.0 2 2 2 3.0 2 2 2 4.0 2 2 2 5.0 2 2 2 6.0 2 2 2 7.0 2 2 2 8.0 2 2 2 9.0 2 2 2 10.0 2 2 2 11.0 2 2 2 12.0 2 2 2 13.0 2 2 2 14.0 2 2 2 15.0 2 2 2 16.0 2 2 2 17.0 2 2 2 Summary for targets across chunks targets mean std min max #chunks Whatd 2 0 2 2 18 Whend 2 0 2 2 18 Whetd 2 0 2 2 18 Summary for chunks across targets chunks mean std min max #targets 0 2 0 2 2 3 1 2 0 2 2 3 2 2 0 2 2 3 3 2 0 2 2 3 4 2 0 2 2 3 5 2 0 2 2 3 6 2 0 2 2 3 7 2 0 2 2 3 8 2 0 2 2 3 9 2 0 2 2 3 10 2 0 2 2 3 11 2 0 2 2 3 12 2 0 2 2 3 13 2 0 2 2 3 14 2 0 2 2 3 15 2 0 2 2 3 16 2 0 2 2 3 17 2 0 2 2 3 Sequence statistics for 108 entries from set ['Whatd', 'Whend', 'Whetd'] Counter-balance table for orders up to 2: Targets/Order O1 | O2 | Whatd: 35 1 0 | 34 2 0 | Whend: 0 35 1 | 0 34 2 | Whetd: 0 0 35 | 0 0 34 | Correlations: min=-0.5 max=0.96 mean=-0.0093 sum(abs)=47 In [99]: mvpa2.wtf() Out[99]: Current date: 2015-08-17 16:32 PyMVPA: Version: 2.3.1 Hash: d1da5a749dc9cc606bd7f425d93d25464bf43454 Path: /usr/lib/python2.7/dist-packages/mvpa2/__init__.pyc Version control (GIT): GIT information could not be obtained due "/usr/lib/python2.7/dist-packages/mvpa2/.. is not under GIT" SYSTEM: OS: posix Linux 4.1.0-1-amd64 #1 SMP Debian 4.1.3-1 (2015-08-03) Distribution: debian/stretch/sid EXTERNALS: Present: atlas_fsl, cPickle, ctypes, good scipy.stats.rv_continuous._reduce_func(floc,fscale), good scipy.stats.rv_discrete.ppf, griddata, gzip, h5py, hdf5, ipython, joblib, liblapack.so, libsvm, libsvm verbosity control, lxml, matplotlib, mdp, mdp ge 2.4, mock, nibabel, nipy, nose, numpy, numpy_correct_unique, pprocess, pylab, pylab plottable, pywt, pywt wp reconstruct, reportlab, running ipython env, scipy, sg ge 0.6.4, sg ge 0.6.5, sg_fixedcachesize, shogun, shogun.mpd, shogun.svmocas, skl, statsmodels, weave Absent: atlas_pymvpa, cran-energy, elasticnet, glmnet, good scipy.stats.rdist, hcluster, lars, mass, nipy.neurospin, numpydoc, openopt, pywt wp reconstruct fixed, rpy2, shogun.krr, shogun.lightsvm, shogun.svrlight Versions of critical externals: ctypes : 1.1.0 h5py : 2.5.0 hdf5 : 1.8.13 ipython : 2.3.0 lxml : 3.4.4 matplotlib : 1.4.2 mdp : 3.4 mock : 1.3.0 nibabel : 2.0.1 nipy : 0.4.0.dev numpy : 1.8.2 pprocess : 0.5 reportlab : 3.2.0 scipy : 0.14.1 shogun : 3.2.0 shogun:full : 3.2.0_2014-2-17_18:46 shogun:rev : 197120 skl : 0.16.1 Matplotlib backend: TkAgg RUNTIME: PyMVPA Environment Variables: PYTHONPATH : ":/usr/lib/python2.7/lib-old:/usr/local/lib/python2.7/dist-packages:/usr/lib/python2.7/dist-packages/wx-3.0-gtk2:/usr/lib/python2.7/plat-x86_64-linux-gnu:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/lib-dynload:/usr/bin:.:/home/marco/.ipython:/usr/lib/python2.7/dist-packages:/usr/lib/pymodules/python2.7:/usr/lib/python2.7/dist-packages/IPython/extensions:/usr/lib/python2.7:/usr/lib/python2.7/dist-packages/PILcompat:/home/marco/.cache/scipy/python27_compiled:/usr/lib/python2.7/dist-packages/gtk-2.0:/home/marco/data/bicocca/MVPA_IntentionToMove/mvpa_itm" PyMVPA Runtime Configuration: [general] verbose = 1 [externals] have 
running ipython env = yes have ipython = yes have numpy = yes have scipy = yes have matplotlib = yes have h5py = yes have reportlab = yes have weave = yes have good scipy.stats.rdist = no have good scipy.stats.rv_discrete.ppf = yes have good scipy.stats.rv_continuous._reduce_func(floc,fscale) = yes have pylab = yes have lars = no have elasticnet = no have glmnet = no have skl = yes have ctypes = yes have libsvm = yes have shogun = yes have sg ge 0.6.5 = yes have shogun.mpd = yes have shogun.lightsvm = no have shogun.svrlight = no have shogun.krr = no have shogun.svmocas = yes have sg_fixedcachesize = yes have openopt = no have nibabel = yes have mdp = yes have mdp ge 2.4 = yes have nipy = yes have statsmodels = yes have pywt = yes have cpickle = yes have gzip = yes have cran-energy = no have griddata = yes have nipy.neurospin = no have lxml = yes have atlas_fsl = yes have atlas_pymvpa = no have hcluster = no have hdf5 = yes have joblib = yes have liblapack.so = yes have libsvm verbosity control = yes have mass = no have mock = yes have nose = yes have numpy_correct_unique = yes have numpydoc = no have pprocess = yes have pylab plottable = yes have pywt wp reconstruct = yes have pywt wp reconstruct fixed = no have rpy2 = no have sg ge 0.6.4 = yes Process Information: Name: ipython State: R (running) Tgid: 8970 Ngid: 0 Pid: 8970 PPid: 2053 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 256 Groups: 6 7 20 24 25 27 29 30 44 46 100 104 113 114 116 121 124 132 139 999 1000 NStgid: 8970 NSpid: 8970 NSpgid: 8970 NSsid: 2053 VmPeak: 1748880 kB VmSize: 1099484 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 883520 kB VmRSS: 225784 kB VmData: 376948 kB VmStk: 136 kB VmExe: 3220 kB VmLib: 119892 kB VmPTE: 1732 kB VmPMD: 16 kB VmSwap: 0 kB Threads: 11 SigQ: 0/126979 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000001001000 SigCgt: 0000000180000002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 0000003fffffffff Seccomp: 0 Cpus_allowed: ff Cpus_allowed_list: 0-7 Mems_allowed: 00000000,00000001 Mems_allowed_list: 0 voluntary_ctxt_switches: 90582 nonvoluntary_ctxt_switches: 5294 From jbaub at bu.edu Tue Aug 18 18:25:02 2015 From: jbaub at bu.edu (John Baublitz) Date: Tue, 18 Aug 2015 14:25:02 -0400 Subject: [pymvpa] Surface searchlight taking 6 to 8 hours In-Reply-To: References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> Message-ID: Sorry for the delay. I have been traveling. I did not try to load them as both at the same time. I tried to load it as a surface and that threw an error in FreeSurfer (mris is NULL) and then I tried to load it as a volume and it didn't load either (no overlay data found in file). I am still struggling to figure out exactly how I should be loading this data. Is this a better question for FreeSurfer's mailing list? I'm not entirely sure as it seems possible that it could be either a format output issue (PyMVPA) or a FreeSurfer-specific issue. On Thu, Jul 30, 2015 at 5:27 AM, Nick Oosterhof < n.n.oosterhof at googlemail.com> wrote: > > > On 29 Jul 2015, at 20:57, John Baublitz wrote: > > > > Thank you very much for the support. Unfortunately I have tried using > this GIFTI file that it outputs with FreeSurfer as an overlay and surface > > Both at the same time? > > > and it throws errors for all FreeSurfer utils and even AFNI utils. > FreeSurfer mris_convert outputs: > > > > mriseadGIFTIfile: mris is NULL! 
found when parsing file > f_mvpa_rh.func.gii > > > > This seems to indicate that it is not saving it as a surface file. > Likewise AFNI's gifti_tool outputs: > > > > ** failed to find coordinate/triangle structs > > > > How exactly is the data being stored in the GIFTI file? It seems that it > is not saving it as triangles and coordinates even based on the code you > linked to in the github commit given that the NIFTI intent codes are > neither NIFTI_INTENT_POINTSET nor NIFTI_INTENT_TRIANGLE by default. > > For your current purposes (visualizing surface-based data), consider there > are two types of "surface" GIFTI files: > > 1) "functional" node-data, where each node is associated with the same > number of values. Examples are time series data or statistical maps. > Typical extensions are .func.gii or .time.gii. > 2) "anatomical" surfaces, that have coordinates in 3D space (with > NIFTI_INTENT_POINTSET) and node indices in face information (with > NIFTI_INTENT_TRIANGLE). The typical extension is surf.gii. > > In PyMVPA: > (1) "functional" surface data is handled through mvpa2.datasets.gifti. > Data is stored in a Dataset instance. > (2) "anatomical" surfaces are handled through mvpa2.support.nibabel.surf > (for GIFTI, mvpa2.support.nibabel.surf_gifti). Vertex coordinates and face > indices are stored in a Surface instance (from mvpa2.support.nibabel.surf) > > (I'm aware that documentation about this distinction can be improved in > PyMVPA). > > > I've also run into a problem where the dataset that I've loaded has no > intent codes and unfortunately it appears that this means that the NIFTI > intent code is set to NIFTI_INTENT_NONE. > > Why is that a problem? What are you trying to achieve? If the dataset has > no intent, then NIFTI_INTENT_NONE seems valid to me, as the GIFTI standard > describes this as "Data intent not specified". > > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From billbrod at gmail.com Tue Aug 18 19:05:51 2015 From: billbrod at gmail.com (Bill Broderick) Date: Tue, 18 Aug 2015 15:05:51 -0400 Subject: [pymvpa] Labels from permutation testing In-Reply-To: <20150815031813.GH21055@onerussian.com> References: <20150815031813.GH21055@onerussian.com> Message-ID: > why do you want separate permutations per each center_id? > null_dist here than mixes random results across all the searchlights if > I see it correctly. IMHO it is better to estimate that distribution per > each searchlight center (what would happen anyways if you didn't do this > manual "for i in searchlights") I actually don't want separate permutations for each center_id. That was an oversight. But I do want to run the searchlights separately from each other (so I can parallelize them), which was what I was trying to do. I think I'll preseed the RNG the same across all of the searchlights, and that should do what I want. > once again I feel that trying to keep target labeling in test split > might be complicating things for your more than being of any value (do > you get notably different significance results in comparison to simple > permutations in all runs?). 
I was a bit uncomfortable with the idea of doing the simple permutation, but re-reading the Stelzer et al (2013) paper where they introduce the cluster thresholding method (which we're planning on using), it looks like that's what they do as well. I haven't had a chance to compare them yet, but I felt like there might be some theoretical issues. Since there doesn't seem to be any consensus (and there's some argument in favor of the simple permutations), I'll start with that way. Thanks, Bill On Fri, Aug 14, 2015 at 11:18 PM, Yaroslav Halchenko wrote: > > > On Mon, 20 Jul 2015, Bill Broderick wrote: > > > Hi all, > > > I feel like this should be relatively simple, but I can't figure out how to > > do it. Is it possible to get at the labels generated by > > AttributePermutator? I would like to see what the individual permutations > > look like, to make sure it's doing what I think it is, but other than > > saving the whole dataset generated by CrossValidation, I can't see a way to > > do it. > > > I'm trying to build a null distribution like the following, so I can save > > each permutation, each searchlight separately (with how long the > > permutation testing has been taking, I want to make sure there's constant > > output in case something crashes and so I can monitor its progress, so I'm > > not using MCNullDist). > > if you were just want to check "in general" on what permutations permutator > generates, and didn't have limit={'partitions':1}, count=1 you could just > > [x.targets for x in permutator.generate(ds)] > > then if you preseeded RNG the same way before testing permutator > (e.g. mvpa2.seed(index_of_subject)) you could thus collect all those > generations without running actual analysis pipeline > > > for i in searchlights: > > for j in permutations: > > permutator = > > AttributePermutator('targets',limit={'partitions':1},count=1) > > nf = > > NFoldPartitioner(attr=partition_attr,cvtype=leave_x_out,count=fold_num,selection_strategy=fold_select_strategy) > > null_cv = > > CrossValidation(clf,ChainNode([nf,permutator],space=nf.get_space()),enable_ca='datasets',pass_attr=[('ca.datasets','fa')]) > > sl_null = sphere_searchlight(null_cv,radius=3,center_ids=[i]) > > null_dist.append(sl_null(ds)) > > null_dist=hstack(null_dist) > > why do you want separate permutations per each center_id? > null_dist here than mixes random results across all the searchlights if > I see it correctly. IMHO it is better to estimate that distribution per > each searchlight center (what would happen anyways if you didn't do this > manual "for i in searchlights") > > > once again I feel that trying to keep target labeling in test split > might be complicating things for your more than being of any value (do > you get notably different significance results in comparison to simple > permutations in all runs?). > -- > Yaroslav O. Halchenko, Ph.D. > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. 
> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa From n.n.oosterhof at googlemail.com Wed Aug 19 13:07:39 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Wed, 19 Aug 2015 15:07:39 +0200 Subject: [pymvpa] GIFTI i/o issue [was: Surface searchlight taking 6 to 8 hours] In-Reply-To: References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> Message-ID: <2AA60F76-621C-4425-B8E9-BF06E75CE667@googlemail.com> > On 18 Aug 2015, at 20:25, John Baublitz wrote: > > Sorry for the delay. I have been traveling. I did not try to load them as both at the same time. I tried to load it as a surface and that threw an error in FreeSurfer (mris is NULL) This error suggests you are trying to use the surface file as an anatomical file (with vertices and faces), and that the vertex and face information is not found. See utils/gifti_local.c [1]. Is that correct? Note that if you want to save an overlay (functional data) from a PyMVPA Dataset with a .samples field, you should use datasets/gifti. Anatomical surface files are handled through modules in mvpa2.support.nibabel. > I am still struggling to figure out exactly how I should be loading this data. Could you please provide more details about what type of data (anatomical or functional) you are trying to export from PyMVPA and import in FreeSurfer? [1] https://github.com/solleo/freesurfer/blob/master/utils/gifti_local.c From axel.vadim at gmail.com Thu Aug 20 08:59:44 2015 From: axel.vadim at gmail.com (Vadim Axel) Date: Thu, 20 Aug 2015 11:59:44 +0300 Subject: [pymvpa] Interpreting representation similarity results Message-ID: Hi, A very simple question: to what extent can representational similarity be interpreted as similarity of cognitive processing? Consider a toy example, where I have two experiments. In Exp.1 there is task A and baseline1. In Exp.2 there is task B and baseline2. For each experiment, I generate t-contrasts: A > baseline1 and B > baseline2. To check for similarity between tasks A and B, I can run a conjunction analysis (spatial overlap). For stronger evidence, I can, for each experiment, extract t-values for some predefined ROIs. Then, I run a Pearson correlation across voxels within a ROI. Using across-subjects statistics I can show that in some ROIs the correlation between experiments is above 0. Can this result be interpreted as similarity of cognitive processing during the two tasks? Also, does anyone know about papers that examined similarity between experiments using a contrast (and not Haxby-2001-style patterns of single faces vs cats)? In my case, Exps 1 and 2 have very different designs, so A and B cannot be compared directly. In general, good references for citing are highly appreciated. Thanks a lot, Vadim -------------- next part -------------- An HTML attachment was scrubbed...
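The ROI correlation described here can be sketched in a few lines; the file names below are hypothetical, scipy does the correlation and the across-subjects comparison, and this is only an untested illustration of the procedure, not a recommended pipeline:

import numpy as np
from scipy.stats import pearsonr, ttest_rel
from mvpa2.datasets.mri import fmri_dataset

# assumed inputs: per-subject t-contrast images and an ROI mask (hypothetical names)
ds_a = fmri_dataset('sub01_tmap_A_gt_baseline1.nii.gz', mask='roi_mask.nii.gz')
ds_b = fmri_dataset('sub01_tmap_B_gt_baseline2.nii.gz', mask='roi_mask.nii.gz')

# within-ROI pattern similarity between the two experiments for one subject
r, _ = pearsonr(ds_a.samples.ravel(), ds_b.samples.ravel())

# across subjects: collect one r per subject (e.g. Fisher-z transformed) and
# compare the task of interest against a control task with a paired t-test:
# t, p = ttest_rel(np.arctanh(r_task), np.arctanh(r_control))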
URL: From n.n.oosterhof at googlemail.com Thu Aug 20 11:18:08 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 20 Aug 2015 13:18:08 +0200 Subject: [pymvpa] Interpreting representation similarity results In-Reply-To: References: Message-ID: > On 20 Aug 2015, at 10:59, Vadim Axel wrote: > > A very simple question: to what extent can representational similarity be interpreted as similarity of cognitive processing? > > Consider a toy example, where I have two experiments. In Exp.1 there is task A and baseline1. In Exp.2 there is task B and baseline2. For each experiment, I generate t-contrasts: A > baseline1 and B > baseline2. To check for similarity between tasks A and B, I can run a conjunction analysis (spatial overlap). For stronger evidence, I can, for each experiment, extract t-values for some predefined ROIs. Then, I run a Pearson correlation across voxels within a ROI. Using across-subjects statistics I can show that in some ROIs the correlation between experiments is above 0. Can this result be interpreted as similarity of cognitive processing during the two tasks? It would indicate that *something* is similar (at a pattern level) between the two tasks. You may possibly interpret this as cognitive processing, but cognitive processing is a rather broad concept. Pattern similarity can arise through a variety of different mechanisms, including trivial ones. > Also, does anyone know about papers that examined similarity between experiments using a contrast (and not Haxby-2001-style patterns of single faces vs cats)? In my case, Exps 1 and 2 have very different designs, so A and B cannot be compared directly. In general, good references for citing are highly appreciated. This may be considered shameless self-promotion, but I have done some work on executing versus observing different manual actions [1], and on imagery and execution/observation of such actions [2]. [1] Oosterhof, N. N., Wiggett, A. J., Diedrichsen, J., Tipper, S. P. & Downing, P. E. Surface-based information mapping reveals crossmodal vision-action representations in human parietal and occipitotemporal cortex. J. Neurophysiol. 104, 1077-1089 (2010). [2] Oosterhof, N. N., Tipper, S. P. & Downing, P. E. Visuo-motor imagery of specific manual actions: A multi-variate pattern analysis fMRI study. Neuroimage 63, 262-271 (2012). From axel.vadim at gmail.com Thu Aug 20 12:25:15 2015 From: axel.vadim at gmail.com (Vadim Axel) Date: Thu, 20 Aug 2015 15:25:15 +0300 Subject: [pymvpa] Interpreting representation similarity results In-Reply-To: References: Message-ID: Thanks for the answer. Suppose I can show similarity between two tasks a) only in a specific region, but not in other regions, and b) do not get similarity in this region when I use some control task. Do you see a trivial, non-cognitive explanation for this? Thanks for the refs. So, you show that vision and action have a similar neural representation. Representation is obviously the straightforward interpretation, whereas cognitive processing is a next step. In my case, I also have a baseline, because I check whether A > baseline1 is similar to B > baseline2. But conceptually, I think it is all the same. On Thu, Aug 20, 2015 at 2:18 PM, Nick Oosterhof < n.n.oosterhof at googlemail.com> wrote: > > > On 20 Aug 2015, at 10:59, Vadim Axel wrote: > > > > Very simple question: to what extent representation similarity can be > interpreted as similarity of cognitive processing? > > > > Consider a toy sample, where I have two experiments.
In Exp.1 there is > task A and baseline1. In Exp.2 there there is task B and baseline2. For > each experiment, I generate t-contrasts: A > baseline1 and B > baseline2. > To check for similarity between tasks A and B, I can run conjunction > analysis (spatial overlap). For stronger evidence, I can for each > experiment, extract t-values for some predefined ROIs. Then, I run Pearson > correlation across voxels within a ROI. Using across subjects statistics I > can show that in some ROIs the correlation between experiments is above 0. > Can this result be interpreted, as having similarity of cognitive > processing during two tasks? > > It would indicate that *something* is similar (at a pattern level) between > the two tasks. You may possibly interpret this as cognitive processing, but > cognitive processing is a rather broad concept. Pattern similarity can > arise through a variety of different mechanisms, including trivial ones. > > > Also, does someone know about papers that examined similarity between > experiments using a contrast (and not Haxby_2001_like_style of patterns of > single faces vs cats). In my case, Exps 1 and 2 have very different > designs, so A and B cannot be compared directly. In general, good > references for citing are highly appreciated. > > This may be considered as shameless self-promotion, but I have done some > work on executing versus observing different manual actions [1], and > imagery and execution/observation of such actions [2]. > > [1] Oosterhof, N. N., Wiggett, A. J., Diedrichsen, J., tipper, S. P. & > Downing, P. E. Surface-based information mapping reveals crossmodal > vision-action representations in human parietal and occipitotemporal > cortex. J. Neurophysiol. 104, 1077?1089 (2010). > [2] Oosterhof, N. N., Tipper, S. P. & Downing, P. E. Visuo-motor imagery > of specific manual actions: A multi-variate pattern analysis fMRI study. > Neuroimage 63, 262?271 (2012). > > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Fri Aug 21 08:59:15 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Fri, 21 Aug 2015 10:59:15 +0200 Subject: [pymvpa] Interpreting representation similarity results In-Reply-To: References: Message-ID: <283FAA69-9E92-427C-A9D6-63464B7D56C1@googlemail.com> > On 20 Aug 2015, at 14:25, Vadim Axel wrote: > > Suppose, I can similarity between two tasks a) only in specific region, but not other regions and b) do not get similarity in this region when I use some control task. Do you see a trivial, non-cognitive explanation to this? With respect to the control task: - how specific you can be about inferences of the main task versus control depends on how good the control is. *Anything* that is different between the main task and the control task could potentially explain such effects. This may include differences in low-level and high-level features for the stimuli, memory and attention demands, task difficulty, predictability of conditions, etc. As you did not specify what types of tasks you used, I cannot be more specific about potential trivial explanations. - showing that task A gives a significant effect but task C (control) does not, is rather weak and uninteresting. This can be a case of p=0.049 versus p=0.051 (with alpha=0.05). 
More informative is whether task A shows a stronger effect (similarity, in your case) than task C, for example through a paired t-test. - interpreting BOLD signals in terms of cognitive mechanisms is not straightforward. It may be possible that in a region, certain neural processing is not detectable in the BOLD signal, even when single unit recordings show that such processing does take place there. Finding BOLD pattern differences between two tasks clearly suggests differences in processing at the neural level, but the step to cognitive mechanisms is more difficult. > > Thanks for refs. So, you show that vision and action have similar neural representation. It goes a step further. The first reference shows that for two different actions A and B, the neural pattern of A when performed (executed) is more similar to neural pattern when A is observed than to the neural pattern when B is observed. In other words, it shows cross-modal (across vision and execution), action-specific patterns. The second refs shows a similar effect for imagery versus execution and observation. From axel.vadim at gmail.com Fri Aug 21 10:45:18 2015 From: axel.vadim at gmail.com (Vadim Axel) Date: Fri, 21 Aug 2015 13:45:18 +0300 Subject: [pymvpa] Interpreting representation similarity results In-Reply-To: <283FAA69-9E92-427C-A9D6-63464B7D56C1@googlemail.com> References: <283FAA69-9E92-427C-A9D6-63464B7D56C1@googlemail.com> Message-ID: I think we understood something different by "trivial" term :) I observed correlated neural activity for two tasks in some region. For me one trivial explanation would be for example, that all the brain increases its activity during the task (e.g., general arousal or brain vasculature). To rule out this, I show that correlation is only in a specific region. Another trivial explanation, is that anatomical connectivity of my specific region is that it is always activated similarly. To rule out this one, I show that there is a task which does not result in correlated activity in this region. The trivial you refer to are the not-well-controlled experimental manipulation. So, for example, if I hypothetically get correlated activity in the FFA for white faces and white clocks, I cannot say that cognitive processing of faces and clocks is similar, because it can be processing of white color which is similar. However, the latter one is still "cognitive processing", and it was similar between tasks. It was just not the processing I was interested. Clearly, the control conditions here need to be very specific. Makes sense? Do you have more trivial explanations of first type? On Fri, Aug 21, 2015 at 11:59 AM, Nick Oosterhof < n.n.oosterhof at googlemail.com> wrote: > > > On 20 Aug 2015, at 14:25, Vadim Axel wrote: > > > > Suppose, I can similarity between two tasks a) only in specific region, > but not other regions and b) do not get similarity in this region when I > use some control task. Do you see a trivial, non-cognitive explanation to > this? > > With respect to the control task: > > - how specific you can be about inferences of the main task versus control > depends on how good the control is. *Anything* that is different between > the main task and the control task could potentially explain such effects. > This may include differences in low-level and high-level features for the > stimuli, memory and attention demands, task difficulty, predictability of > conditions, etc. As you did not specify what types of tasks you used, I > cannot be more specific about potential trivial explanations. 
> > - showing that task A gives a significant effect but task C (control) does > not, is rather weak and uninteresting. This can be a case of p=0.049 versus > p=0.051 (with alpha=0.05). More informative is whether task A shows a > stronger effect (similarity, in your case) than task C, for example through > a paired t-test. > > - interpreting BOLD signals in terms of cognitive mechanisms is not > straightforward. It may be possible that in a region, certain neural > processing is not detectable in the BOLD signal, even when single unit > recordings show that such processing does take place there. Finding BOLD > pattern differences between two tasks clearly suggests differences in > processing at the neural level, but the step to cognitive mechanisms is > more difficult. > > > > > Thanks for refs. So, you show that vision and action have similar neural > representation. > > It goes a step further. The first reference shows that for two different > actions A and B, the neural pattern of A when performed (executed) is more > similar to neural pattern when A is observed than to the neural pattern > when B is observed. In other words, it shows cross-modal (across vision and > execution), action-specific patterns. The second refs shows a similar > effect for imagery versus execution and observation. > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Sun Aug 23 11:21:20 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Sun, 23 Aug 2015 13:21:20 +0200 Subject: [pymvpa] Interpreting representation similarity results In-Reply-To: References: <283FAA69-9E92-427C-A9D6-63464B7D56C1@googlemail.com> Message-ID: <22736AFA-66B8-4E5D-B780-B78159FBA769@googlemail.com> > On 21 Aug 2015, at 12:45, Vadim Axel wrote: > > I think we understood something different by "trivial" term :) > > I observed correlated neural activity for two tasks in some region. For me one trivial explanation would be for example, that all the brain increases its activity during the task (e.g., general arousal or brain vasculature). To rule out this, I show that correlation is only in a specific region. Another trivial explanation, is that anatomical connectivity of my specific region is that it is always activated similarly. To rule out this one, I show that there is a task which does not result in correlated activity in this region. That indeed suggests specificity in both location and task, which is good. As I wrote before, however, it may be even more convincing if the control task shows a significant weaker correlated activity than the task of interest (the p=0.049 versus p=0.051 case for alpha=0.05). > > The trivial you refer to are the not-well-controlled experimental manipulation. So, for example, if I hypothetically get correlated activity in the FFA for white faces and white clocks, I cannot say that cognitive processing of faces and clocks is similar, because it can be processing of white color which is similar. However, the latter one is still "cognitive processing", and it was similar between tasks. It was just not the processing I was interested. Clearly, the control conditions here need to be very specific. > > Makes sense? Do you have more trivial explanations of first type? 
That all makes sense and it seems we think along the same lines. Other 'trivial' explanations of the 'first type' did not come to mind. To summarise what we both said earlier, showing spatial specificity and task specificity are important for claims about the involvement of a particular region - and how specific these claims can be depends largely on how good the controls are. Good luck with the analysis! From jbaub at bu.edu Mon Aug 24 20:12:05 2015 From: jbaub at bu.edu (John Baublitz) Date: Mon, 24 Aug 2015 16:12:05 -0400 Subject: [pymvpa] GIFTI i/o issue [was: Surface searchlight taking 6 to 8 hours] In-Reply-To: <2AA60F76-621C-4425-B8E9-BF06E75CE667@googlemail.com> References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> <2AA60F76-621C-4425-B8E9-BF06E75CE667@googlemail.com> Message-ID: I am trying to export functional data from PyMVPA. Our lab is specifically trying to load functional data from the analysis (a list of vertices and a value associated with each vertex from the analysis) and visualize it in FreeSurfer. As I understand it, I have been saving it as functional data (using map2gifti) based on the update to PyMVPA that you committed. Unfortunately there seems to be an issue where FreeSurfer will not load an overlay with an extension .gii. I am happy to email FreeSurfer if this is more of a FreeSurfer issue but it seems that the .gii file output by PyMVPA does not really work with any FreeSurfer utils, not just the visualization as I mentioned with mris_convert. I am still unsure of where this issue is coming from: FreeSurfer or PyMVPA. On Wed, Aug 19, 2015 at 9:07 AM, Nick Oosterhof < n.n.oosterhof at googlemail.com> wrote: > > > On 18 Aug 2015, at 20:25, John Baublitz wrote: > > > > Sorry for the delay. I have been traveling. I did not try to load them > as both at the same time. I tried to load it as a surface and that threw an > error in FreeSurfer (mris is NULL) > > This error suggests you are trying to use the surface file as an > anatomical file (with vertices and faces), and that the vertex and face > information is not found. See utils/gifti_local.c [1]. > > Is that correct? Note that if you want to save an overlay (functional > data) from a PyMVPA Dataset with a .samples field, you should use > datasets/gifti. Anatomical surface files are handled through modules in > mvpa2.support.nibabel. > > > I am still struggling to figure out exactly how I should be loading this > data. > > Could you please provide more details about what type of data (anatomical > or functional) you are trying to export from PyMVPA and import in > FreeSurfer? > > [1] https://github.com/solleo/freesurfer/blob/master/utils/gifti_local.c > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From n.n.oosterhof at googlemail.com Tue Aug 25 16:18:43 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Tue, 25 Aug 2015 18:18:43 +0200 Subject: [pymvpa] GIFTI i/o issue In-Reply-To: References: <76693982-4DF9-44C0-9814-9547A98F7E6F@googlemail.com> <674EB3D9-FBA9-4F54-A72C-96C283F1DBA0@googlemail.com> <2AA60F76-621C-4425-B8E9-BF06E75CE667@googlemail.com> Message-ID: <52E80E78-0B07-484E-A9A0-CDA8F528006B@googlemail.com> > On 24 Aug 2015, at 22:12, John Baublitz wrote: > > I am trying to export functional data from PyMVPA. Our lab is specifically trying to load functional data from the analysis (a list of vertices and a value associated with each vertex from the analysis) and visualize it in FreeSurfer. As I understand it, I have been saving it as functional data (using map2gifti) based on the update to PyMVPA that you committed. Unfortunately there seems to be an issue where FreeSurfer will not load an overlay with an extension .gii. I am happy to email FreeSurfer if this is more of a FreeSurfer issue but it seems that the .gii file output by PyMVPA does not really work with any FreeSurfer utils, not just the visualization as I mentioned with mris_convert. I am still unsure of where this issue is coming from: FreeSurfer or PyMVPA. It seems it is due to a combination of the two: - FreeSurfer seems to require the presence of anatomical data (vertices and faces) in a GIFTI file, even for functional data. - The recent updates in PyMVPA could result in writing GIFTI-incompatible data, in particular when data was provided as float64 or int64 data. GIFTI only supports 8 and 32 bit data. I've added support to the map2gifti function for a "surface" argument, which takes a anatomical Surface object (from mvpa2.support.nibabel.surf) or a filename of an anatomical surface. Also, data is casted to 32 bit representations. Storing GIFTI overlay data together with the anatomical data seems to make mris_convert happy. This is currently submitted in a PR [1]. For example, if ds is a PyMVPA dataset, you can do: from mvpa2.suite import * anat_surface=surf.from_any('pial.gii') map2gifti(ds, filename='data_for_mris_convert.gii', surface=anat_surface) and then mris_convert data_for_mris_convert.gii data.asc Can you see if this PR fixes the GIFTI issues for you? [1] https://github.com/PyMVPA/PyMVPA/pull/357 From mrctttmnt at gmail.com Fri Aug 28 11:48:25 2015 From: mrctttmnt at gmail.com (marco tettamanti) Date: Fri, 28 Aug 2015 13:48:25 +0200 Subject: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight Message-ID: <55E04A89.1050500@gmail.com> Dear all, is it possible to obtain confusion matrices for all nodes with "sphere_gnbsearchlight", as was suggested before with "sphere_searchlight": slcvte = CrossValidation(clf, partitioner, errorfx=None, postproc=ChainNode([Confusion(labels=fds.UT)])) class KeepConfusionMatrix(Node): def _call(self, fds): out = np.zeros(1, dtype=object) out[0] = (fds.samples) return out slcvte.postproc.append(KeepConfusionMatrix()) slght = sphere_searchlight(slcvte, radius=slradius, space='voxel_indices', nproc=4, postproc=mean_sample()) slght_map = slght(fds) Thank you and best wishes, Marco -- Marco Tettamanti, Ph.D. Nuclear Medicine Department & Division of Neuroscience San Raffaele Scientific Institute Via Olgettina 58 I-20132 Milano, Italy Phone ++39-02-26434888 Fax ++39-02-26434892 Email: tettamanti.marco at hsr.it Skype: mtettamanti -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From debian at onerussian.com Fri Aug 28 13:16:38 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Fri, 28 Aug 2015 09:16:38 -0400 Subject: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight In-Reply-To: <55E04A89.1050500@gmail.com> References: <55E04A89.1050500@gmail.com> Message-ID: <20150828131638.GN19455@onerussian.com> On Fri, 28 Aug 2015, marco tettamanti wrote: > Dear all, > is it possible to obtain confusion matrices for all nodes with > "sphere_gnbsearchlight", as was suggested before with > "sphere_searchlight": > slcvte = CrossValidation(clf, partitioner, errorfx=None, > postproc=ChainNode([Confusion(labels=fds.UT)])) > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > slcvte.postproc.append(KeepConfusionMatrix()) > slght = sphere_searchlight(slcvte, radius=slradius, space='voxel_indices', > nproc=4, postproc=mean_sample()) > slght_map = slght(fds) quick an possible partial reply 1. "not sure" -- if it pukes then probably not, although judging from the code I foresaw arbitrary shape of the errorfx output 2. but you could make sphere_gnbsearchlight to return labels (not errors) and then post-process to get those confusion matrices. Just specify errorfx=None to it (not to CV). But you could also try passing errorfx=ConfusionMatrixError and see how that goes Please share what you discover/end up with. mvpa2/tests/test_usecases.py has more of usecase demos for gnb searchlights which might come handy -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From mrctttmnt at gmail.com Fri Aug 28 15:28:55 2015 From: mrctttmnt at gmail.com (marco tettamanti) Date: Fri, 28 Aug 2015 17:28:55 +0200 Subject: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight In-Reply-To: <55E04A89.1050500@gmail.com> References: <55E04A89.1050500@gmail.com> Message-ID: <55E07E37.7030500@gmail.com> **Dear Yaroslav, thank you very much for your reply. I have made several attempts, trying to guess a solution, but it seems I always get a 'TypeError: 'NoneType' object is not callable'. Any further advice is greatly appreciated! Best, Marco Case 1: slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, space='voxel_indices', errorfx=None, postproc=mean_sample()) slght_map = slght(fds) In [70]: slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, space='voxel_indices', errorfx=None, postproc=mean_sample()) In [71]: slght_map = slght(fds) [SLC] DBG: Phase 1. Initializing partitions using on , , > [SLC] DBG: Phase 2. Blocking data for 18 splits and 3 labels [SLC] DBG: Phase 3. Computing statistics for 54 blocks [SLC] DBG: Phase 4. Deducing neighbors information for 111 ROIs [SLC] DBG: Phase 4b. Converting neighbors to sparse matrix representation [SLC] DBG: Phase 5. Major loop [SLC] DBG: Split 0 out of 18 [SLC] DBG: 'Training' is done [SLC] DBG: Doing 'Searchlight' [SLC] DBG: Assessing accuracies --------------------------------------------------------------------------- TypeError Traceback (most recent call last) in () ----> 1 slght_map = slght(fds) /usr/lib/python2.7/dist-packages/mvpa2/base/learner.pyc in __call__(self, ds) 257 "used and auto training is disabled." 
258 % str(self)) --> 259 return super(Learner, self).__call__(ds) 260 261 /usr/lib/python2.7/dist-packages/mvpa2/base/node.pyc in __call__(self, ds) 119 120 self._precall(ds) --> 121 result = self._call(ds) 122 result = self._postcall(ds, result) 123 /usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.pyc in _call(self, dataset) 141 142 # pass to subclass --> 143 results = self._sl_call(dataset, roi_ids, nproc) 144 145 if 'mapper' in dataset.a: /usr/lib/python2.7/dist-packages/mvpa2/measures/adhocsearchlightbase.pyc in _sl_call(self, dataset, roi_ids, nproc) 513 # error functions without a chance to screw up 514 for i, fpredictions in enumerate(predictions.T): --> 515 results[isplit, i] = errorfx(fpredictions, targets) 516 517 TypeError: 'NoneType' object is not callable Similarly for other cases and combinations of them: Case 2: slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, space='voxel_indices', errorfx=ConfusionMatrixError(), postproc=mean_sample()) slght_map = slght(fds) Case3: class KeepConfusionMatrix(Node): def _call(self, fds): out = np.zeros(1, dtype=object) out[0] = (fds.samples) return out slght = sphere_gnbsearchlight(clf, partitioner, errorfx=None, radius=slradius, space='voxel_indices', postproc=ChainNode([Confusion(labels=fds.UT)])) slght.postproc.append(KeepConfusionMatrix()) slght_map = slght(fds) Case4: class KeepConfusionMatrix(Node): def _call(self, fds): out = np.zeros(1, dtype=object) out[0] = (fds.samples) return out slght = sphere_gnbsearchlight(clf, partitioner, errorfx=None, radius=slradius, space='voxel_indices', postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) slght.postproc.append(KeepConfusionMatrix()) slght_map = slght(fds) Case5: class KeepConfusionMatrix(Node): def _call(self, fds): out = np.zeros(1, dtype=object) out[0] = (fds.samples) return out slght = sphere_gnbsearchlight(clf, partitioner, errorfx=ConfusionMatrixError(), radius=slradius, space='voxel_indices', postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) slght.postproc.append(KeepConfusionMatrix()) slght_map = slght(fds) > Yaroslav Halchenko debian at onerussian.com > Fri Aug 28 13:16:38 UTC 2015 > quick an possible partial reply > > 1. "not sure" -- if it pukes then probably not, although judging from > the code I foresaw arbitrary shape of the errorfx output > > 2. but you could make sphere_gnbsearchlight to return labels (not > errors) and then post-process to get those confusion matrices. Just > specify errorfx=None to it (not to CV). But you could also try > passing errorfx=ConfusionMatrixError and see how that goes > > Please share what you discover/end up with. > mvpa2/tests/test_usecases.py has more of usecase demos for gnb > searchlights which might come handy > > -- > Yaroslav O. Halchenko, Ph.D. > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. 
> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW:http://www.linkedin.com/in/yarik > On 08/28/2015 01:48 PM, marco tettamanti wrote: > Dear all, > is it possible to obtain confusion matrices for all nodes with > "sphere_gnbsearchlight", as was suggested before with "sphere_searchlight": > > slcvte = CrossValidation(clf, partitioner, errorfx=None, > postproc=ChainNode([Confusion(labels=fds.UT)])) > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > > slcvte.postproc.append(KeepConfusionMatrix()) > slght = sphere_searchlight(slcvte, radius=slradius, space='voxel_indices', > nproc=4, postproc=mean_sample()) > slght_map = slght(fds) > > > Thank you and best wishes, > Marco > -- > Marco Tettamanti, Ph.D. > Nuclear Medicine Department & Division of Neuroscience > San Raffaele Scientific Institute > Via Olgettina 58 > I-20132 Milano, Italy > Phone ++39-02-26434888 > Fax ++39-02-26434892 > Email:tettamanti.marco at hsr.it > Skype: mtettamanti -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Fri Aug 28 16:15:09 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Fri, 28 Aug 2015 12:15:09 -0400 Subject: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight In-Reply-To: <55E07E37.7030500@gmail.com> References: <55E04A89.1050500@gmail.com> <55E07E37.7030500@gmail.com> Message-ID: <20150828161509.GS19455@onerussian.com> On Fri, 28 Aug 2015, marco tettamanti wrote: > Dear Yaroslav, > thank you very much for your reply. I have made several attempts, trying > to guess a solution, but it seems I always get a > 'TypeError: 'NoneType' object is not callable'. oh shoot... forgotten that this one was implemented after the last 2.4.0 release: in upstream/2.4.0-34-g55e147e this June... we should release I guess. what system are you on and what version of pymvpa currently? if you could use/try the one from git directly... ? > Case 1: > slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > space='voxel_indices', errorfx=None, postproc=mean_sample()) not the problem here BUT there should be no mean_sample() if errorfx is None -- you wouldn't want to average labels ;) -- Yaroslav O. Halchenko, Ph.D. http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org Research Scientist, Psychological and Brain Sciences Dept. Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From mrctttmnt at gmail.com Fri Aug 28 18:23:07 2015 From: mrctttmnt at gmail.com (marco tettamanti) Date: Fri, 28 Aug 2015 20:23:07 +0200 Subject: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight In-Reply-To: <55E07E37.7030500@gmail.com> References: <55E04A89.1050500@gmail.com> <55E07E37.7030500@gmail.com> Message-ID: <55E0A70B.1070802@gmail.com> Thanks again! I am on Debian testing (well, reverted on stable now, because of troubles with gcc5) and have version 2.3.1. I will give a try to the one from git. Best, Marco PyMVPA: Version: 2.3.1 Hash: d1da5a749dc9cc606bd7f425d93d25464bf43454 Path: /usr/lib/python2.7/dist-packages/mvpa2/__init__.pyc Version control (GIT): GIT information could not be obtained due "/usr/lib/python2.7/dist-packages/mvpa2/.. 
is not under GIT" SYSTEM: OS: posix Linux 4.1.0-1-amd64 #1 SMP Debian 4.1.3-1 (2015-08-03) Distribution: debian/stretch/sid > *Yaroslav Halchenko* debian at onerussian.com > > /Fri Aug 28 16:15:09 UTC 2015/ > -------------------------------------------------------------------------------- > On Fri, 28 Aug 2015, marco tettamanti wrote: > > >/ Dear Yaroslav, > />/ thank you very much for your reply. I have made several attempts, trying > />/ to guess a solution, but it seems I always get a > />/ 'TypeError: 'NoneType' object is not callable'. > / > oh shoot... forgotten that this one was implemented after the last 2.4.0 > release: in upstream/2.4.0-34-g55e147e this June... we should release I > guess. what system are you on and what version of pymvpa currently? > if you could use/try the one from git directly... ? > > >/ Case 1: > />/ slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > />/ space='voxel_indices', errorfx=None, postproc=mean_sample()) > / > not the problem here BUT there should be no mean_sample() if errorfx is > None -- you wouldn't want to average labels ;) > > -- > Yaroslav O. Halchenko, Ph.D. > http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW:http://www.linkedin.com/in/yarik > On 08/28/2015 05:28 PM, marco tettamanti wrote: > Dear Yaroslav, > thank you very much for your reply. I have made several attempts, trying to > guess a solution, but it seems I always get a > 'TypeError: 'NoneType' object is not callable'. > > Any further advice is greatly appreciated! > Best, > Marco > > > Case 1: > slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > space='voxel_indices', errorfx=None, postproc=mean_sample()) > slght_map = slght(fds) > > In [70]: slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > space='voxel_indices', errorfx=None, postproc=mean_sample()) > > In [71]: slght_map = slght(fds) > [SLC] DBG: Phase 1. Initializing partitions using > on chunks,targets,time_coords,time_indices>, , imgaffine,imghdr,imgtype,mapper,voxel_dim,voxel_eldim>> > [SLC] DBG: Phase 2. Blocking data for 18 splits and 3 labels > [SLC] DBG: Phase 3. Computing statistics for 54 blocks > [SLC] DBG: Phase 4. Deducing neighbors information for 111 ROIs > [SLC] DBG: Phase 4b. Converting neighbors to sparse matrix > representation > [SLC] DBG: Phase 5. Major loop > [SLC] DBG: Split 0 out of 18 > [SLC] DBG: 'Training' is done > [SLC] DBG: Doing 'Searchlight' > [SLC] DBG: Assessing accuracies > --------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > in () > ----> 1 slght_map = slght(fds) > > /usr/lib/python2.7/dist-packages/mvpa2/base/learner.pyc in __call__(self, ds) > 257 "used and auto training is > disabled." 
> 258 % str(self)) > --> 259 return super(Learner, self).__call__(ds) > 260 > 261 > > /usr/lib/python2.7/dist-packages/mvpa2/base/node.pyc in __call__(self, ds) > 119 > 120 self._precall(ds) > --> 121 result = self._call(ds) > 122 result = self._postcall(ds, result) > 123 > > /usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.pyc in > _call(self, dataset) > 141 > 142 # pass to subclass > --> 143 results = self._sl_call(dataset, roi_ids, nproc) > 144 > 145 if 'mapper' in dataset.a: > > /usr/lib/python2.7/dist-packages/mvpa2/measures/adhocsearchlightbase.pyc > in _sl_call(self, dataset, roi_ids, nproc) > 513 # error functions without a chance to screw up > 514 for i, fpredictions in enumerate(predictions.T): > --> 515 results[isplit, i] = errorfx(fpredictions, > targets) > 516 > 517 > > TypeError: 'NoneType' object is not callable > > > > > Similarly for other cases and combinations of them: > > Case 2: > slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > space='voxel_indices', errorfx=ConfusionMatrixError(), postproc=mean_sample()) > slght_map = slght(fds) > > > Case3: > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > > slght = sphere_gnbsearchlight(clf, partitioner, errorfx=None, radius=slradius, > space='voxel_indices', postproc=ChainNode([Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) > slght_map = slght(fds) > > > Case4: > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > > slght = sphere_gnbsearchlight(clf, partitioner, errorfx=None, radius=slradius, > space='voxel_indices', > postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) > slght_map = slght(fds) > > > > Case5: > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > > slght = sphere_gnbsearchlight(clf, partitioner, > errorfx=ConfusionMatrixError(), radius=slradius, space='voxel_indices', > postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) > slght_map = slght(fds) > > > >> Yaroslav Halchenko debian at onerussian.com >> Fri Aug 28 13:16:38 UTC 2015 >> quick an possible partial reply >> >> 1. "not sure" -- if it pukes then probably not, although judging from >> the code I foresaw arbitrary shape of the errorfx output >> >> 2. but you could make sphere_gnbsearchlight to return labels (not >> errors) and then post-process to get those confusion matrices. Just >> specify errorfx=None to it (not to CV). But you could also try >> passing errorfx=ConfusionMatrixError and see how that goes >> >> Please share what you discover/end up with. >> mvpa2/tests/test_usecases.py has more of usecase demos for gnb >> searchlights which might come handy >> >> -- >> Yaroslav O. Halchenko, Ph.D. >> http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org >> Research Scientist, Psychological and Brain Sciences Dept. 
>> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >> WWW:http://www.linkedin.com/in/yarik >> > > On 08/28/2015 01:48 PM, marco tettamanti wrote: >> Dear all, >> is it possible to obtain confusion matrices for all nodes with >> "sphere_gnbsearchlight", as was suggested before with "sphere_searchlight": >> >> slcvte = CrossValidation(clf, partitioner, errorfx=None, >> postproc=ChainNode([Confusion(labels=fds.UT)])) >> class KeepConfusionMatrix(Node): >> def _call(self, fds): >> out = np.zeros(1, dtype=object) >> out[0] = (fds.samples) >> return out >> >> slcvte.postproc.append(KeepConfusionMatrix()) >> slght = sphere_searchlight(slcvte, radius=slradius, space='voxel_indices', >> nproc=4, postproc=mean_sample()) >> slght_map = slght(fds) >> >> >> Thank you and best wishes, >> Marco >> -- >> Marco Tettamanti, Ph.D. >> Nuclear Medicine Department & Division of Neuroscience >> San Raffaele Scientific Institute >> Via Olgettina 58 >> I-20132 Milano, Italy >> Phone ++39-02-26434888 >> Fax ++39-02-26434892 >> Email:tettamanti.marco at hsr.it >> Skype: mtettamanti > -------------- next part -------------- An HTML attachment was scrubbed... URL: From basile.pinsard at gmail.com Fri Aug 28 18:40:47 2015 From: basile.pinsard at gmail.com (basile pinsard) Date: Fri, 28 Aug 2015 14:40:47 -0400 Subject: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight In-Reply-To: <55E0A70B.1070802@gmail.com> References: <55E04A89.1050500@gmail.com> <55E07E37.7030500@gmail.com> <55E0A70B.1070802@gmail.com> Message-ID: I wanted to do the same and had to make some changes to PyMVPA here: https://github.com/bpinsard/PyMVPA/tree/gnbsearchlight_confusiontable using it with: errorfx = ConfusionMatrix(labels=ds.uniquetargets) slght = GNBSearchlight(clf, prtnr, qe, errorfx=errorfx) On Fri, Aug 28, 2015 at 2:23 PM, marco tettamanti wrote: > Thanks again! > I am on Debian testing (well, reverted on stable now, because of troubles > with gcc5) and have version 2.3.1. > I will give a try to the one from git. > Best, > Marco > > > PyMVPA: > Version: 2.3.1 > Hash: d1da5a749dc9cc606bd7f425d93d25464bf43454 > Path: /usr/lib/python2.7/dist-packages/mvpa2/__init__.pyc > Version control (GIT): > GIT information could not be obtained due > "/usr/lib/python2.7/dist-packages/mvpa2/.. is not under GIT" > SYSTEM: > OS: posix Linux 4.1.0-1-amd64 #1 SMP Debian 4.1.3-1 > (2015-08-03) > Distribution: debian/stretch/sid > > > *Yaroslav Halchenko* debian at onerussian.com > > *Fri Aug 28 16:15:09 UTC 2015* > ------------------------------ > > On Fri, 28 Aug 2015, marco tettamanti wrote: > > >* Dear Yaroslav, > *>* thank you very much for your reply. I have made several attempts, trying > *>* to guess a solution, but it seems I always get a > *>* 'TypeError: 'NoneType' object is not callable'. > * > oh shoot... forgotten that this one was implemented after the last 2.4.0 > release: in upstream/2.4.0-34-g55e147e this June... we should release I > guess. what system are you on and what version of pymvpa currently? > if you could use/try the one from git directly... ? > > >* Case 1: > *>* slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > *>* space='voxel_indices', errorfx=None, postproc=mean_sample()) > * > not the problem here BUT there should be no mean_sample() if errorfx is > None -- you wouldn't want to average labels ;) > > -- > Yaroslav O. 
Halchenko, Ph.D.http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > > > > On 08/28/2015 05:28 PM, marco tettamanti wrote: > > Dear Yaroslav, > thank you very much for your reply. I have made several attempts, trying > to guess a solution, but it seems I always get a > 'TypeError: 'NoneType' object is not callable'. > > Any further advice is greatly appreciated! > Best, > Marco > > > Case 1: > slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > space='voxel_indices', errorfx=None, postproc=mean_sample()) > slght_map = slght(fds) > > In [70]: slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > space='voxel_indices', errorfx=None, postproc=mean_sample()) > > In [71]: slght_map = slght(fds) > [SLC] DBG: Phase 1. Initializing partitions using > on chunks,targets,time_coords,time_indices>, , imgaffine,imghdr,imgtype,mapper,voxel_dim,voxel_eldim>> > [SLC] DBG: Phase 2. Blocking data for 18 splits and 3 labels > [SLC] DBG: Phase 3. Computing statistics for 54 blocks > [SLC] DBG: Phase 4. Deducing neighbors information for 111 ROIs > [SLC] DBG: Phase 4b. Converting neighbors to sparse matrix > representation > [SLC] DBG: Phase 5. Major loop > [SLC] DBG: Split 0 out of 18 > [SLC] DBG: 'Training' is done > [SLC] DBG: Doing 'Searchlight' > [SLC] DBG: Assessing accuracies > --------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > in () > ----> 1 slght_map = slght(fds) > > /usr/lib/python2.7/dist-packages/mvpa2/base/learner.pyc in __call__(self, > ds) > 257 "used and auto training is > disabled." 
> 258 % str(self)) > --> 259 return super(Learner, self).__call__(ds) > 260 > 261 > > /usr/lib/python2.7/dist-packages/mvpa2/base/node.pyc in __call__(self, ds) > 119 > 120 self._precall(ds) > --> 121 result = self._call(ds) > 122 result = self._postcall(ds, result) > 123 > > /usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.pyc in > _call(self, dataset) > 141 > 142 # pass to subclass > --> 143 results = self._sl_call(dataset, roi_ids, nproc) > 144 > 145 if 'mapper' in dataset.a: > > /usr/lib/python2.7/dist-packages/mvpa2/measures/adhocsearchlightbase.pyc > in _sl_call(self, dataset, roi_ids, nproc) > 513 # error functions without a chance to screw up > 514 for i, fpredictions in enumerate(predictions.T): > --> 515 results[isplit, i] = errorfx(fpredictions, > targets) > 516 > 517 > > TypeError: 'NoneType' object is not callable > > > > > Similarly for other cases and combinations of them: > > Case 2: > slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, > space='voxel_indices', errorfx=ConfusionMatrixError(), > postproc=mean_sample()) > slght_map = slght(fds) > > > Case3: > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > > slght = sphere_gnbsearchlight(clf, partitioner, errorfx=None, > radius=slradius, space='voxel_indices', > postproc=ChainNode([Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) > slght_map = slght(fds) > > > Case4: > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > > slght = sphere_gnbsearchlight(clf, partitioner, errorfx=None, > radius=slradius, space='voxel_indices', > postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) > slght_map = slght(fds) > > > > Case5: > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > > slght = sphere_gnbsearchlight(clf, partitioner, > errorfx=ConfusionMatrixError(), radius=slradius, space='voxel_indices', > postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) > slght_map = slght(fds) > > > > Yaroslav Halchenko debian at onerussian.com > Fri Aug 28 13:16:38 UTC 2015 > > quick an possible partial reply > > 1. "not sure" -- if it pukes then probably not, although judging from > the code I foresaw arbitrary shape of the errorfx output > > 2. but you could make sphere_gnbsearchlight to return labels (not > errors) and then post-process to get those confusion matrices. Just > specify errorfx=None to it (not to CV). But you could also try > passing errorfx=ConfusionMatrixError and see how that goes > > Please share what you discover/end up with. > mvpa2/tests/test_usecases.py has more of usecase demos for gnb > searchlights which might come handy > > -- > Yaroslav O. Halchenko, Ph.D.http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. 
> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > > On 08/28/2015 01:48 PM, marco tettamanti wrote: > > Dear all, > is it possible to obtain confusion matrices for all nodes with > "sphere_gnbsearchlight", as was suggested before with "sphere_searchlight": > > slcvte = CrossValidation(clf, partitioner, errorfx=None, > postproc=ChainNode([Confusion(labels=fds.UT)])) > class KeepConfusionMatrix(Node): > def _call(self, fds): > out = np.zeros(1, dtype=object) > out[0] = (fds.samples) > return out > > slcvte.postproc.append(KeepConfusionMatrix()) > slght = sphere_searchlight(slcvte, radius=slradius, space='voxel_indices', > nproc=4, postproc=mean_sample()) > slght_map = slght(fds) > > > Thank you and best wishes, > Marco > > -- > Marco Tettamanti, Ph.D. > Nuclear Medicine Department & Division of Neuroscience > San Raffaele Scientific Institute > Via Olgettina 58 > I-20132 Milano, Italy > Phone ++39-02-26434888 > Fax ++39-02-26434892 > Email: tettamanti.marco at hsr.it > Skype: mtettamanti > > > > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -- Basile Pinsard *PhD candidate, * Laboratoire d'Imagerie Biom?dicale, UMR S 1146 / UMR 7371, Sorbonne Universit?s, UPMC, INSERM, CNRS *Brain-Cognition-Behaviour Doctoral School **, *ED3C*, *UPMC, Sorbonne Universit?s Biomedical Sciences Doctoral School, Faculty of Medicine, Universit? de Montr?al CRIUGM, Universit? de Montr?al -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrctttmnt at gmail.com Sat Aug 29 07:36:11 2015 From: mrctttmnt at gmail.com (marco tettamanti) Date: Sat, 29 Aug 2015 09:36:11 +0200 Subject: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight In-Reply-To: <4D7DD337-3FD9-4E9F-81D0-2EA3F00BCC7A@gmail.com> References: <4D7DD337-3FD9-4E9F-81D0-2EA3F00BCC7A@gmail.com> Message-ID: <55E160EB.4050401@gmail.com> Dear Basile, thank you for sharing, I will try this out! Best, Marco On 08/29/2015 09:26 AM, Marco Tettamanti wrote: > Date: Fri, 28 Aug 2015 14:40:47 -0400 > From: basile pinsard > To: Development and support of PyMVPA > Subject: Re: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight > > I wanted to do the same and had to make some changes to PyMVPA here: > https://github.com/bpinsard/PyMVPA/tree/gnbsearchlight_confusiontable > using it with: > errorfx = ConfusionMatrix(labels=ds.uniquetargets) > slght = GNBSearchlight(clf, prtnr, qe, errorfx=errorfx) > > On Fri, Aug 28, 2015 at 2:23 PM, marco tettamanti > wrote: > > Thanks again! I am on Debian testing (well, reverted on stable now, > because of troubles with gcc5) and have version 2.3.1. I will give a try > to the one from git. Best, Marco PyMVPA: Version: 2.3.1 Hash: > d1da5a749dc9cc606bd7f425d93d25464bf43454 Path: > /usr/lib/python2.7/dist-packages/mvpa2/__init__.pyc Version control (GIT): > GIT information could not be obtained due > "/usr/lib/python2.7/dist-packages/mvpa2/.. 
is not under GIT" SYSTEM: OS: > posix Linux 4.1.0-1-amd64 #1 SMP Debian 4.1.3-1 (2015-08-03) Distribution: > debian/stretch/sid *Yaroslav Halchenko* debian at onerussian.com > ?Subject=Re%3A%20%5Bpymvpa%5D%20Confusion%20Matrix%20for%20each%20Node%20with%0A%20sphere_gnbsearchlight&In-Reply-To=%3C20150828161509.GS19455%40onerussian.com > %3E> *Fri Aug 28 16:15:09 UTC 2015* > -------------------------------------------------------------------------------- > On Fri, 28 Aug 2015, marco tettamanti wrote: > > * Dear Yaroslav, > > *>* thank you very much for your reply. I have made several attempts, > trying *>* to guess a solution, but it seems I always get a *>* > 'TypeError: 'NoneType' object is not callable'. * oh shoot... forgotten > that this one was implemented after the last 2.4.0 release: in > upstream/2.4.0-34-g55e147e this June... we should release I guess. what > system are you on and what version of pymvpa currently? if you could > use/try the one from git directly... ? > > * Case 1: > > *>* slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, *>* > space='voxel_indices', errorfx=None, postproc=mean_sample()) * not the > problem here BUT there should be no mean_sample() if errorfx is None -- > you wouldn't want to average labels ;) -- Yaroslav O. Halchenko, > Ph.D.http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org > Research Scientist, Psychological and Brain Sciences Dept. Dartmouth > College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 > (603) 646-9834 Fax: +1 (603) 646-1419 WWW: > http://www.linkedin.com/in/yarik On 08/28/2015 05:28 PM, marco tettamanti > wrote: Dear Yaroslav, thank you very much for your reply. I have made > several attempts, trying to guess a solution, but it seems I always get a > 'TypeError: 'NoneType' object is not callable'. Any further advice is > greatly appreciated! Best, Marco Case 1: slght = > sphere_gnbsearchlight(clf, partitioner, radius=slradius, > space='voxel_indices', errorfx=None, postproc=mean_sample()) slght_map = > slght(fds) In [70]: slght = sphere_gnbsearchlight(clf, partitioner, > radius=slradius, space='voxel_indices', errorfx=None, > postproc=mean_sample()) In [71]: slght_map = slght(fds) [SLC] DBG: Phase > 1. Initializing partitions using on 108x111 at float32, , voxel_indices>, imgaffine,imghdr,imgtype,mapper,voxel_dim,voxel_eldim>> [SLC] DBG: Phase > 2. Blocking data for 18 splits and 3 labels [SLC] DBG: Phase 3. Computing > statistics for 54 blocks [SLC] DBG: Phase 4. Deducing neighbors > information for 111 ROIs [SLC] DBG: Phase 4b. Converting neighbors to > sparse matrix representation [SLC] DBG: Phase 5. Major loop [SLC] DBG: > Split 0 out of 18 [SLC] DBG: 'Training' is done [SLC] DBG: Doing > 'Searchlight' [SLC] DBG: Assessing accuracies > -------------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > in () ----> 1 slght_map = > slght(fds) /usr/lib/python2.7/dist-packages/mvpa2/base/learner.pyc in > __call__(self, ds) 257 "used and auto training is disabled." 
258 % > str(self)) --> 259 return super(Learner, self).__call__(ds) 260 261 > /usr/lib/python2.7/dist-packages/mvpa2/base/node.pyc in __call__(self, ds) > 119 120 self._precall(ds) --> 121 result = self._call(ds) 122 result = > self._postcall(ds, result) 123 > /usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.pyc in > _call(self, dataset) 141 142 # pass to subclass --> 143 results = > self._sl_call(dataset, roi_ids, nproc) 144 145 if 'mapper' in dataset.a: > /usr/lib/python2.7/dist-packages/mvpa2/measures/adhocsearchlightbase.pyc > in _sl_call(self, dataset, roi_ids, nproc) 513 # error functions without a > chance to screw up 514 for i, fpredictions in enumerate(predictions.T): > --> 515 results[isplit, i] = errorfx(fpredictions, targets) 516 517 > TypeError: 'NoneType' object is not callable Similarly for other cases and > combinations of them: Case 2: slght = sphere_gnbsearchlight(clf, > partitioner, radius=slradius, space='voxel_indices', > errorfx=ConfusionMatrixError(), postproc=mean_sample()) slght_map = > slght(fds) Case3: class KeepConfusionMatrix(Node): def _call(self, fds): > out = np.zeros(1, dtype=object) out[0] = (fds.samples) return out slght = > sphere_gnbsearchlight(clf, partitioner, errorfx=None, radius=slradius, > space='voxel_indices', postproc=ChainNode([Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) slght_map = slght(fds) Case4: > class KeepConfusionMatrix(Node): def _call(self, fds): out = np.zeros(1, > dtype=object) out[0] = (fds.samples) return out slght = > sphere_gnbsearchlight(clf, partitioner, errorfx=None, radius=slradius, > space='voxel_indices', > postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) slght_map = slght(fds) Case5: > class KeepConfusionMatrix(Node): def _call(self, fds): out = np.zeros(1, > dtype=object) out[0] = (fds.samples) return out slght = > sphere_gnbsearchlight(clf, partitioner, errorfx=ConfusionMatrixError(), > radius=slradius, space='voxel_indices', > postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) > slght.postproc.append(KeepConfusionMatrix()) slght_map = slght(fds) > Yaroslav Halchenko debian at onerussian.com Fri > Aug 28 13:16:38 UTC 2015 quick an possible partial reply 1. "not sure" -- > if it pukes then probably not, although judging from the code I foresaw > arbitrary shape of the errorfx output 2. but you could make > sphere_gnbsearchlight to return labels (not errors) and then post-process > to get those confusion matrices. Just specify errorfx=None to it (not to > CV). But you could also try passing errorfx=ConfusionMatrixError and see > how that goes Please share what you discover/end up with. > mvpa2/tests/test_usecases.py has more of usecase > demos for gnb searchlights which might come handy -- Yaroslav O. > Halchenko, Ph.D.http://neuro.debian.net http://www.pymvpa.org > http://www.fail2ban.org Research Scientist, Psychological and Brain > Sciences Dept. 
Dartmouth College, 419 Moore Hall, Hinman Box 6207, > Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: > http://www.linkedin.com/in/yarik On 08/28/2015 01:48 PM, marco tettamanti > wrote: Dear all, is it possible to obtain confusion matrices for all nodes > with "sphere_gnbsearchlight", as was suggested before with > "sphere_searchlight": slcvte = CrossValidation(clf, partitioner, > errorfx=None, postproc=ChainNode([Confusion(labels=fds.UT)])) class > KeepConfusionMatrix(Node): def _call(self, fds): out = np.zeros(1, > dtype=object) out[0] = (fds.samples) return out > slcvte.postproc.append(KeepConfusionMatrix()) slght = > sphere_searchlight(slcvte, radius=slradius, space='voxel_indices', > nproc=4, postproc=mean_sample()) slght_map = slght(fds) Thank you and best > wishes, Marco -- Marco Tettamanti, Ph.D. Nuclear Medicine Department & > Division of Neuroscience San Raffaele Scientific Institute Via Olgettina > 58 I-20132 Milano, Italy Phone ++39-02-26434888 Fax ++39-02-26434892 > Email: tettamanti.marco at hsr.it Skype: mtettamanti > -------------------------------------------------------------------------------- > Pkg-ExpPsy-PyMVPA mailing list Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > > > > > > -- > Basile Pinsard > > *PhD > candidate, * > Laboratoire d'Imagerie Biom?dicale, UMR S 1146 / UMR 7371, Sorbonne > Universit?s, UPMC, INSERM, CNRS > *Brain-Cognition-Behaviour Doctoral School **, *ED3C*, *UPMC, Sorbonne > Universit?s > Biomedical Sciences Doctoral School, Faculty of Medicine, Universit? de > Montr?al > CRIUGM, Universit? de Montr?al > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > -------------------------------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrctttmnt at gmail.com Sat Aug 29 15:12:45 2015 From: mrctttmnt at gmail.com (marco tettamanti) Date: Sat, 29 Aug 2015 17:12:45 +0200 Subject: [pymvpa] Confusion Matrix for each Node with sphere_gnbsearchlight In-Reply-To: <55E0A70B.1070802@gmail.com> References: <55E04A89.1050500@gmail.com> <55E07E37.7030500@gmail.com> <55E0A70B.1070802@gmail.com> Message-ID: <55E1CBED.6090101@gmail.com> Dear Yaroslav, I have dowloaded master version 2.4.0 from git, and I can now get classification labels with errorfx=None. The problem is now that "errorfx= " within "sphere_gnbsearchlight" behaves less flexibly: errorfx=None errorfx=mean_mismatch_error are ok, but: errorfx=mean_match_accuracy errorfx=ConfusionMatrixError(labels=fds.UT)) give an error of type: ValueError: Collectable 'cvfolds' with length [108] does not match the required length [18] of collection ''. As used for example in: slght = sphere_gnbsearchlight(slghtclf, partitioner, radius=slradius, space='voxel_indices', errorfx=mean_match_accuracy, postproc=mean_sample()) slght_map = slght(fds) Thank you and best wishes, Marco On 08/28/2015 08:23 PM, marco tettamanti wrote: > Thanks again! > I am on Debian testing (well, reverted on stable now, because of troubles with > gcc5) and have version 2.3.1. > I will give a try to the one from git. > Best, > Marco > > > PyMVPA: > Version: 2.3.1 > Hash: d1da5a749dc9cc606bd7f425d93d25464bf43454 > Path: /usr/lib/python2.7/dist-packages/mvpa2/__init__.pyc > Version control (GIT): > GIT information could not be obtained due > "/usr/lib/python2.7/dist-packages/mvpa2/.. 
is not under GIT" > SYSTEM: > OS: posix Linux 4.1.0-1-amd64 #1 SMP Debian 4.1.3-1 (2015-08-03) > Distribution: debian/stretch/sid > > >> *Yaroslav Halchenko* debian at onerussian.com >> >> /Fri Aug 28 16:15:09 UTC 2015/ >> -------------------------------------------------------------------------------- >> On Fri, 28 Aug 2015, marco tettamanti wrote: >> >> >/ Dear Yaroslav, >> />/ thank you very much for your reply. I have made several attempts, trying >> />/ to guess a solution, but it seems I always get a >> />/ 'TypeError: 'NoneType' object is not callable'. >> / >> oh shoot... forgotten that this one was implemented after the last 2.4.0 >> release: in upstream/2.4.0-34-g55e147e this June... we should release I >> guess. what system are you on and what version of pymvpa currently? >> if you could use/try the one from git directly... ? >> >> >/ Case 1: >> />/ slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, >> />/ space='voxel_indices', errorfx=None, postproc=mean_sample()) >> / >> not the problem here BUT there should be no mean_sample() if errorfx is >> None -- you wouldn't want to average labels ;) >> >> -- >> Yaroslav O. Halchenko, Ph.D. >> http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org >> Research Scientist, Psychological and Brain Sciences Dept. >> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >> WWW:http://www.linkedin.com/in/yarik >> > > > > On 08/28/2015 05:28 PM, marco tettamanti wrote: >> Dear Yaroslav, >> thank you very much for your reply. I have made several attempts, trying to >> guess a solution, but it seems I always get a >> 'TypeError: 'NoneType' object is not callable'. >> >> Any further advice is greatly appreciated! >> Best, >> Marco >> >> >> Case 1: >> slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, >> space='voxel_indices', errorfx=None, postproc=mean_sample()) >> slght_map = slght(fds) >> >> In [70]: slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, >> space='voxel_indices', errorfx=None, postproc=mean_sample()) >> >> In [71]: slght_map = slght(fds) >> [SLC] DBG: Phase 1. Initializing partitions using >> on > chunks,targets,time_coords,time_indices>, , > imgaffine,imghdr,imgtype,mapper,voxel_dim,voxel_eldim>> >> [SLC] DBG: Phase 2. Blocking data for 18 splits and 3 labels >> [SLC] DBG: Phase 3. Computing statistics for 54 blocks >> [SLC] DBG: Phase 4. Deducing neighbors information for 111 ROIs >> [SLC] DBG: Phase 4b. Converting neighbors to sparse matrix >> representation >> [SLC] DBG: Phase 5. Major loop >> [SLC] DBG: Split 0 out of 18 >> [SLC] DBG: 'Training' is done >> [SLC] DBG: Doing 'Searchlight' >> [SLC] DBG: Assessing accuracies >> --------------------------------------------------------------------------- >> TypeError Traceback (most recent call last) >> in () >> ----> 1 slght_map = slght(fds) >> >> /usr/lib/python2.7/dist-packages/mvpa2/base/learner.pyc in __call__(self, ds) >> 257 "used and auto training is >> disabled." 
>> 258 % str(self)) >> --> 259 return super(Learner, self).__call__(ds) >> 260 >> 261 >> >> /usr/lib/python2.7/dist-packages/mvpa2/base/node.pyc in __call__(self, ds) >> 119 >> 120 self._precall(ds) >> --> 121 result = self._call(ds) >> 122 result = self._postcall(ds, result) >> 123 >> >> /usr/lib/python2.7/dist-packages/mvpa2/measures/searchlight.pyc in >> _call(self, dataset) >> 141 >> 142 # pass to subclass >> --> 143 results = self._sl_call(dataset, roi_ids, nproc) >> 144 >> 145 if 'mapper' in dataset.a: >> >> /usr/lib/python2.7/dist-packages/mvpa2/measures/adhocsearchlightbase.pyc >> in _sl_call(self, dataset, roi_ids, nproc) >> 513 # error functions without a chance to screw up >> 514 for i, fpredictions in enumerate(predictions.T): >> --> 515 results[isplit, i] = errorfx(fpredictions, >> targets) >> 516 >> 517 >> >> TypeError: 'NoneType' object is not callable >> >> >> >> >> Similarly for other cases and combinations of them: >> >> Case 2: >> slght = sphere_gnbsearchlight(clf, partitioner, radius=slradius, >> space='voxel_indices', errorfx=ConfusionMatrixError(), postproc=mean_sample()) >> slght_map = slght(fds) >> >> >> Case3: >> class KeepConfusionMatrix(Node): >> def _call(self, fds): >> out = np.zeros(1, dtype=object) >> out[0] = (fds.samples) >> return out >> >> slght = sphere_gnbsearchlight(clf, partitioner, errorfx=None, >> radius=slradius, space='voxel_indices', >> postproc=ChainNode([Confusion(labels=fds.UT)])) >> slght.postproc.append(KeepConfusionMatrix()) >> slght_map = slght(fds) >> >> >> Case4: >> class KeepConfusionMatrix(Node): >> def _call(self, fds): >> out = np.zeros(1, dtype=object) >> out[0] = (fds.samples) >> return out >> >> slght = sphere_gnbsearchlight(clf, partitioner, errorfx=None, >> radius=slradius, space='voxel_indices', >> postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) >> slght.postproc.append(KeepConfusionMatrix()) >> slght_map = slght(fds) >> >> >> >> Case5: >> class KeepConfusionMatrix(Node): >> def _call(self, fds): >> out = np.zeros(1, dtype=object) >> out[0] = (fds.samples) >> return out >> >> slght = sphere_gnbsearchlight(clf, partitioner, >> errorfx=ConfusionMatrixError(), radius=slradius, space='voxel_indices', >> postproc=ChainNode([mean_sample(),Confusion(labels=fds.UT)])) >> slght.postproc.append(KeepConfusionMatrix()) >> slght_map = slght(fds) >> >> >> >>> Yaroslav Halchenko debian at onerussian.com >>> Fri Aug 28 13:16:38 UTC 2015 >>> quick an possible partial reply >>> >>> 1. "not sure" -- if it pukes then probably not, although judging from >>> the code I foresaw arbitrary shape of the errorfx output >>> >>> 2. but you could make sphere_gnbsearchlight to return labels (not >>> errors) and then post-process to get those confusion matrices. Just >>> specify errorfx=None to it (not to CV). But you could also try >>> passing errorfx=ConfusionMatrixError and see how that goes >>> >>> Please share what you discover/end up with. >>> mvpa2/tests/test_usecases.py has more of usecase demos for gnb >>> searchlights which might come handy >>> >>> -- >>> Yaroslav O. Halchenko, Ph.D. >>> http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org >>> Research Scientist, Psychological and Brain Sciences Dept. 
>>> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
>>> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
>>> WWW: http://www.linkedin.com/in/yarik
>>>
>>
>> On 08/28/2015 01:48 PM, marco tettamanti wrote:
>>> Dear all,
>>> is it possible to obtain confusion matrices for all nodes with
>>> "sphere_gnbsearchlight", as was suggested before with "sphere_searchlight":
>>>
>>> slcvte = CrossValidation(clf, partitioner, errorfx=None,
>>>                          postproc=ChainNode([Confusion(labels=fds.UT)]))
>>> class KeepConfusionMatrix(Node):
>>>     def _call(self, fds):
>>>         out = np.zeros(1, dtype=object)
>>>         out[0] = (fds.samples)
>>>         return out
>>>
>>> slcvte.postproc.append(KeepConfusionMatrix())
>>> slght = sphere_searchlight(slcvte, radius=slradius, space='voxel_indices',
>>>                            nproc=4, postproc=mean_sample())
>>> slght_map = slght(fds)
>>>
>>> Thank you and best wishes,
>>> Marco
>>> --
>>> Marco Tettamanti, Ph.D.
>>> Nuclear Medicine Department & Division of Neuroscience
>>> San Raffaele Scientific Institute
>>> Via Olgettina 58
>>> I-20132 Milano, Italy
>>> Phone ++39-02-26434888
>>> Fax ++39-02-26434892
>>> Email: tettamanti.marco at hsr.it
>>> Skype: mtettamanti
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From raul at lafuentelab.org Tue Sep 1 21:30:12 2015
From: raul at lafuentelab.org (=?UTF-8?B?UmHDumwgSGVybsOhbmRleg==?=)
Date: Tue, 1 Sep 2015 16:30:12 -0500
Subject: [pymvpa] Train and test on different datasets using searchlights
Message-ID: 

Hi,

I would like to train with a dataset and test with a different one, but I haven't been able to find a straightforward way to do it. I read the very same question from a couple of years ago
(http://lists.alioth.debian.org/pipermail/pkg-exppsy-pymvpa/2010q4/001265.html).
On it, the answer was to use the piece of code shown there:

cv = CrossValidatedTransferError(
        TransferError(LinearCSVMC()),
        CustomSplitter([(0, 1)]))  # 0 for training, 1 for testing
sl = Searchlight(cv, radius=5)
# assuming you have dstrain and dstest
dstrain.chunks[:] = 0
dstest.chunks[:] = 1
sl_map = sl(dstrain + dstest)

I tried this, but I get the error:

NameError: name 'CrossValidatedTransferError' is not defined

I'm using a virtual machine with NeuroDebian 8.0 and supposedly pymvpa is up to date; it seems like I'm missing some libraries. How can I get them?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bander33 at jhu.edu Thu Sep 3 14:18:17 2015
From: bander33 at jhu.edu (Brian Anderson)
Date: Thu, 3 Sep 2015 14:18:17 +0000
Subject: [pymvpa] searchlight with separate training and testing data
Message-ID: 

Hi,

I'm looking to run a searchlight using the leave-one-run-out approach, but with separate trials to be used for training and testing. The experiment involves a 2x2 design, and for one factor I'd like to split the data such that I train on one level of that factor (using N-1 runs) and test on the other (using the run left out). Which level serves as training data and which serves as testing data would be fixed across all folds of the classification (i.e., the same level would always be used for training and the other always for testing). Is there an option that will allow me to perform the searchlight in this way?

Many thanks in advance for any direction you can provide!

Brian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From billbrod at gmail.com Thu Sep 3 16:04:54 2015 From: billbrod at gmail.com (Bill Broderick) Date: Thu, 3 Sep 2015 12:04:54 -0400 Subject: [pymvpa] GroupClusterThreshold memory usage Message-ID: Hi all, I'm trying to run group cluster thresholding using the defaults of GroupClusterThreshold (100000 bootstraps) and I'm running into memory issues. In the documentation, it looks like either n_proc (to split the load across several nodes on our cluster) or n_blocks would help, but it's not clear to me how to use these parameters. Can someone give me a brief example? Thanks, Bill From n.n.oosterhof at googlemail.com Thu Sep 3 16:28:16 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 3 Sep 2015 18:28:16 +0200 Subject: [pymvpa] GroupClusterThreshold memory usage In-Reply-To: References: Message-ID: > On 03 Sep 2015, at 18:04, Bill Broderick wrote: > > I'm trying to run group cluster thresholding using the defaults of > GroupClusterThreshold (100000 bootstraps) and I'm running into memory > issues. In the documentation, it looks like either n_proc (to split > the load across several nodes on our cluster) or n_blocks would help, > but it's not clear to me how to use these parameters. Peak memory usage is in the order of (n_bootstrap * n_features / n_blocks), where n_features is the number of features of the dataset. For example, if you set n_blocks=1000, then memory consumption will be reduced by about a factor of 1,000 compared to n_blocks=1. I'm not sure how the Parallel module behaves, but it may be the case that using n_proc processes will actually multiply memory demands by a factor of n_proc. If you want to keep memory consumption low, my suggestion would be to start with n_proc=1 and try higher values for n_blocks. From billbrod at gmail.com Thu Sep 3 19:19:10 2015 From: billbrod at gmail.com (Bill Broderick) Date: Thu, 3 Sep 2015 15:19:10 -0400 Subject: [pymvpa] GroupClusterThreshold memory usage In-Reply-To: References: Message-ID: That worked! I assumed I would need to do something else, but just adding n_blocks=1000 brought down my memory usage more than enough. It took about 3 times as long, but I think that will be fine. Thanks! Bill On Thu, Sep 3, 2015 at 12:28 PM, Nick Oosterhof wrote: > >> On 03 Sep 2015, at 18:04, Bill Broderick wrote: >> >> I'm trying to run group cluster thresholding using the defaults of >> GroupClusterThreshold (100000 bootstraps) and I'm running into memory >> issues. In the documentation, it looks like either n_proc (to split >> the load across several nodes on our cluster) or n_blocks would help, >> but it's not clear to me how to use these parameters. > > Peak memory usage is in the order of (n_bootstrap * n_features / n_blocks), where n_features is the number of features of the dataset. > For example, if you set n_blocks=1000, then memory consumption will be reduced by about a factor of 1,000 compared to n_blocks=1. > > I'm not sure how the Parallel module behaves, but it may be the case that using n_proc processes will actually multiply memory demands by a factor of n_proc. If you want to keep memory consumption low, my suggestion would be to start with n_proc=1 and try higher values for n_blocks. 
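A minimal sketch of where those arguments go, assuming the mvpa2.algorithms.group_clusterthr API from that release; perm_ds and real_ds are placeholders for the stacked permutation maps (with sa.chunks identifying the subject) and the observed accuracy map, and the clusterstats attribute name is taken from the PyMVPA docs, so worth double-checking:

from mvpa2.algorithms.group_clusterthr import GroupClusterThreshold

# perm_ds: placeholder -- bootstrap/permutation accuracy maps, several per
#          subject, with sa.chunks marking which subject each map came from
# real_ds: placeholder -- the observed (group) accuracy map to be thresholded
clthr = GroupClusterThreshold(n_bootstrap=100000,  # the default mentioned above
                              n_blocks=1000,       # ~1000x lower peak memory, slower
                              n_proc=1)            # avoid multiplying memory by n_proc
clthr.train(perm_ds)          # estimate the null distributions
res = clthr(real_ds)          # threshold the observed map
print res.a.clusterstats      # cluster sizes and corrected p-values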
> _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa From rohit.shinde12194 at gmail.com Sun Sep 6 14:21:39 2015 From: rohit.shinde12194 at gmail.com (Rohit Shinde) Date: Sun, 6 Sep 2015 19:51:39 +0530 Subject: [pymvpa] Contributing toward pyMVPA Message-ID: Hello everyone, I came across pyMVPA while looking for organisations working in Machine Learning. Background: I am proficient in C++, Java, Python and Scheme. I have taken undergrad courses in machine learning and data mining. How can I contribute? Is there any project I can take up? Also what prior knowledge would I need? Thank you, Rohit Shinde. -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Sun Sep 6 15:23:28 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Sun, 6 Sep 2015 17:23:28 +0200 Subject: [pymvpa] Contributing toward pyMVPA In-Reply-To: References: Message-ID: <30B9755A-96F9-4501-95E8-F4CA6BF3C9A5@googlemail.com> > > On 06 Sep 2015, at 16:21, Rohit Shinde wrote: > > Background: I am proficient in C++, Java, Python and Scheme. I have taken undergrad courses in machine learning and data mining. That knowledge will definitely be useful for Machine Learning. We use mostly Python. > How can I contribute? Thanks for being willing to contribute! You could start with reading the developer guidelines [1]. > Is there any project I can take up? We keep a list of issues [2], which gives lots of things (some big, some small) to take up. Suggestions for improvement of the documentation are also welcomed. To get an idea how contributions are handled, you could take a look at the pull request (PR) page [3], in particular the closed PRs. > Also what prior knowledge would I need? To get familiar with PyMVPA, you may consider reading the tutorial [4], or look at example analyses [5]. [1] http://www.pymvpa.org/devguide.html [2] https://github.com/PyMVPA/PyMVPA/issues [3] https://github.com/pulls [4] http://dev.pymvpa.org/tutorial.html [5] http://dev.pymvpa.org/examples.html From rohit.shinde12194 at gmail.com Sun Sep 6 15:26:21 2015 From: rohit.shinde12194 at gmail.com (Rohit Shinde) Date: Sun, 6 Sep 2015 20:56:21 +0530 Subject: [pymvpa] Contributing toward pyMVPA In-Reply-To: References: Message-ID: Hi Yaroslav, I don't have any specific interests as such. I am more inclined towards coding and I intend to get expertise in Machine Learning by contributing to open source organisations such as this one. I have previously contributed to Opencog as part of GSoC 2015. Therefore, I would prefer to implement some algorithm which is needed or something along that line. Any pointers for where to start from? On Sun, Sep 6, 2015 at 8:25 PM, Yaroslav Halchenko wrote: > Hi Rohit, > > Thanks for reaching out! It would be nice to know what your interests are, > e.g. what kind of data you are working with (if any) and specific analysis > approaches you are using and/or problems want to address. > > On September 6, 2015 10:21:39 AM EDT, Rohit Shinde < > rohit.shinde12194 at gmail.com> wrote: > >> Hello everyone, >> >> I came across pyMVPA while looking for organisations working in Machine >> Learning. >> >> Background: I am proficient in C++, Java, Python and Scheme. I have taken >> undergrad courses in machine learning and data mining. How can I >> contribute? Is there any project I can take up? 
Also what prior
>> knowledge would I need?
>>
>> Thank you,
>> Rohit Shinde.
>>
>> ------------------------------
>>
>> Pkg-ExpPsy-PyMVPA mailing list
>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
>>
>
> --
> Sent from a phone which beats iPhone.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From raul at lafuentelab.org Wed Sep 23 20:03:58 2015
From: raul at lafuentelab.org (=?UTF-8?B?UmHDumwgSGVybsOhbmRleg==?=)
Date: Wed, 23 Sep 2015 15:03:58 -0500
Subject: [pymvpa] Sanity check
Message-ID: 

Hi, I'm trying to evaluate on a trial-by-trial basis how well a region can predict the stimulus being presented, to compare it with the participant's judgment of the stimulus. So I'm training the classifier with data from all the trials on all the runs except the one that I want to predict.

I'm getting really good classifications, better than when I was predicting one run using all the others. Supposedly it should be a little better as I'm training with a little more data, but I'm worried I'm doing something wrong.

Could anyone let me know if I'm making some sort of mistake?

I know that there should be a more efficient way to do it but I wanted something easy; this is my code:

predictions = [] #this is a vector that will contain the predictions of the classifier
for i,dsTest in enumerate(ds): #go through all the trials on ds and separate one to test
    clf = LinearCSVMC()
    fclf = FeatureSelectionClassifier(clf, fsel)
    dsTrain = []
    dsTrain.append(ds[0:i]) #separates the training data
    dsTrain.append(ds[i:-1])
    dsTrain = vstack(dsTrain) #stacks it
    fclf.train(dsTrain)
    predicted = fclf.predict(dsTest) #stores the prediction
    predictions.append(dsTest.targets == predicted) #checks whether the prediction was correct
print np.mean(predictions) #checks the mean accuracy of all predictions

I would really appreciate any feedback, thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jetzel at wustl.edu Wed Sep 23 21:28:09 2015
From: jetzel at wustl.edu (Jo Etzel)
Date: Wed, 23 Sep 2015 16:28:09 -0500
Subject: [pymvpa] Sanity check
In-Reply-To: 
References: 
Message-ID: <56031969.9010606@wustl.edu>

Do you mean that you're getting better performance when you're just leaving out one trial instead of one run?

If so, how many runs? How many examples per run? Is everything fully balanced (equal number of training examples in each class) under each cross-validation scheme?

Jo

On 9/23/2015 3:03 PM, Raúl Hernández wrote:
> Hi, I'm trying to evaluate on a trial-by-trial basis how well a region can
> predict the stimulus being presented, to compare it with the
> participant's judgment of the stimulus. So I'm training the classifier
> with data from all the trials on all the runs except the one that I
> want to predict.
>
> I'm getting really good classifications, better than when I was
> predicting one run using all the others. Supposedly it should be a
> little better as I'm training with a little more data, but I'm worried
> I'm doing something wrong.
>
>
> Could anyone let me know if I'm making some sort of mistake?
> > > I know that there should be a more efficient way to do it but I wanted > something easy, this is my code: > > > predictions = [] #this is a vector that will contain the predictions of > the classifier > > for i,dsTest in enumerate(ds): #go through all the trials on ds and > separate one to test > > clf = LinearCSVMC() > > fclf = FeatureSelectionClassifier(clf, fsel) > > dsTrain = [] > > dsTrain.append(ds[0:i]) #separates the training data > > dsTrain.append(ds[i:-1]) > > dsTrain = vstack(dsTrain) #stacks it > > fclf.train(dsTrain) > > predicted = fclf.predict(dsTest) #stores the prediction > > predictions.append(dsTest.targets == predicted) #checks whether the > prediction was correct > > print np.mean(predictions) #checks the mean -accuracy of all predictions > > I would really appreciate any feedback, thanks! > > > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -- Joset A. Etzel, Ph.D. Research Analyst Cognitive Control & Psychopathology Lab Washington University in St. Louis http://mvpa.blogspot.com/ From raul at lafuentelab.org Wed Sep 23 21:53:44 2015 From: raul at lafuentelab.org (=?UTF-8?B?UmHDumwgSGVybsOhbmRleg==?=) Date: Wed, 23 Sep 2015 16:53:44 -0500 Subject: [pymvpa] Sanity check In-Reply-To: <56031969.9010606@wustl.edu> References: <56031969.9010606@wustl.edu> Message-ID: Yes, I'm getting better performance when leaving out only one trial instead of the full run. I guess that this is expected because I have more training samples, but the increase of accuracy is well above what I would expect (testing by runs is around 32% and testing by trial is about 49%). I have 4 different stimulus. Each stimulus is repeated 6 times per run and I have 12 runs. When I test a single trial, I'm using the full database, without balancing for the one I used. On Wed, Sep 23, 2015 at 4:28 PM, Jo Etzel wrote: > Do you mean that you're getting better performance when you're just > leaving out one trial instead of one run? > > If so, How many runs? How many examples per run? Is everything fully > balanced (equal number of training examples in each class) under each > cross-validation scheme? > > Jo > > > > On 9/23/2015 3:03 PM, Ra?l Hern?ndez wrote: > >> Hi, I?m trying to evaluate on trial by trial basis how well a region can >> predict the stimulus being presented to compare it with the >> participant?s judgment of the stimulus. So I?m training the classifier >> with data from all the trials on all the runs except by the one that I >> want to predict. >> >> I?m getting really good classifications better than when I was >> predicting one run using all the others. Supposedly it should be a >> little better as I?m training with a little more data but I?m worried >> I?m doing something wrong. >> >> >> Could anyone let me know if I?m making some sort of mistake? 
>> >> >> I know that there should be a more efficient way to do it but I wanted >> something easy, this is my code: >> >> >> predictions = [] #this is a vector that will contain the predictions of >> the classifier >> >> for i,dsTest in enumerate(ds): #go through all the trials on ds and >> separate one to test >> >> clf = LinearCSVMC() >> >> fclf = FeatureSelectionClassifier(clf, fsel) >> >> dsTrain = [] >> >> dsTrain.append(ds[0:i]) #separates the training data >> >> dsTrain.append(ds[i:-1]) >> >> dsTrain = vstack(dsTrain) #stacks it >> >> fclf.train(dsTrain) >> >> predicted = fclf.predict(dsTest) #stores the prediction >> >> predictions.append(dsTest.targets == predicted) #checks whether the >> prediction was correct >> >> print np.mean(predictions) #checks the mean -accuracy of all predictions >> >> I would really appreciate any feedback, thanks! >> >> >> >> _______________________________________________ >> Pkg-ExpPsy-PyMVPA mailing list >> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa >> >> > -- > Joset A. Etzel, Ph.D. > Research Analyst > Cognitive Control & Psychopathology Lab > Washington University in St. Louis > http://mvpa.blogspot.com/ > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Thu Sep 24 02:49:49 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Wed, 23 Sep 2015 22:49:49 -0400 Subject: [pymvpa] Sanity check In-Reply-To: References: Message-ID: <20150924024949.GF30459@onerussian.com> On Wed, 23 Sep 2015, Ra?l Hern?ndez wrote: > Hi, I?m trying to evaluate on trial by trial basis how well a region can > predict the stimulus being presented to compare it with the participant?s > judgment of the stimulus. So I?m training the classifier with data from all the > trials on all the runs except by the one that I want to predict. > I?m getting really good classifications better than when I was predicting one > run using all the others. Supposedly it should be a little better as I?m > training with a little more data but I?m worried I?m doing something wrong. > Could anyone let me know if I?m making some sort of mistake? > I know that there should be a more efficient way to do it but I wanted > something easy, this is my code: > predictions = [] #this is a vector that will contain the predictions of the > classifier > for i,dsTest in enumerate(ds): #go through all the trials on ds and separate > one to test > ??? clf = LinearCSVMC() > ??? fclf = FeatureSelectionClassifier(clf, fsel) > ??? dsTrain = [] > ??? dsTrain.append(ds[0:i]) #separates the training data > ??? dsTrain.append(ds[i:-1]) minor but note that :-1 would select all but last $> python -c 'print range(2)[:-1]' [0] you didn't have to do manual splitting but could've simply assigned some attribute like ds.sa['trials'] = np.arange(len(ds)) and made use of NFoldPartitioner(attr='trials') and then CrossValidation... all standard stuff back to more optimistic results, as Jo pointed out, to carry out most trustworthy analysis you should have trained/cross-validated across runs. Also ds.summary() output last tables could provide you some related information on trial orders ... which could also contribute to "optimistic" result (depending on the output of cause.. 
;) ) -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From raul at lafuentelab.org Thu Sep 24 03:40:58 2015 From: raul at lafuentelab.org (=?UTF-8?B?UmHDumwgSGVybsOhbmRleg==?=) Date: Wed, 23 Sep 2015 22:40:58 -0500 Subject: [pymvpa] Sanity check In-Reply-To: <20150924024949.GF30459@onerussian.com> References: <20150924024949.GF30459@onerussian.com> Message-ID: Thanks your method is way better than my rudimentary way; I feel like a newbie, I will definitely do it your way. Just another thing after running the analysis, how can I get the predictions for each trial?, I'm interested on the actual prediction, not if it was accurate or not On Wed, Sep 23, 2015 at 9:49 PM, Yaroslav Halchenko wrote: > > On Wed, 23 Sep 2015, Ra?l Hern?ndez wrote: > > > Hi, I?m trying to evaluate on trial by trial basis how well a region can > > predict the stimulus being presented to compare it with the participant?s > > judgment of the stimulus. So I?m training the classifier with data from > all the > > trials on all the runs except by the one that I want to predict. > > > > I?m getting really good classifications better than when I was > predicting one > > run using all the others. Supposedly it should be a little better as I?m > > training with a little more data but I?m worried I?m doing something > wrong. > > > > Could anyone let me know if I?m making some sort of mistake? > > > > I know that there should be a more efficient way to do it but I wanted > > something easy, this is my code: > > > > predictions = [] #this is a vector that will contain the predictions of > the > > classifier > > > for i,dsTest in enumerate(ds): #go through all the trials on ds and > separate > > one to test > > > clf = LinearCSVMC() > > > fclf = FeatureSelectionClassifier(clf, fsel) > > > dsTrain = [] > > > dsTrain.append(ds[0:i]) #separates the training data > > > dsTrain.append(ds[i:-1]) > > minor but note that :-1 would select all but last > > $> python -c 'print range(2)[:-1]' > [0] > > you didn't have to do manual splitting but could've simply assigned some > attribute like > > ds.sa['trials'] = np.arange(len(ds)) > > and made use of NFoldPartitioner(attr='trials') and then > CrossValidation... all standard stuff > > back to more optimistic results, as Jo pointed out, to carry out most > trustworthy analysis you should have trained/cross-validated across runs. > Also ds.summary() output last tables could provide you some related > information on trial orders ... which could also contribute to "optimistic" > result (depending on the output of cause.. ;) ) > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... 
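To make the suggested setup concrete, here is a minimal sketch of both cross-validation schemes, assuming ds is the per-trial dataset, the run labels live in ds.sa.chunks, and fclf is the FeatureSelectionClassifier defined in the original post:

import numpy as np
from mvpa2.suite import CrossValidation, NFoldPartitioner

ds.sa['trials'] = np.arange(len(ds))   # one unique id per trial
cv_trial = CrossValidation(fclf, NFoldPartitioner(attr='trials'))   # leave-one-trial-out
cv_run = CrossValidation(fclf, NFoldPartitioner(attr='chunks'))     # leave-one-run-out, for comparison
print np.mean(cv_trial(ds)), np.mean(cv_run(ds))   # mean error under each scheme

With the default errorfx this yields one error value per fold; the follow-up below shows how to get the raw predictions instead.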
URL: From debian at onerussian.com Thu Sep 24 04:45:52 2015 From: debian at onerussian.com (Yaroslav Halchenko) Date: Thu, 24 Sep 2015 00:45:52 -0400 Subject: [pymvpa] Sanity check In-Reply-To: References: <20150924024949.GF30459@onerussian.com> Message-ID: <20150924044552.GG30459@onerussian.com> On Wed, 23 Sep 2015, Ra?l Hern?ndez wrote: > Thanks your method is way better than my rudimentary way; I feel like a > newbie,?I will definitely do it your way. I would recommend going through the tutorial, possibly following the layout of our recent courses (which you might also catch somewhere/sometime) http://www.pymvpa.org/courses.html#chap-courses > Just another thing after running the analysis, how can I get the predictions > for each trial?, I'm interested on the actual prediction, not if it was > accurate or not set errorfx=None to the CrossValidation ... no error would be estimated and actual predictions will be output -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From guangsheng.liang at ttu.edu Thu Sep 24 07:10:20 2015 From: guangsheng.liang at ttu.edu (Liang, Guangsheng) Date: Thu, 24 Sep 2015 07:10:20 +0000 Subject: [pymvpa] LinearCSVMC module not found Message-ID: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> Dear PyMVPAer, I am a newbie for PyMVPA, and was trying to do pattern analysis on my EEG data, but was experiencing a problem about the module LinearCSVMC. It will be very appreciated if some one could offer me some help. I am using mac Yosemite v10.10.5. I setup the PyMVPA from the source according to the installation tutorial exactly. Everything goes well until I tried to use LinearCSVMC classifier. It returns, "name ?LinearCSVMC? is not defined?. I searched the mail list record, and found there is a thread about the same issue. http://lists.alioth.debian.org/pipermail/pkg-exppsy-pymvpa/2015q3/003173.html It seems like the swig version causing this problem, so I used macport to downgrade my swig from 3.0.7 to 3.0.2, but it does not work neither. I reinstalled the PyMVPA and saw warnings and errors, but I am not sure if they caused this problem. In the step ?python setup.py build_ext?, it has, #### ['/usr/bin/clang', '-fno-strict-aliasing', '-fno-common', '-dynamic', '-pipe', '-Os', '-fwrapv', '-DNDEBUG', '-g', '-fwrapv', '-O3', '-Wall', '-Wstrict-prototypes'] ####### Missing compiler_cxx fix for UnixCCompiler In the step ?python setup.py install?, it has, package init file 'mvpa2/tests/badexternals/__init__.py' not found (or not a regular file) package init file 'mvpa2/tests/badexternals/__init__.py' not found (or not a regular file) #### ['/usr/bin/clang', '-fno-strict-aliasing', '-fno-common', '-dynamic', '-pipe', '-Os', '-fwrapv', '-DNDEBUG', '-g', '-fwrapv', '-O3', '-Wall', '-Wstrict-prototypes'] ####### Missing compiler_cxx fix for UnixCCompiler Also, when I was trying to use PyMVPA in iPython, it is very strange to see that ipython returns ?no module named mvpa2?, but in python, mvpa2 can be successfully loaded. Thank you very much for any helps or advises! Carl -------------- next part -------------- An HTML attachment was scrubbed... 
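Picking up the errorfx=None suggestion from the Sanity check thread above, a sketch of how the raw per-trial predictions could be pulled out (same assumed ds, fclf and trials attribute as in the earlier sketch):

from mvpa2.suite import CrossValidation, NFoldPartitioner

cv = CrossValidation(fclf, NFoldPartitioner(attr='trials'), errorfx=None)
res = cv(ds)
print res.samples[:5]     # predicted targets, one row per tested trial
print res.sa.targets[:5]  # the matching true targets travel along in the result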
URL: From n.n.oosterhof at googlemail.com Thu Sep 24 09:06:23 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 24 Sep 2015 11:06:23 +0200 Subject: [pymvpa] LinearCSVMC module not found In-Reply-To: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> References: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> Message-ID: <889746E4-7983-4515-B9A8-57D7A5BD6123@googlemail.com> > On 24 Sep 2015, at 09:10, Liang, Guangsheng wrote: > > I am using mac Yosemite v10.10.5. I setup the PyMVPA from the source according to the installation tutorial exactly. Everything goes well until I tried to use LinearCSVMC classifier. It returns, "name 'LinearCSVMC' is not defined". [...] > > In the step "python setup.py build_ext", it has, > #### ['/usr/bin/clang', '-fno-strict-aliasing', '-fno-common', '-dynamic', '-pipe', '-Os', '-fwrapv', '-DNDEBUG', '-g', '-fwrapv', '-O3', '-Wall', '-Wstrict-prototypes'] ####### > > Missing compiler_cxx fix for UnixCCompiler This may be just a warning message [1]. Did this create any .so files?
What happens if you run (in the PyMVPA root directory): > > find mvpa2 -iname '*.so' > >On my system (also running Yosemite) I get: > > $ find mvpa2 -iname '*.so' > mvpa2/clfs/libsmlrc/smlrc.so > mvpa2/clfs/libsvmc/_svmc.so > >If that does not return any matches in your case, what is the output of: > > find . -iname '*.so' > >> Also, when I was trying to use PyMVPA in iPython, it is very strange to see that ipython returns ?no module named mvpa2?, but in python, mvpa2 can be successfully loaded. > >This may be a path issue. Did you add the PyMVPA path to the PYTHONPATH? >On my system, I use: > > export PYTHONPATH=/Users/nick/git/PyMVPA:${PYTHONPATH} > >where "/Users/nick/git/PyMVPA" is the PyMVPA root directory on my system. > >(by adding this line to ~/.bash_profile, this is run every time a new Terminal window is opened.) >_______________________________________________ >Pkg-ExpPsy-PyMVPA mailing list >Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa From n.n.oosterhof at googlemail.com Thu Sep 24 10:57:37 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Thu, 24 Sep 2015 12:57:37 +0200 Subject: [pymvpa] LinearCSVMC module not found In-Reply-To: <317FFD42-69A0-43A9-94C5-899A4E804433@ttu.edu> References: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> <889746E4-7983-4515-B9A8-57D7A5BD6123@googlemail.com> <317FFD42-69A0-43A9-94C5-899A4E804433@ttu.edu> Message-ID: > On 24 Sep 2015, at 11:45, Liang, Guangsheng wrote: > > Yes, I do find smlrc.so, and _svmc.so file after installing anaconda on my computer. Are these in the ${PYMVPA_ROOT}/mvpa2 directory, or in a ${PYMVPA_ROOT}//build/*/mvpa2 directory? In the latter case, you may have to copy them to the ${PYMVPA_ROOT}/mvpa2 directory. In more detail, as described earlier [1], I noticed that when running python setup.py build_ext on OSX, the output is stored in the build directory. Manually copying the files to the mvpa2 directory made them accessible in PyMVPA. Specifically, the following worked when run from the PyMVPA root directory: for ext in .so .o; do for i in `find build -iname "*${ext}"`; do j=`echo $i | cut -f3- -d/`; cp $i $j; done; done Can you try and see if that resolves the issue? If it does not, can you provide the full error message and also, from within python/ipython, the output of: import mvpa2 mvpa2.wtf() [1] http://lists.alioth.debian.org/pipermail/pkg-exppsy-pymvpa/2015q3/003176.html From guangsheng.liang at ttu.edu Thu Sep 24 17:56:32 2015 From: guangsheng.liang at ttu.edu (Liang, Guangsheng) Date: Thu, 24 Sep 2015 17:56:32 +0000 Subject: [pymvpa] LinearCSVMC module not found In-Reply-To: References: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> <889746E4-7983-4515-B9A8-57D7A5BD6123@googlemail.com> <317FFD42-69A0-43A9-94C5-899A4E804433@ttu.edu> Message-ID: <0A2B2BDA-8B75-4126-821C-79D3031AB99D@ttu.edu> The .so files are found in site-packages/mvpa2/clfs file. Strange thing is, even those files are in the mvpa2 module, when I called out mvpa2.wtf(), it shows that libsvm has not been loaded yet. I used ?from mvpa2.suite import *? to load the module. There are two warnings when I loaded the module, and not sure if those could help: /Users/tumo/anaconda/lib/python2.7/site-packages/numpy/lib/utils.py:95: DeprecationWarning: `scipy.weave` is deprecated, use `weave` instead! 
warnings.warn(depdoc, DeprecationWarning) /Users/tumo/anaconda/lib/python2.7/site-packages/numpy/lib/utils.py:95: DeprecationWarning: `scipy.linalg.calc_lwork` is deprecated! calc_lwork was an internal module in Scipy and has been removed. Several functions in scipy.linalg.lapack have *_lwork variants that perform the lwork calculation (from Scipy >= 0.15.0), or allow passing in LWORK=-1 argument to perform the computation. Here you are the wtf() output: Current date: 2015-09-24 12:51 PyMVPA: Version: 2.4.0 Hash: $Format:%H$ Path: /Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/__init__.pyc Version control (GIT): GIT information could not be obtained due "/Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/.. is not under GIT" SYSTEM: OS: posix Darwin 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 Distribution: 10.10.5/x86_64 EXTERNALS: Present: cPickle, ctypes, good scipy.stats.rv_continuous._reduce_func(floc,fscale), good scipy.stats.rv_discrete.ppf, griddata, gzip, h5py, hdf5, ipython, libsvm verbosity control, lxml, matplotlib, mock, nose, numpy, numpy_correct_unique, pylab, pylab plottable, running ipython env, scipy, skl, statsmodels, weave Absent: atlas_fsl, atlas_pymvpa, cran-energy, elasticnet, glmnet, good scipy.stats.rdist, hcluster, joblib, lars, liblapack.so, libsvm, mass, mdp, mdp ge 2.4, nibabel, nipy, nipy.neurospin, numpydoc, openopt, pprocess, pywt, pywt wp reconstruct, pywt wp reconstruct fixed, reportlab, rpy2, sg ge 0.6.4, sg ge 0.6.5, sg_fixedcachesize, shogun, shogun.krr, shogun.lightsvm, shogun.mpd, shogun.svmocas, shogun.svrlight Versions of critical externals: ctypes : 1.1.0 h5py : 2.5.0 hdf5 : 1.8.15 ipython : 3.2.0 lxml : 3.4.4 matplotlib : 1.4.3 mock : 1.0.1 numpy : 1.9.2 scipy : 0.15.1 skl : 0.16.1 Matplotlib backend: MacOSX RUNTIME: PyMVPA Environment Variables: PYTHONEXECUTABLE : "/Users/tumo/anaconda/bin/python" PyMVPA Runtime Configuration: [general] verbose = 1 [externals] have running ipython env = yes have ipython = yes have numpy = yes have scipy = yes have matplotlib = yes have h5py = yes have reportlab = no have weave = yes have good scipy.stats.rdist = no have good scipy.stats.rv_discrete.ppf = yes have good scipy.stats.rv_continuous._reduce_func(floc,fscale) = yes have pylab = yes have lars = no have elasticnet = no have glmnet = no have skl = yes have ctypes = yes have libsvm = no have shogun = no have openopt = no have nibabel = no have mdp = no have mdp ge 2.4 = no have nipy = no have statsmodels = yes have pywt = no have cpickle = yes have gzip = yes have cran-energy = no have griddata = yes have nipy.neurospin = no have lxml = yes have atlas_fsl = no have atlas_pymvpa = no have hcluster = no have hdf5 = yes have joblib = no have liblapack.so = no have libsvm verbosity control = yes have mass = no have mock = yes have nose = yes have numpy_correct_unique = yes have numpydoc = no have pprocess = no have pylab plottable = yes have pywt wp reconstruct = no have pywt wp reconstruct fixed = no have rpy2 = no have sg ge 0.6.4 = no have sg ge 0.6.5 = no have sg_fixedcachesize = no have shogun.krr = no have shogun.lightsvm = no have shogun.mpd = no have shogun.svmocas = no have shogun.svrlight = no On 9/24/15, 5:57 AM, "Pkg-ExpPsy-PyMVPA on behalf of Nick Oosterhof" on behalf of n.n.oosterhof at googlemail.com> wrote: On 24 Sep 2015, at 11:45, Liang, Guangsheng > wrote: Yes, I do find smlrc.so, and _svmc.so file after installing anaconda on my computer. 
Are these in the ${PYMVPA_ROOT}/mvpa2 directory, or in a ${PYMVPA_ROOT}//build/*/mvpa2 directory? In the latter case, you may have to copy them to the ${PYMVPA_ROOT}/mvpa2 directory. In more detail, as described earlier [1], I noticed that when running python setup.py build_ext on OSX, the output is stored in the build directory. Manually copying the files to the mvpa2 directory made them accessible in PyMVPA. Specifically, the following worked when run from the PyMVPA root directory: for ext in .so .o; do for i in `find build -iname "*${ext}"`; do j=`echo $i | cut -f3- -d/`; cp $i $j; done; done Can you try and see if that resolves the issue? If it does not, can you provide the full error message and also, from within python/ipython, the output of: import mvpa2 mvpa2.wtf() [1] http://lists.alioth.debian.org/pipermail/pkg-exppsy-pymvpa/2015q3/003176.html _______________________________________________ Pkg-ExpPsy-PyMVPA mailing list Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa -------------- next part -------------- An HTML attachment was scrubbed... URL: From billbrod at gmail.com Fri Sep 25 21:09:12 2015 From: billbrod at gmail.com (Bill Broderick) Date: Fri, 25 Sep 2015 17:09:12 -0400 Subject: [pymvpa] Problems interpreting GroupClusterThresholding results Message-ID: Hi all, I've run GroupClusterThresholding on my permutation results (50 permutations per subject), and I'm having trouble interpreting the results. If I'm understanding this correctly, clthr.fa.featurewise_thresh is the threshold value that a searchlight needs in order to be considered significant, right? So if a value in the real results is less than the corresponding value here, it should be considered for adding to a cluster? What then is fa.clusters_featurewise_thresh? Because there are many more voxels for me that are less than featurewise_thresh, but only a small handful show up in clusters_featurewise_thresh... Thanks, Bill -------------- next part -------------- An HTML attachment was scrubbed... URL: From dinga92 at gmail.com Mon Sep 28 09:21:13 2015 From: dinga92 at gmail.com (Richard Dinga) Date: Mon, 28 Sep 2015 11:21:13 +0200 Subject: [pymvpa] Problems interpreting GroupClusterThresholding results Message-ID: Hi, I am pretty sure, that they are added to a cluster if they are higher than a threshold. You are looking for voxels with the accuracy significantly higher than a chance. Best, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Mon Sep 28 09:36:53 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Mon, 28 Sep 2015 11:36:53 +0200 Subject: [pymvpa] LinearCSVMC module not found In-Reply-To: <0A2B2BDA-8B75-4126-821C-79D3031AB99D@ttu.edu> References: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> <889746E4-7983-4515-B9A8-57D7A5BD6123@googlemail.com> <317FFD42-69A0-43A9-94C5-899A4E804433@ttu.edu> <0A2B2BDA-8B75-4126-821C-79D3031AB99D@ttu.edu> Message-ID: > On 24 Sep 2015, at 19:56, Liang, Guangsheng wrote: > > The .so files are found in site-packages/mvpa2/clfs file. > Strange thing is, even those files are in the mvpa2 module, when I called out mvpa2.wtf(), it shows that libsvm has not been loaded yet. I used ?from mvpa2.suite import *? to load the module. That's a bit strange. You are not able to use the SVM classifier then? If not, what happens if you run svm_margin.py in doc/examples? 
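Another quick check that may help narrow this down is whether the compiled libsvm extension can be imported at all; a sketch, with the module path taken from the .so locations mentioned above:

from mvpa2.base import externals
print externals.exists('libsvm')        # False means PyMVPA could not load the libsvm bindings

from mvpa2.clfs.libsvmc import _svmc    # raises ImportError if the compiled _svmc.so is absent
print _svmc.__file__                    # shows which extension file actually got picked up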
> > There are two warnings when I loaded the module, and not sure if those could help: > > /Users/tumo/anaconda/lib/python2.7/site-packages/numpy/lib/utils.py:95: DeprecationWarning: `scipy.weave` is deprecated, use `weave` instead! > warnings.warn(depdoc, DeprecationWarning) > /Users/tumo/anaconda/lib/python2.7/site-packages/numpy/lib/utils.py:95: DeprecationWarning: `scipy.linalg.calc_lwork` is deprecated! > > > calc_lwork was an internal module in Scipy and has been removed. > > > Several functions in scipy.linalg.lapack have *_lwork variants > that perform the lwork calculation (from Scipy >= 0.15.0), or > allow passing in LWORK=-1 argument to perform the computation. I don't get those warnings, but looking at your wtf() output it seems your packages are quite recent. It does not seem to me that this would be the cause of SVM not working (but I could be wrong). From billbrod at gmail.com Mon Sep 28 12:59:20 2015 From: billbrod at gmail.com (Bill Broderick) Date: Mon, 28 Sep 2015 08:59:20 -0400 Subject: [pymvpa] Problems interpreting GroupClusterThresholding results In-Reply-To: References: Message-ID: Hi, Oh, I had skimmed over the page and had not noticed the algorithm was expecting accuracy maps, so I was passing it error maps (thus lower is better), hence my confusion. I'm performing searchlight support vector regression, not classification, so my error goes from 0 to 2 instead of 0 to 1. Can I simply take `2 - error` (so higher is better) or does the algorithm require the values to lie between 0 and 1? The docs say the accuracy maps must be the result of classification, but is there a specific reason regression won't work? Thanks, Bill On Sep 28, 2015 5:21 AM, "Richard Dinga" wrote: > Hi, > I am pretty sure, that they are added to a cluster if they are higher than > a threshold. > You are looking for voxels with the accuracy significantly higher than a > chance. > > Best, > Richard > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dinga92 at gmail.com Mon Sep 28 14:42:00 2015 From: dinga92 at gmail.com (Richard Dinga) Date: Mon, 28 Sep 2015 16:42:00 +0200 Subject: [pymvpa] Problems interpreting GroupClusterThresholding results Message-ID: > Hi, > > Oh, I had skimmed over the page and had not noticed the algorithm was > expecting accuracy maps, so I was passing it error maps (thus lower is > better), hence my confusion. > I'm performing searchlight support vector regression, not classification, > so my error goes from 0 to 2 instead of 0 to 1. Can I simply take `2 - > error` (so higher is better) or does the algorithm require the values to > lie between 0 and 1? You can do that, as long as higher is better. Algorithm will just sort them and find nth biggest value. > The docs say the accuracy maps must be the result of > classification, but is there a specific reason regression won't work? I have no idea :) Original paper doesn't mention regression. Will p-values and FDR still make sense? -------------- next part -------------- An HTML attachment was scrubbed... 
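In practice the flip can be applied to the datasets before training and applying the thresholding. A sketch, where perm_maps and mean_map are hypothetical datasets holding the per-subject permutation maps and the observed group-average map, both on the 0-2 error scale:

from mvpa2.suite import GroupClusterThreshold

perm_maps.samples = 2.0 - perm_maps.samples   # flip so that higher means better
mean_map.samples = 2.0 - mean_map.samples

clthr = GroupClusterThreshold()   # defaults; subjects are identified via sa.chunks
clthr.train(perm_maps)            # per-subject permutation maps define the null distribution
res = clthr(mean_map)             # thresholded observed group-average map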
URL: From michael.hanke at gmail.com Mon Sep 28 14:51:08 2015 From: michael.hanke at gmail.com (Michael Hanke) Date: Mon, 28 Sep 2015 16:51:08 +0200 Subject: [pymvpa] Problems interpreting GroupClusterThresholding results In-Reply-To: References: Message-ID: Hi, On Mon, Sep 28, 2015 at 4:42 PM, Richard Dinga wrote: > > I'm performing searchlight support vector regression, not classification, > > so my error goes from 0 to 2 instead of 0 to 1. Can I simply take `2 - > > error` (so higher is better) or does the algorithm require the values to > > lie between 0 and 1? > > You can do that, as long as higher is better. Algorithm will just sort them and find nth biggest value. > > > The docs say the accuracy maps must be the result of > > classification, but is there a specific reason regression won't work? > > I have no idea :) Original paper doesn't mention regression. Will p-values and FDR still make sense? > > Right now I cannot think of a reason we it wouldn't work -- as long as "bigger is better". The algorithm converts pretty much anything into probabilities/frequencies. 1. Value corresponding to a particular probability under H0 -> threshold. 2. frequency with which a particular blob size has been observered in a group average map under H0. Should work. Michael -- Michael Hanke http://mih.voxindeserto.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From guangsheng.liang at ttu.edu Mon Sep 28 16:28:08 2015 From: guangsheng.liang at ttu.edu (Liang, Guangsheng) Date: Mon, 28 Sep 2015 16:28:08 +0000 Subject: [pymvpa] LinearCSVMC module not found In-Reply-To: References: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> <889746E4-7983-4515-B9A8-57D7A5BD6123@googlemail.com> <317FFD42-69A0-43A9-94C5-899A4E804433@ttu.edu> <0A2B2BDA-8B75-4126-821C-79D3031AB99D@ttu.edu> Message-ID: Hi Nick, Thank you for your email! Here is the error returned: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/quadpack.py:293: UserWarning: Extremely bad integrand behavior occurs at some points of the integration interval. warnings.warn(msg) WARNING: None of SVM implementation libraries was found * Please note: warnings are printed only once, but underlying problem might occur many times * Traceback (most recent call last): File "svm_margin.py", line 30, in from mvpa2.clfs.svm import LinearCSVMC ImportError: cannot import name LinearCSVMC It seems like the SVM has not been loaded in the python, but I do see the libsvmc and libsmlrc folder in the mvpa2 folder. On 9/28/15, 4:36 AM, "Pkg-ExpPsy-PyMVPA on behalf of Nick Oosterhof" on behalf of n.n.oosterhof at googlemail.com> wrote: On 24 Sep 2015, at 19:56, Liang, Guangsheng > wrote: The .so files are found in site-packages/mvpa2/clfs file. Strange thing is, even those files are in the mvpa2 module, when I called out mvpa2.wtf(), it shows that libsvm has not been loaded yet. I used ?from mvpa2.suite import *? to load the module. That's a bit strange. You are not able to use the SVM classifier then? If not, what happens if you run svm_margin.py in doc/examples? There are two warnings when I loaded the module, and not sure if those could help: /Users/tumo/anaconda/lib/python2.7/site-packages/numpy/lib/utils.py:95: DeprecationWarning: `scipy.weave` is deprecated, use `weave` instead! warnings.warn(depdoc, DeprecationWarning) /Users/tumo/anaconda/lib/python2.7/site-packages/numpy/lib/utils.py:95: DeprecationWarning: `scipy.linalg.calc_lwork` is deprecated! 
calc_lwork was an internal module in Scipy and has been removed. Several functions in scipy.linalg.lapack have *_lwork variants that perform the lwork calculation (from Scipy >= 0.15.0), or allow passing in LWORK=-1 argument to perform the computation. I don't get those warnings, but looking at your wtf() output it seems your packages are quite recent. It does not seem to me that this would be the cause of SVM not working (but I could be wrong). _______________________________________________ Pkg-ExpPsy-PyMVPA mailing list Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Mon Sep 28 16:38:05 2015 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Mon, 28 Sep 2015 18:38:05 +0200 Subject: [pymvpa] LinearCSVMC module not found In-Reply-To: References: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> <889746E4-7983-4515-B9A8-57D7A5BD6123@googlemail.com> <317FFD42-69A0-43A9-94C5-899A4E804433@ttu.edu> <0A2B2BDA-8B75-4126-821C-79D3031AB99D@ttu.edu> Message-ID: > On 28 Sep 2015, at 18:28, Liang, Guangsheng wrote: > > Here is the error returned: > > /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/quadpack.py:293: UserWarning: Extremely bad integrand behavior occurs at some points of the > integration interval. > warnings.warn(msg) > WARNING: None of SVM implementation libraries was found > * Please note: warnings are printed only once, but underlying problem might occur many times * > Traceback (most recent call last): > File "svm_margin.py", line 30, in > from mvpa2.clfs.svm import LinearCSVMC > ImportError: cannot import name LinearCSVMC Can you please verify that the following files exist on your machine? /Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/clfs/libsmlrc/smlrc.so /Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/clfs/libsmlrc/smlrc.o /Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/clfs/libsvmc/_svmc.so /Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/clfs/libsvmc/_svmc.o Maybe it's a path issue. What is the output of: python -c "import sys;print('\n'.join(sys.path))" From guangsheng.liang at ttu.edu Mon Sep 28 17:04:01 2015 From: guangsheng.liang at ttu.edu (Liang, Guangsheng) Date: Mon, 28 Sep 2015 17:04:01 +0000 Subject: [pymvpa] LinearCSVMC module not found In-Reply-To: References: <44A3EA40-95D8-4B19-A8D0-05B4DE2F49FE@ttu.edu> <889746E4-7983-4515-B9A8-57D7A5BD6123@googlemail.com> <317FFD42-69A0-43A9-94C5-899A4E804433@ttu.edu> <0A2B2BDA-8B75-4126-821C-79D3031AB99D@ttu.edu> Message-ID: <7396F1EE-0BF8-4C86-8EAC-BA1AF8F9553E@ttu.edu> Well, I don?t see the smlrc.o and _svmc.o files. Probably these cause the problems? I only find smlr.o and svmc_wrap.o in the PyMVPA_root file but cannot find _svmc.o file. Shall I manually copy those two files (smlr.o, svmc_wrap.o) into PyMVPA files? On 9/28/15, 11:38 AM, "Pkg-ExpPsy-PyMVPA on behalf of Nick Oosterhof" wrote: > >> On 28 Sep 2015, at 18:28, Liang, Guangsheng wrote: >> >> Here is the error returned: >> >> /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/scipy/integrate/quadpack.py:293: UserWarning: Extremely bad integrand behavior occurs at some points of the >> integration interval. 
>> warnings.warn(msg) >> WARNING: None of SVM implementation libraries was found >> * Please note: warnings are printed only once, but underlying problem might occur many times * >> Traceback (most recent call last): >> File "svm_margin.py", line 30, in >> from mvpa2.clfs.svm import LinearCSVMC >> ImportError: cannot import name LinearCSVMC > >Can you please verify that the following files exist on your machine? > >/Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/clfs/libsmlrc/smlrc.so >/Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/clfs/libsmlrc/smlrc.o >/Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/clfs/libsvmc/_svmc.so >/Users/tumo/anaconda/lib/python2.7/site-packages/mvpa2/clfs/libsvmc/_svmc.o > >Maybe it's a path issue. What is the output of: > > python -c "import sys;print('\n'.join(sys.path))" > > > >_______________________________________________ >Pkg-ExpPsy-PyMVPA mailing list >Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa From billbrod at gmail.com Tue Sep 29 14:51:43 2015 From: billbrod at gmail.com (Bill Broderick) Date: Tue, 29 Sep 2015 10:51:43 -0400 Subject: [pymvpa] Problems interpreting GroupClusterThresholding results In-Reply-To: References: Message-ID: Hi all, Thanks for the pointers! The algorithm runs find with error between 0 and 2, though I'm running into some issues because I forgot to include the mapper for my dataset in the permutation dataset and when I combined all the subject-level actual results into one dataset. So PyMVPA is stuck using the IdentityMapper, which definitely doesn't work. It would be helpful if GroupClusterThreshold printed a warning if the datasets it's training or called on don't have any mappers, since it's unlikely that a user will actually use this on a dataset without any mappers, right? Thanks, Bill On Mon, Sep 28, 2015 at 10:51 AM, Michael Hanke wrote: > Hi, > > On Mon, Sep 28, 2015 at 4:42 PM, Richard Dinga wrote: > >> > I'm performing searchlight support vector regression, not classification, >> > so my error goes from 0 to 2 instead of 0 to 1. Can I simply take `2 - >> > error` (so higher is better) or does the algorithm require the values to >> > lie between 0 and 1? >> >> You can do that, as long as higher is better. Algorithm will just sort them and find nth biggest value. >> >> > The docs say the accuracy maps must be the result of >> > classification, but is there a specific reason regression won't work? >> >> I have no idea :) Original paper doesn't mention regression. Will p-values and FDR still make sense? >> >> Right now I cannot think of a reason we it wouldn't work -- as long as > "bigger is better". > > The algorithm converts pretty much anything into > probabilities/frequencies. 1. Value corresponding to a particular > probability under H0 -> threshold. 2. frequency with which a particular > blob size has been observered in a group average map under H0. > > Should work. > > Michael > > > > > -- > Michael Hanke > http://mih.voxindeserto.de > > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL:
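On the dropped-mapper point, one way to avoid falling back on the IdentityMapper is to reattach the original mapper after stacking. A rough sketch, where subject_maps stands in for the list of per-subject result datasets (all computed on the same mask):

from mvpa2.suite import vstack

group = vstack(subject_maps)                     # stacks samples and feature attributes
# dataset-level attributes such as a.mapper are easily lost in the process,
# so reattach one explicitly (all maps here share the same mask/geometry)
group.a['mapper'] = subject_maps[0].a.mapper

With the mapper back in place the cluster thresholding no longer has to fall back on an IdentityMapper.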