From mih at debian.org Wed Jul 19 05:11:40 2017
From: mih at debian.org (Michael Hanke)
Date: Wed, 19 Jul 2017 07:11:40 +0200
Subject: [pymvpa] Coordinator and Software Developer - brain imaging research computing
Message-ID: <20170719051140.uwuuom6cepbzmarv@meiner>

The CBBS (http://www.cbbs.eu) is establishing a common computing platform for its members and affiliated institutions. Research institutions in Magdeburg (Otto-von-Guericke-University, the Leibniz-Institute for Neurobiology) employ a multitude of research-dedicated MRI scanners (Siemens Magnetom (7 Tesla), Siemens Prisma, Siemens Verio, Siemens Skyra, and Philips Achieva dStream). The computing platform aims to facilitate brain imaging research by offering high-throughput computing capabilities for the analysis of large amounts of neuroimaging data, with uniform access to common data processing algorithms as well as cutting-edge method developments that are fully integrated with the data acquisition and data storage facilities in Magdeburg. Particular emphasis is put on improved accessibility of the provided technologies through structured graphical user interfaces and default parameterization, to help translate state-of-the-art data analysis technologies into clinical research and application.

The successful applicant will make key contributions to the design, implementation, documentation, and maintenance of the computing platform. The applicant will work with research labs in Magdeburg to identify relevant common data processing workflows, and help integrate existing implementations with the computing framework. The applicant will also work with the providers of general-purpose computing and storage resources in Magdeburg to produce a robust, sustainable, and maximally efficient system.

More information on the position is available at http://tinyurl.com/y9xm55sq

--
J.-Prof. Dr. Michael Hanke
Psychoinformatik Labor, Institut für Psychologie II
Otto-von-Guericke-Universität Magdeburg, Universitätsplatz 2, Geb. 23
Tel.: +49(0)391-67-18481 GPG: 4096R/C073D2287FFB9E9B

From llin90 at illinois.edu Wed Aug 2 04:22:06 2017
From: llin90 at illinois.edu (Lin, Lynda C)
Date: Wed, 2 Aug 2017 04:22:06 +0000
Subject: [pymvpa] Classification training on run1, testing on run2 only (no cross validation)
Message-ID: <6CC27C46CBD65B41999489EF63FB180F439DBBBF@CITESMBX4.ad.uillinois.edu>

Hello,

I'm new to pyMVPA and have read through all of the tutorial and other documentation, but can't seem to figure out what is probably a very simple question: is there a way to get the individual confusion matrices for each classification (each run) that result from using the HalfPartitioner generator? I'm guessing it has something to do with the attributes/parameters of the generator? I'm trying to do this both for a whole-brain classification and for a searchlight.

I have 2 runs/chunks (each run has 72 trials and each trial is associated with an Ingroup/Outgroup target). But I only want it to train on run 1 and test on run 2 (I'm not interested in the results from train-on-run-2/test-on-run-1, and thus would just want to look at the confusion matrix for train run 1, test run 2 - i.e. an Ingroup/Outgroup classification with 72 trials rather than 144 classifications). For the whole brain my goal is to calculate the TPR for Ingroup and Outgroup targets, only training on run 1 and testing on run 2.
I've tried it three different ways and I'm getting different results for each way, so I just wanted to know if any of these ways is valid:

1) Using the manual split example from the tutorial and calling the "training_stats" conditional attribute on the classifier. In the tutorial we can get the individual accuracies for each run through cv_results.samples, but I'm interested in the TPR (True Positive Rate) for Ingroup and Outgroup separately, so I'm looking to print the confusion matrix to calculate these numbers:

ds_split1 = ds[ds.sa.chunks == 1.]
ds_split2 = ds[ds.sa.chunks == 2.]
clf = LinearCSVMC(enable_ca=['training_stats'])
clf.set_postproc(BinaryFxNode(mean_mismatch_error, 'targets'))
clf.train(ds_split1)
err = clf(ds_split2)
clf.ca.training_stats.as_string(description=True)

2) Using the HalfPartitioner's "count" argument:

clf = LinearCSVMC(enable_ca=['training_stats'])
# The training_stats confusion matrix from this method doesn't match the one above
hpart = HalfPartitioner(count=1, attr='chunks')
cvte = CrossValidation(clf, hpart, errorfx=lambda p, t: np.mean(p == t), enable_ca=['stats'])
cv_results = cvte(ds)
cvte.ca.stats.as_string(description=True)

3) Keeping manual counters of the predicted vs. actual targets:

ds_split1 = ds[ds.sa.chunks == 1.]
ds_split2 = ds[ds.sa.chunks == 2.]
clf = LinearCSVMC()
clf.train(ds_split1)
predictions = clf.predict(ds_split2.samples)
prediction_values = predictions == ds_split2.sa.targets
num_correct_ingroup = 0.0   # counters need to start at zero
num_correct_outgroup = 0.0
counter = 0
for stimulus in ds_split2.sa.targets:
    current_prediction_value = prediction_values[counter]
    print current_prediction_value
    if stimulus == 'I':  # Ingroup
        if current_prediction_value == True:
            num_correct_ingroup += 1.0
        counter += 1
    elif stimulus == 'O':  # Outgroup
        if current_prediction_value == True:
            num_correct_outgroup += 1.0
        counter += 1
sensitivity_ingroup = float(num_correct_ingroup / 36.0)
sensitivity_outgroup = float(num_correct_outgroup / 36.0)

I'm getting different results (Ingroup/Outgroup TPRs) for each of these methods, so I'm wondering which, if any, of the above-mentioned methods would be the correct way of getting the confusion matrices or TPRs for Ingroup/Outgroup when only training on run 1 and testing on run 2? The last method I wouldn't be able to use for a searchlight, but might it be valid for a whole brain? The confusion matrix that I get from method 1 has highly accurate predictions, which makes me doubt that's the confusion matrix I'm looking for.

Thank you for your help!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andrewsilva19 at gmail.com Wed Aug 16 07:31:08 2017
From: andrewsilva19 at gmail.com (Andrew Silva)
Date: Wed, 16 Aug 2017 00:31:08 -0700
Subject: [pymvpa] different classifier parameters for different conditions
Message-ID:

Hello pyMVPA experts,

I'm relatively new to MVPA, and an issue came up that I'd appreciate feedback on. I want to classify based on the visual angle of a stimulus. I have four different stimulus conditions corresponding to different ways of presenting the visual angle. I also have theoretical a priori predictions that classification accuracy should follow condA > condB > condC > condD. The desire is to get the highest possible classification accuracy (fairly) for each condition. So, I will run the classification many times, each time with different classifier parameters (for example, with a C-SVM I will use different C values).
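A note on the confusion-matrix question above: a classifier's "training_stats" conditional attribute describes performance on the data it was trained on, which is why the method-1 matrix looks near-perfect. For the test-set matrix of a single train-run-1/test-run-2 split, one option is to build it directly from the predictions; a minimal sketch, assuming a dataset `ds` with chunks 1/2 and targets 'I'/'O' as above:

from mvpa2.suite import LinearCSVMC
from mvpa2.clfs.transerror import ConfusionMatrix

ds_train = ds[ds.sa.chunks == 1.]
ds_test = ds[ds.sa.chunks == 2.]

clf = LinearCSVMC()
clf.train(ds_train)
predictions = clf.predict(ds_test.samples)

# confusion matrix of the held-out run only (72 trials)
cm = ConfusionMatrix(labels=list(ds.uniquetargets))
cm.add(ds_test.sa.targets, predictions)
print cm.matrix          # counts per (target, prediction) pair
print cm.stats['TPR']    # per-class true positive rates

The HalfPartitioner(count=1) route of method 2 with enable_ca=['stats'] should yield the same kind of test-set matrix in cvte.ca.stats, provided the single generated fold is indeed train-on-run-1/test-on-run-2 (worth double-checking which chunk ends up in the training half).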
My question is this: obviously not all conditions respond to a given C value in the same way, so different C values are "optimal" for different conditions. Therefore, is it correct to report the classification performance for all conditions using the same classifier parameter, or is it correct to "optimize" each condition's performance independently, such that each condition potentially uses a different classifier parameter?

I greatly appreciate your thoughts on this question - my gut tells me that all conditions should use the same parameters, but I can't find a source that definitively says so.

Thanks again,
Andy

From c.brauchli at psychologie.uzh.ch Fri Sep 1 19:26:22 2017
From: c.brauchli at psychologie.uzh.ch (c.brauchli at psychologie.uzh.ch)
Date: Fri, 1 Sep 2017 21:26:22 +0200
Subject: [pymvpa] Invariant features in surface-based searchlight analysis
In-Reply-To:
References: ,
Message-ID:

Hello pyMVPA experts,

I am currently trying to set up a surface-based searchlight analysis as documented in the pymvpa manual (http://www.pymvpa.org/examples/searchlight_surf.html), but on structural data. I am trying to classify two groups (A, B) with ten subjects each. My input data contains values for "local gyrification" in voxels that lie between the white matter (%s.smoothwm.asc) and pial (%s.pial.asc) surfaces. All other voxels are zero. I masked my input data with my query engine's voxel selection ("qe.voxsel.get_mask()"), which gives me a final data structure of (20, 235866). Yet, I have many invariant zero-value voxels, e.g. in regions of the corpus callosum where the white and pial surfaces collapse on each other and thus no values for local gyrification are assigned. I tried removing the invariant features with "remove_invariant_features", but this gives me an error when running the searchlight analysis, as the new data structure (20, 206197) does not fit the voxel selection of the query engine.

When I run my searchlight (using the LinearCSVMC classifier), I get the following warning for some voxels:

# [SLC] DBG: +0:00:08 _______[0%]_______ -1:41:17 ROI 38 (38/27307), 100 features
# WARNING: Obtained degenerate data with zero norm for training of . Scaling of C cannot be done.

I guess this comes from the invariant voxels in my dataset, yet I see no possibility to exclude them from my analyses. Also, my final accuracies are centered around 0.1 and not, as expected, around the chance level of 0.5. Are there other classifiers that deal better with invariant features, or should I rather run an "old-fashioned" spherical searchlight analysis where I could use "remove_invariant_features"?

Thanks in advance for your help!
Christian

--
Universität Zürich
Christian Brauchli, MSc
Psychologisches Institut
Neuropsychologie
Binzmühlestrasse 14, Box 25
CH-8050 Zürich
+41 44 635 74 51 Telefon
+41 44 635 74 09 Telefax
www.psychologie.uzh.ch
c.brauchli at psychologie.uzh.ch
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pegahkf at gmail.com Thu Sep 7 20:09:19 2017
From: pegahkf at gmail.com (Pegah Kassraian Fard)
Date: Thu, 7 Sep 2017 22:09:19 +0200
Subject: [pymvpa] searchlight analysis
Message-ID:

Hi,

I am new to searchlight analysis, and would appreciate some feedback:

- Normally, I perform classification based on beta files. Is it possible to perform the searchlight on these images, or is it required to input the 4-D EPI volumes?

- If the latter is true, how are the names/onsets defined as an input for the searchlight?
Many thanks, Pegah -------------- next part -------------- An HTML attachment was scrubbed... URL: From effigies at bu.edu Thu Sep 7 20:48:22 2017 From: effigies at bu.edu (Christopher Markiewicz) Date: Thu, 7 Sep 2017 16:48:22 -0400 Subject: [pymvpa] searchlight analysis In-Reply-To: References: Message-ID: Hi Pegah, You can run searchlight on the beta series. If you've run your classifications in PyMVPA, then it should be in the correct format already. If not, you may need to stack your beta files into a 4D series, with an attributes file that labels each "time point" with a condition label and run number. Chris Markiewicz On Thu, Sep 7, 2017 at 4:09 PM, Pegah Kassraian Fard wrote: > Hi, > > I am new to searchlight analysis, and would appreciate some feedback: > > - Normally, I perform classification based on beta-files. Is it possible > to perform the searchlight on these images, or is it required to input the > 4-D epi-volumes? > > - If the later is true, how are the names/onsets defined as an input for > the searchlight? > > Many thanks, > Pegah > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pegahkf at gmail.com Thu Sep 7 21:02:04 2017 From: pegahkf at gmail.com (Pegah Kassraian Fard) Date: Thu, 7 Sep 2017 23:02:04 +0200 Subject: [pymvpa] searchlight analysis In-Reply-To: References: Message-ID: Many thanks. Would it be maybe possible to send you a small script and a simple toy dataset for you to check, or would that be too much work (would be understandable if so), Pegah On Sep 7, 2017 10:59 PM, "Christopher Markiewicz" wrote: Hi Pegah, You can run searchlight on the beta series. If you've run your classifications in PyMVPA, then it should be in the correct format already. If not, you may need to stack your beta files into a 4D series, with an attributes file that labels each "time point" with a condition label and run number. Chris Markiewicz On Thu, Sep 7, 2017 at 4:09 PM, Pegah Kassraian Fard wrote: > Hi, > > I am new to searchlight analysis, and would appreciate some feedback: > > - Normally, I perform classification based on beta-files. Is it possible > to perform the searchlight on these images, or is it required to input the > 4-D epi-volumes? > > - If the later is true, how are the names/onsets defined as an input for > the searchlight? > > Many thanks, > Pegah > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > _______________________________________________ Pkg-ExpPsy-PyMVPA mailing list Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Fri Sep 8 01:41:22 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Thu, 7 Sep 2017 21:41:22 -0400 Subject: [pymvpa] searchlight analysis In-Reply-To: References: Message-ID: <20170908014122.5hng3sxzqcfqu4yr@hopa.kiewit.dartmouth.edu> On Thu, 07 Sep 2017, Pegah Kassraian Fard wrote: > Many thanks. 
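Following the suggestions above (a stack of 3D beta volumes plus a matching list of condition labels), a minimal sketch of loading a beta series for a searchlight; the file pattern, labels, run numbering, and mask name here are assumptions for illustration, not taken from the original posts:

from glob import glob
from mvpa2.datasets.mri import fmri_dataset

beta_fns = sorted(glob('beta_*.nii'))       # one 3D beta image per trial, in a fixed order
targets = (['A'] * 18 + ['B'] * 18) * 2     # hypothetical condition labels matching the file order
chunks = [1] * 36 + [2] * 36                # hypothetical run numbers, used later for cross-validation
ds = fmri_dataset(beta_fns, targets=targets, chunks=chunks, mask='brain_mask.nii')

Since each sample is already a per-trial beta estimate, no onset information is needed at this point - the condition names simply become the targets.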
Would it be maybe possible to send you a small script and a
> simple toy dataset for you to check, or would that be too much work (would
> be understandable if so),

just post it here (url to dataset), and someone might be of help

--
Yaroslav O. Halchenko
Center for Open Neuroscience     http://centerforopenneuroscience.org
Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
WWW:   http://www.linkedin.com/in/yarik

From yoh at onerussian.com Thu Sep 7 22:07:14 2017
From: yoh at onerussian.com (Yaroslav Halchenko)
Date: Thu, 07 Sep 2017 18:07:14 -0400
Subject: [pymvpa] searchlight analysis
In-Reply-To:
References:
Message-ID: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com>

Quick one... Just give a list of files to fmri_dataset pointing to those 3d volumes, and give a list of targets of matching length.

On September 7, 2017 4:09:19 PM EDT, Pegah Kassraian Fard wrote:
>Hi,
>
>I am new to searchlight analysis, and would appreciate some feedback:
>
>- Normally, I perform classification based on beta-files. Is it
>possible to
>perform the searchlight on these images, or is it required to input the
>4-D
>epi-volumes?
>
>- If the latter is true, how are the names/onsets defined as an input
>for
>the searchlight?
>
>Many thanks,
>Pegah

--
Sent from a phone which beats iPhone.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ivsstud at gmail.com Fri Sep 8 05:08:38 2017
From: ivsstud at gmail.com (Ilya Sysoev)
Date: Fri, 8 Sep 2017 09:08:38 +0400
Subject: [pymvpa] Granger causality analysis
Message-ID:

Dear all!

I am an informal leader of a small research group specializing in data analysis, mainly involving the Granger causality approach (but also other measures like mutual information/transfer entropy, phase synchronization, nonlinear correlation, surrogate generation for significance testing, etc.), applied to local field potentials in WAG/Rij rats (genetic models of absence epilepsy) and ordinary Wistar rats (studying limbic seizures). The methods are mainly original and written in Python (but not exclusively; there is also some Pascal, Fortran, and SciLab code). If it is of interest, we could integrate our methods into the framework, since we are looking to popularize them.

For convenience, please find my ResearchGate profile:
https://www.researchgate.net/profile/Ilya_Sysoev
My ResearcherID: D-5930-2013

Will be pleased to contribute.

Best regards,
Ilya Sysoev.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pegahkf at gmail.com Fri Sep 8 05:58:24 2017
From: pegahkf at gmail.com (Pegah Kassraian Fard)
Date: Fri, 8 Sep 2017 07:58:24 +0200
Subject: [pymvpa] searchlight analysis
In-Reply-To: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com>
References: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com>
Message-ID:

That would be great.

- Files (whole brain - not yet masked) are here
- Targets/labels are in the labels.mat file; it is binary classification, so one class's samples are labeled "1" and the other class's "2"; the "7" are just the head movement parameters and can be ignored for classification
- I have included a struct image of the subject (subject2_struct.nii), as well as a mask for primary sensory cortex (S1_mask.nii)

I was attempting to reproduce this:
http://www.pymvpa.org/examples/searchlight.html
including the plots, but my highest accuracies (1-error) were 1. outside of the brain:)) 2.
I was not sure what the necessary background files were (I tried to guess and use similar ones I found elsewhere, but there was a dimension mismatch - is there a resampling method in the toolbox to match the dimensions?) Many thanks, let me know if I can provide my (buggy?) script Pegah On Fri, Sep 8, 2017 at 12:07 AM, Yaroslav Halchenko wrote: > Quick one... Just give a list of files to fmri_dataset pointing to those > 3d volumes, and give a list of targets of matching length > > > On September 7, 2017 4:09:19 PM EDT, Pegah Kassraian Fard < > pegahkf at gmail.com> wrote: >> >> Hi, >> >> I am new to searchlight analysis, and would appreciate some feedback: >> >> - Normally, I perform classification based on beta-files. Is it possible >> to perform the searchlight on these images, or is it required to input the >> 4-D epi-volumes? >> >> - If the later is true, how are the names/onsets defined as an input for >> the searchlight? >> >> Many thanks, >> Pegah >> > > -- > Sent from a phone which beats iPhone. > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Fri Sep 8 11:53:01 2017 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Fri, 8 Sep 2017 13:53:01 +0200 Subject: [pymvpa] searchlight analysis In-Reply-To: References: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com> Message-ID: <81B5B8EF-54EA-4EA4-9E49-7DDB33449977@googlemail.com> > On 8 Sep 2017, at 07:58, Pegah Kassraian Fard wrote: > > That would be great. > > - Files (whole brain - not yet masked) are here > > - Targets/labels are in the labels.mat file, it is binary classification, so one classes' samples are labeled by "1", the other classes by "2", the "7" are just the head movement parameters and can be ignored for classification How did you set the 'chunks' (in .sa.chunks)? Are the different beta files from different runs? Typically in fMRI each run has a unique value for the chunks sample attribute, for example by setting the value of chunks for each sample to the corresponding run number. In any case, setting the chunks correctly is important because they affect how crossvalidation is (or can be) done. From debian at onerussian.com Fri Sep 8 12:53:10 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Fri, 8 Sep 2017 08:53:10 -0400 Subject: [pymvpa] searchlight analysis In-Reply-To: References: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com> Message-ID: <20170908125310.syeijb362t3ggnif@hopa.kiewit.dartmouth.edu> On Fri, 08 Sep 2017, Pegah Kassraian Fard wrote: > That would be great. > - Files (whole brain - not yet masked) are here > - Targets/labels are in the labels.mat file, it is binary classification, > so one classes' samples are labeled by "1", the other classes by "2", the > "7" are just the head movement parameters and can be ignored for > classification > - I have included a struct image of the subject (subject2_struct.nii), as > well as a mask for primary sensory cortex (S1_mask.nii) > I was attempting to reproduce this: > http://www.pymvpa.org/examples/searchlight.html > including the plots, but my highest accuracies (1-error) were 1. 
outside > of the brain:)) ;) that is the fun of using all data (not just masked) -- you could potentially discover interesting aspects of your data/design/code ;-) does the rest of the map look "feasible" given your experiment? but before we get that excited, let's indeed see the code: > 2. I was not sure what the necessary background files were > (I tried to guess and use similar ones I found elsewhere, but there was a > dimension mismatch - is there a resampling method in the toolbox to match > the dimensions?) > Many thanks, let me know if I can provide my (buggy?) script sure -- just paste it in here (make sure no wrapping) or post it somewhere online -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From pegahkf at gmail.com Fri Sep 8 15:52:15 2017 From: pegahkf at gmail.com (Pegah Kassraian Fard) Date: Fri, 8 Sep 2017 17:52:15 +0200 Subject: [pymvpa] searchlight analysis In-Reply-To: <20170908125310.syeijb362t3ggnif@hopa.kiewit.dartmouth.edu> References: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com> <20170908125310.syeijb362t3ggnif@hopa.kiewit.dartmouth.edu> Message-ID: - It's not our model's fault:-)! More seriously, we have checked the basic contrasts in SPM, and I had also performed previously "default" classification (linear SVM over all voxels in a relevant ROI vs. classification in a "non-relevant" area as sanity check, permutation tests of classification labels etc.). I have also picked a very simple design for starters (it is a localizer, two kinds of sensory stimulation delivered in random order, and jittered rest in between) - Regarding chunks: We actually have two runs (separated by the "7" labels in the middle - so 15 x "1" for class 1, 15 x "2" for class 2, then 6 x "7" indicating head-movement parameters, then the next run: again 15 x "1" for class 1, 15 x "2" for class 2, 8 x "7" indicating head-movement parameters and regression intercepts). Though I understood that chunks can also be defined if there is only one run, through sensible partitioning the data (without obviously losing the mapping to the labels/targets etc.) 
- Below is the code (I am new to Python; hope it's +/- readable). I have also attached it (in the last step the plotting, which is also not yet functional, is added as well; all the necessary files are in the link already shared previously):

#

from glob import glob
import os
import numpy as np

from mvpa2.suite import *

%matplotlib inline


# enable debug output for searchlight call
if __debug__:
    debug.active += ["SLC"]


# change working directory to 'WB'
os.chdir('mypath/WB')

# use glob to get the filenames of .nii data into a list
nii_fns = glob('beta*.nii')

# read data

labels = [
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
    2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
    7, 7, 7, 7, 7, 7, 7,
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
    2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
    7, 7, 7, 7, 7, 7, 7
]
grps = np.repeat([0, 1], 37, axis=0)  # used for `chunks`

db = mvpa2.datasets.mri.fmri_dataset(
    nii_fns, targets=labels, chunks=grps, mask=None, sprefix='vxl', tprefix='tpref', add_fa=None
)

# use only the samples of which labels are 1 or 2
db12 = db[np.array([label in [1, 2] for label in labels], dtype='bool')]

# in-place z-score normalization
zscore(db12)

# choose classifier
clf = LinearNuSVMC()

# setup measure to be computed by Searchlight
# cross-validated mean transfer using an N-fold dataset splitter
cv = CrossValidation(clf, NFoldPartitioner())

# define searchlight methods
radius_ = 1
sl = sphere_searchlight(
    cv, radius=radius_, space='vxl_indices',
    postproc=mean_sample()
)

# strip all attributes from the dataset that are not required for the searchlight analysis
db12s = db12.copy(
    deep=False,
    sa=['targets', 'chunks'],
    fa=['vxl_indices'],
    a=['mapper']
)

# run searchlight
sl_map = sl(db12s)

# toggle between error rate (searchlight output) and accuracy (for plotting)
sl_map.samples *= -1
sl_map.samples += 1

# The result dataset is fully aware of the original dataspace.
# Using this information we can map the 1D accuracy maps back into 'brain-space'
# (using NIfTI image header information from the original input timeseries).
niftiresults = map2nifti(sl_map, imghdr=db12.a.imghdr)

# find the 3-d coordinates of extrema of error rate or accuracy
extremum_i = np.argmax(sl_map.samples[0])  # max
extremum_i = np.argmin(sl_map.samples[0])  # min
coord = db12s.fa[list(db.fa.keys())[0]][extremum_i]

# plotting
plot_args = {
    # 'background' : 'Subject2_Struct.nii',
    # 'background_mask' : 'brain.nii.gz',
    # 'overlay_mask' : 'S1_mask.nii',
    'do_stretch_colors' : False,
    'cmap_bg' : 'gray',
    'cmap_overlay' : 'autumn',  # YlOrRd_r # pl.cm.autumn
    'interactive' : cfg.getboolean('examples', 'interactive', True)
}

fig = pl.figure(figsize=(12, 4), facecolor='white')

subfig = plot_lightbox(
    overlay=niftiresults,
    vlim=(0.5, None), slices=range(23,31),
    fig=fig,
    background='Subject2_Struct.nii',
    # background_mask='brain.nii.gz',
    # overlay_mask='S1_mask.nii',
    **plot_args
)

pl.title('Accuracy distribution for radius %i' % radius_)

#

On Fri, Sep 8, 2017 at 2:53 PM, Yaroslav Halchenko wrote:

> On Fri, 08 Sep 2017, Pegah Kassraian Fard wrote:
>
> > That would be great.
> > - Files (whole brain - not yet masked) are here > > - Targets/labels are in the labels.mat file, it is binary > classification, > > so one classes' samples are labeled by "1", the other classes by "2", > the > > "7" are just the head movement parameters and can be ignored for > > classification > > - I have included a struct image of the subject > (subject2_struct.nii), as > > well as a mask for primary sensory cortex (S1_mask.nii) > > I was attempting to reproduce this: > > http://www.pymvpa.org/examples/searchlight.html > > including the plots, but my highest accuracies (1-error) were 1. > outside > > of the brain:)) > > ;) that is the fun of using all data (not just masked) -- you could > potentially discover interesting aspects of your data/design/code ;-) > does the rest of the map look "feasible" given your experiment? > > but before we get that excited, let's indeed see the code: > > > 2. I was not sure what the necessary background files were > > (I tried to guess and use similar ones I found elsewhere, but there > was a > > dimension mismatch - is there a resampling method in the toolbox to > match > > the dimensions?) > > Many thanks, let me know if I can provide my (buggy?) script > > sure -- just paste it in here (make sure no wrapping) or post it > somewhere online > > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: search_light.py Type: text/x-python-script Size: 2966 bytes Desc: not available URL: From n.n.oosterhof at googlemail.com Fri Sep 8 16:07:01 2017 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Fri, 8 Sep 2017 18:07:01 +0200 Subject: [pymvpa] searchlight analysis In-Reply-To: References: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com> <20170908125310.syeijb362t3ggnif@hopa.kiewit.dartmouth.edu> Message-ID: <631BA32F-4504-4A00-A143-90BF1BB60544@googlemail.com> Some minor comments inserted: > On 8 Sep 2017, at 17:52, Pegah Kassraian Fard wrote: > > > from glob import glob > import os > import numpy as np > > from mvpa2.suite import * > > %matplotlib inline > > > # enable debug output for searchlight call > if __debug__: > debug.active += ["SLC"] > > > # change working directory to 'WB' > os.chdir('mypath/WB') > > # use glob to get the filenames of .nii data into a list > nii_fns = glob('beta*.nii') > > # read data > > labels = [ > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, > 7, 7, 7, 7, 7, 7, 7, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, > 7, 7, 7, 7, 7, 7, 7 > ] > grps = np.repeat([0, 1], 37, axis=0) # used for `chuncks` > > db = mvpa2.datasets.mri.fmri_dataset( > nii_fns, targets=labels, chunks=grps, mask=None, sprefix='vxl', tprefix='tpref', add_fa=None > ) Is there a reason not to use a mask? At least a brain mask to avoid stuff stuff like skull and air? 
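A small sketch of the brain-mask suggestion above, using the same call as in the posted script; the mask filename is an assumption:

db = mvpa2.datasets.mri.fmri_dataset(
    nii_fns, targets=labels, chunks=grps,
    mask='brain_mask.nii',   # restrict features to in-brain voxels instead of mask=None
    sprefix='vxl', tprefix='tpref', add_fa=None
)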
> > # use only the samples of which labels are 1 or 2 > db12 = db[np.array([label in [1, 2] for label in labels], dtype='bool')] > > # in-place z-score normalization > zscore(db12) > > # choose classifier > clf = LinearNuSVMC() Have you tried a different classifier, for example Naive Bayes? That one is simpler (though usually a bit less sensitive than SVM / LDA in my experience)? > > # setup measure to be computed by Searchlight > # cross-validated mean transfer using an N-fold dataset splitter > cv = CrossValidation(clf, NFoldPartitioner()) > > # define searchlight methods > radius_ = 1 That's a tiny radius - why not use something like 3? From pegahkf at gmail.com Fri Sep 8 16:16:35 2017 From: pegahkf at gmail.com (Pegah Kassraian Fard) Date: Fri, 8 Sep 2017 18:16:35 +0200 Subject: [pymvpa] searchlight analysis In-Reply-To: <631BA32F-4504-4A00-A143-90BF1BB60544@googlemail.com> References: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com> <20170908125310.syeijb362t3ggnif@hopa.kiewit.dartmouth.edu> <631BA32F-4504-4A00-A143-90BF1BB60544@googlemail.com> Message-ID: Many thanks for the quick feedback. Mask, e.g. the S1 mask, could be used. Originally I ran it on already masked data in fact, now I wanted to provide you with unchanged, original data. Radius and classifier type: Could be changed, though I believe that SVM are well suited for fMRI data (inherent regularization etc.). Missed out on radius, 3 would be more cost-efficient, thx! Though I was/am mostly concerned with having first a correctly working pipeline, hence as for now I have not paid too much attention to different variations of classification. Cheers, Pegah On Fri, Sep 8, 2017 at 6:07 PM, Nick Oosterhof wrote: > Some minor comments inserted: > > > On 8 Sep 2017, at 17:52, Pegah Kassraian Fard wrote: > > > > > > from glob import glob > > import os > > import numpy as np > > > > from mvpa2.suite import * > > > > %matplotlib inline > > > > > > # enable debug output for searchlight call > > if __debug__: > > debug.active += ["SLC"] > > > > > > # change working directory to 'WB' > > os.chdir('mypath/WB') > > > > # use glob to get the filenames of .nii data into a list > > nii_fns = glob('beta*.nii') > > > > # read data > > > > labels = [ > > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, > > 7, 7, 7, 7, 7, 7, 7, > > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, > > 7, 7, 7, 7, 7, 7, 7 > > ] > > grps = np.repeat([0, 1], 37, axis=0) # used for `chuncks` > > > > db = mvpa2.datasets.mri.fmri_dataset( > > nii_fns, targets=labels, chunks=grps, mask=None, sprefix='vxl', > tprefix='tpref', add_fa=None > > ) > > Is there a reason not to use a mask? At least a brain mask to avoid stuff > stuff like skull and air? > > > > > # use only the samples of which labels are 1 or 2 > > db12 = db[np.array([label in [1, 2] for label in labels], dtype='bool')] > > > > # in-place z-score normalization > > zscore(db12) > > > > # choose classifier > > clf = LinearNuSVMC() > > Have you tried a different classifier, for example Naive Bayes? That one > is simpler (though usually a bit less sensitive than SVM / LDA in my > experience)? > > > > > # setup measure to be computed by Searchlight > > # cross-validated mean transfer using an N-fold dataset splitter > > cv = CrossValidation(clf, NFoldPartitioner()) > > > > # define searchlight methods > > radius_ = 1 > > That's a tiny radius - why not use something like 3? 
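A sketch combining the two suggestions quoted above (a simpler classifier and a larger radius); the `space` name follows the sprefix='vxl' used in the posted script:

from mvpa2.suite import GNB, CrossValidation, NFoldPartitioner, sphere_searchlight, mean_sample

clf = GNB()   # Gaussian Naive Bayes, fast and often adequate for searchlights
cv = CrossValidation(clf, NFoldPartitioner())
sl = sphere_searchlight(cv, radius=3, space='vxl_indices', postproc=mean_sample())
# sl_map = sl(db12s)   # db12s being the stripped-down dataset from the script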
> > > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Fri Sep 8 16:26:59 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Fri, 8 Sep 2017 12:26:59 -0400 Subject: [pymvpa] searchlight analysis In-Reply-To: References: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com> <20170908125310.syeijb362t3ggnif@hopa.kiewit.dartmouth.edu> Message-ID: <20170908162659.4ewvlb73q2fkcjvp@hopa.kiewit.dartmouth.edu> On Fri, 08 Sep 2017, Pegah Kassraian Fard wrote: > from glob import glob > import os > import numpy as np > from mvpa2.suite import * > %matplotlib inline > # enable debug output for searchlight call > if __debug__: > debug.active += ["SLC"] > # change working directory to 'WB' > os.chdir('mypath/WB') > # use glob to get the filenames of .nii data into a list > nii_fns = glob('beta*.nii') glob order iirc might not be guaranteed, so I would sort it to make sure nii_fns = sorted(glob('beta*.nii')) > # read data > labels = [ > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, > 7, 7, 7, 7, 7, 7, 7, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, > 7, 7, 7, 7, 7, 7, 7 > ] and make sure that above order of the volumes correspond to those labels > grps = np.repeat([0, 1], 37, axis=0) # used for `chuncks` > db = mvpa2.datasets.mri.fmri_dataset( > nii_fns, targets=labels, chunks=grps, mask=None, sprefix='vxl', tprefix='tpref', add_fa=None > ) > # use only the samples of which labels are 1 or 2 > db12 = db[np.array([label in [1, 2] for label in labels], dtype='bool')] more concise way db12 = db.select(sadict={'targets': [1,2]}) > # in-place z-score normalization > zscore(db12) > # choose classifier > clf = LinearNuSVMC() > # setup measure to be computed by Searchlight > # cross-validated mean transfer using an N-fold dataset splitter > cv = CrossValidation(clf, NFoldPartitioner()) > # define searchlight methods > radius_ = 1 > sl = sphere_searchlight( > cv, radius=radius_, space='vxl_indices', > postproc=mean_sample() > ) > # stripping all attributes from the dataset that are not required for the searchlight analysis > db12s = db12.copy( > deep=False, > sa=['targets', 'chunks'], > fa=['vxl_indices'], > a=['mapper'] > ) > # run searchlight > sl_map = sl(db12s) > # toggle between error rate (searchlight output) and accuracy (for plotting) > sl_map.samples *= -1 > sl_map.samples += 1 > # The result dataset is fully aware of the original dataspace. > # Using this information we can map the 1D accuracy maps back into ?brain-space? (using NIfTI image header information from the original input timeseries. > niftiresults = map2nifti(sl_map, imghdr=db12.a.imghdr) > # find the 3-d coordinates of extrema of error rate or accuracy > extremum_i = np.argmax(sl_map.samples[0]) # max > extremum_i = np.argmin(sl_map.samples[0]) # min note that above you override the same extremum_i with 'min' (minimal accuracy ;)) !!! > coord = db12s.fa[list(db.fa.keys())[0]][extremum_i] .keys() order is also not guaranteed... why not to specify 'vxl_indices'? 
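Pulled together, the review comments above amount to a handful of small changes to the posted script (same variable names as there):

nii_fns = sorted(glob('beta*.nii'))            # deterministic file order, so volumes and labels stay paired
db12 = db.select(sadict={'targets': [1, 2]})   # concise selection of the two conditions of interest

best_i = np.argmax(sl_map.samples[0])          # keep max and min in separate variables
worst_i = np.argmin(sl_map.samples[0])         # instead of overwriting one of them
best_coord = db12s.fa['vxl_indices'][best_i]   # name the feature attribute explicitly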
> # plotting > plot_args = { > # 'background' : 'Subject2_Struct.nii', > # 'background_mask' : 'brain.nii.gz', > # 'overlay_mask' : 'S1_mask.nii', > 'do_stretch_colors' : False, > 'cmap_bg' : 'gray', > 'cmap_overlay' : 'autumn', # YlOrRd_r # pl.cm.autumn > 'interactive' : cfg.getboolean('examples', 'interactive', True) > } > fig = pl.figure(figsize=(12, 4), facecolor='white') > subfig = plot_lightbox( > overlay=niftiresults, > vlim=(0.5, None), slices=range(23,31), > fig=fig, > background='Subject2_Struct.nii', > # background_mask='brain.nii.gz', > # overlay_mask='S1_mask.nii', > **plot_args > ) > pl.title('Accuracy distribution for radius %i' % radius_) -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From debian at onerussian.com Fri Sep 8 16:47:45 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Fri, 8 Sep 2017 12:47:45 -0400 Subject: [pymvpa] Granger causality analysis In-Reply-To: References: Message-ID: <20170908164745.rvv5nkzlvnokwadk@hopa.kiewit.dartmouth.edu> On Fri, 08 Sep 2017, ???? ?????? wrote: > Dear all! > I am a informal leader of a small research group, specializing in data > analysis, mainly including Granger causality approach (but also other > measures like mutual information/transfer entropy, phase synchronization, > nonlinear correlation, surrogate generation for testing for significance, > etc.), applied for local field potentials in WAG/Rij rats (genetic models > of absence epilepsy) and ordinary Wistar rats (studying limbic seizures). > The methods are mainly original and written in Python (but not > exclusively, also, where is some Pascal, Fortran, SciLab code). If it is > interesting, we could integrate our results in the Framework, since we are > looking to popularize them. > For convenience, please find my researchegate profile: > https://www.researchgate.net/profile/Ilya_Sysoev > My researcherid: > D-5930-2013 > Will be pleased to contribute. and we will be happy to accept your contributions and welcome you to the team ;-) We should probably create a CONTRIBUTING.md, similar to the one we have in other projects such as datalad/duecredit/etc, describing overall layout, use of duecredit, etc... which could help to orient you. We will do that soon(ish). Particular aspect is -- testing. All (new) code should get covered by tests so there is assurance that it works correctly now, and we would be able to maintain it in the working order in the long run. Do you have any particular algorithm, function, ... you would like to start with? ;-) Cheers, -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From n.n.oosterhof at googlemail.com Fri Sep 8 18:16:33 2017 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Fri, 8 Sep 2017 20:16:33 +0200 Subject: [pymvpa] Granger causality analysis In-Reply-To: <20170908164745.rvv5nkzlvnokwadk@hopa.kiewit.dartmouth.edu> References: <20170908164745.rvv5nkzlvnokwadk@hopa.kiewit.dartmouth.edu> Message-ID: > On 8 Sep 2017, at 18:47, Yaroslav Halchenko wrote: > > > On Fri, 08 Sep 2017, ???? ?????? wrote: > >> Dear all! >> [...] >> Will be pleased to contribute. 
> > and we will be happy to accept your contributions and welcome you to the > team ;-) Absolutely :-) > > We should probably create a CONTRIBUTING.md, A starting point for information on contributing: http://www.pymvpa.org/devguide.html#chap-devguide Also, looking at the closed (usually merged) pull requests may give some insights in typical workflows: https://github.com/PyMVPA/PyMVPA/pulls?q=is%3Aclosed From lapate at gmail.com Sat Sep 9 04:28:14 2017 From: lapate at gmail.com (Regina Lapate) Date: Fri, 8 Sep 2017 21:28:14 -0700 Subject: [pymvpa] additional data shuffling/cleaning after loading up data using fmri_dataset Message-ID: Hello all, I have the following newbie question: --Can one do regular python operations such as shuffling trials (or excluding trials with extreme outlier values) after loading up a dataset (nifti & targets) using *mvpa2.datasets.mri.fmri_dataset*? I assumed so, but upon trying to shuffle trials of a given condition using numpy: *np.random.shuffle(ds[ds.targets==1])* I get the following error message: *TypeError: 'Dataset' object does not support item assignment* Thoughts or suggestions on how to go about this? Thanks in advance for your help, Regina -------------- next part -------------- An HTML attachment was scrubbed... URL: From n.n.oosterhof at googlemail.com Sat Sep 9 12:03:55 2017 From: n.n.oosterhof at googlemail.com (Nick Oosterhof) Date: Sat, 9 Sep 2017 14:03:55 +0200 Subject: [pymvpa] additional data shuffling/cleaning after loading up data using fmri_dataset In-Reply-To: References: Message-ID: <5FFA9939-0D8B-4534-AC63-01EBCB47F06D@googlemail.com> > On 9 Sep 2017, at 06:28, Regina Lapate wrote: > > --Can one do regular python operations such as shuffling trials (or excluding trials with extreme outlier values) after loading up a dataset (nifti & targets) using mvpa2.datasets.mri.fmri_dataset? > > I assumed so, but upon trying to shuffle trials of a given condition using numpy: > np.random.shuffle(ds[ds.targets==1]) Shuffling can mean at least two things in this context: 1) randomly re-order the order of the samples and the associated sample attributes; this can be achieved by simple indexing. For example, if a dataset ds has 4 samples, then ds[[3,2,1,0]] would reverse the order of the samples and the associated sample attributes in .sa. Also, ds[[0,2]] would select the first and third sample. 2) randomly change condition labels (targets), for example to generate a null distribution. AttributePermutator in mvpa2.generators.permutation may be helpful for this. Which one applies to your question? For the second option: My personal preferred strategy would be to split the dataset by unique chunks, then randomly re-assign targets for each sub-dataset, and then stack these sub-datasets back into a big dataset. This seems better than the 'simple' strategy - at least in an fMRI context - because that can break independence assumptions. However I did not find this option available (using strategy='chunks' gave an error). Maybe I missed it - or if not, we may consider adding it. From ivsstud at gmail.com Sun Sep 10 11:33:52 2017 From: ivsstud at gmail.com (=?UTF-8?B?0JjQu9GM0Y8g0KHRi9GB0L7QtdCy?=) Date: Sun, 10 Sep 2017 15:33:52 +0400 Subject: [pymvpa] Granger causality analysis In-Reply-To: <20170908164745.rvv5nkzlvnokwadk@hopa.kiewit.dartmouth.edu> References: <20170908164745.rvv5nkzlvnokwadk@hopa.kiewit.dartmouth.edu> Message-ID: Dear Yaroslav and Nick! Thank you very much for accepting our initiative. 
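Going back to the dataset-shuffling exchange above, a minimal sketch of the two operations described there, assuming a loaded dataset `ds` with .sa.targets and .sa.chunks:

import numpy as np
from mvpa2.generators.permutation import AttributePermutator

# 1) re-order the samples (their sample attributes travel with them) by fancy indexing
idx = np.random.permutation(len(ds))
ds_shuffled = ds[idx]

# 2) permute condition labels, e.g. to build a null distribution; whether the
#    `limit` argument gives the desired within-chunk permutation is an assumption
#    worth verifying against the installed version
permutator = AttributePermutator('targets', count=1, limit='chunks')
ds_permuted = next(permutator.generate(ds))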
I studied the starting points for contribution you mentioned and found that our code now is very far from your standards. However this could be fixed. We will continue to study your code and instructions. What I plan for now is: 1) apply for some grant for support, possible Russian Foundation for Basic Research or Russian Science Foundation (if you are not against this, I am going to mention in the proposal, that you gave your acceptance for the idea in general), this could give us some support in future, but even being successful, only next year; 2) start from the only measure: adapted nonlinear Granger causality, for which we published most of our papers; 3) slowly start making environment and speak to people from the team to find, who is interested in this and can be responsible for some minor tasks; since I do not have any actual "power" to press them, being the ordinary university lecturer, I have to consult with everyone. Best regards, Ilya Sysoev. 2017-09-08 20:47 GMT+04:00 Yaroslav Halchenko : > > On Fri, 08 Sep 2017, ???? ?????? wrote: > > > Dear all! > > I am a informal leader of a small research group, specializing in data > > analysis, mainly including Granger causality approach (but also other > > measures like mutual information/transfer entropy, phase > synchronization, > > nonlinear correlation, surrogate generation for testing for > significance, > > etc.), applied for local field potentials in WAG/Rij rats (genetic > models > > of absence epilepsy) and ordinary Wistar rats (studying limbic > seizures). > > The methods are mainly original and written in Python (but not > > exclusively, also, where is some Pascal, Fortran, SciLab code). If it > is > > interesting, we could integrate our results in the Framework, since > we are > > looking to popularize them. > > For convenience, please find my researchegate profile: > > https://www.researchgate.net/profile/Ilya_Sysoev > > My researcherid: > > D-5930-2013 > > Will be pleased to contribute. > > and we will be happy to accept your contributions and welcome you to the > team ;-) > > We should probably create a CONTRIBUTING.md, similar to the one we have > in other projects such as datalad/duecredit/etc, describing overall > layout, use of duecredit, etc... which could help to orient you. We > will do that soon(ish). Particular aspect is -- testing. All (new) code > should get covered by tests so there is assurance that it works > correctly now, and we would be able to maintain it in the working order > in the long run. > > Do you have any particular algorithm, function, ... you would like to > start with? ;-) > > Cheers, > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pegahkf at gmail.com Tue Sep 12 13:38:07 2017 From: pegahkf at gmail.com (Pegah Kassraian Fard) Date: Tue, 12 Sep 2017 15:38:07 +0200 Subject: [pymvpa] searchlight analysis In-Reply-To: References: <55F69F77-CF0F-407C-BF6D-F532BFA267CC@onerussian.com> <20170908125310.syeijb362t3ggnif@hopa.kiewit.dartmouth.edu> <20170908162659.4ewvlb73q2fkcjvp@hopa.kiewit.dartmouth.edu> Message-ID: Many thanks for all the feedback. I get the highest accuracy at dimension [15, 45, 65], not sure if everything is correct - logically in the code now. I am yet classifying whole brain, just as a check. Also, the plotting seems completely off...and the output either non-existent or really weird. Is there any chance anyone could quickly run the code? Or would you have any other suggestions? Best regards, Pegah *- Newest code below* *- Data is here: **https://www.dropbox.com/sh/6qnrt2l2othc83g/AADBpc-eJSK5893Vrz_A8BS1a?dl=0 * Code: # from glob import glob import os import numpy as np from mvpa2.suite import * %matplotlib inline # enable debug output for searchlight call if __debug__: debug.active += ["SLC"] # change working HERE! os.chdir('mypath/S1') # use glob to get the filenames of .nii data into a list nii_fns = glob('beta*.nii') # read data labels = [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 7, 7, 7, 7, 7, 7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 7, 7, 7, 7, 7, 7, 7, 7 ] grps = np.repeat([0, 1], 37, axis=0) # used for `chuncks` sprefix_ = 'vxl' sprefix_indices_key = '_'.join([sprefix_, 'indices']) tprefix_ = 'tpref' db = mvpa2.datasets.mri.fmri_dataset( nii_fns, targets=labels, chunks=grps, mask=None, sprefix=sprefix_, tprefix=tprefix_, add_fa=None ) # use only the samples of which labels are 1 or 2 db12 = db.select(sadict={'targets': [1,2]}) # in-place z-score normalization zscore(db12) # choose classifier clf = LinearNuSVMC() # setup measure to be computed by Searchlight # cross-validated mean transfer using an N-fold dataset splitter cv = CrossValidation(clf, NFoldPartitioner()) # define searchlight methods radius_ = 3 sl = sphere_searchlight( cv, radius=radius_, space=sprefix_indices_key, postproc=mean_sample() ) # stripping all attributes from the dataset that are not required for the searchlight analysis db12s = db12.copy( deep=False, sa=['targets', 'chunks'], fa=[sprefix_indices_key], a=['mapper'] ) # run searchlight sl_map = sl(db12s) # toggle between error rate (searchlight output) and accuracy (for plotting) sl_map.samples *= -1 sl_map.samples += 1 # The result dataset is fully aware of the original dataspace. # Using this information we can map the 1D accuracy maps back into ?brain-space? (using NIfTI image header information from the original input timeseries. 
niftiresults = map2nifti(sl_map, imghdr=db12.a.imghdr) # find the 3-d coordinates of extrema of error rate or accuracy extremum_i = np.argmax(sl_map.samples[0]) # max extremum_i = np.argmin(sl_map.samples[0]) # min coord = db12s.fa[sprefix_indices_key][extremum_i] # plotting plot_args = { 'background' : 'Struct.nii', 'background_mask' : 'brain.nii', 'overlay_mask' : 'S1_mask.nii', 'do_stretch_colors' : False, 'cmap_bg' : 'gray', 'cmap_overlay' : 'autumn', # YlOrRd_r # pl.cm.autumn 'interactive' : cfg.getboolean('examples', 'interactive', True) } fig = pl.figure(figsize=(12, 4), facecolor='white') subfig = plot_lightbox( overlay=niftiresults, vlim=(0.5, None), slices=range(23,31), fig=fig, background=Struct.nii', background_mask='brain.nii', overlay_mask='S1_mask.nii', **plot_args ) pl.title('Accuracy distribution for radius %i' % radius_) # On Sat, Sep 9, 2017 at 5:29 PM, Pegah Kassraian Fard wrote: > Many thanks for all the feedback. I get the highest accuracy at dimension > [15, 45, 65], not sure if everything is correct - logically in the code now > - the plotting seems completely off. Is there any chance anyone could > quickly run the code? Or would you have any other suggestions? > Best regards, > Pegah > > *Attached:* > *- Newest code* > *- Output* > *- Plot* > > > Code also here: > > # > > from glob import glob > import os > import numpy as np > > from mvpa2.suite import * > > %matplotlib inline > > > # enable debug output for searchlight call > if __debug__: > debug.active += ["SLC"] > > > # change working directory to 'S1' > os.chdir('mypath/S1') > > # use glob to get the filenames of .nii data into a list > nii_fns = glob('beta*.nii') > > > # read data > > labels = [ > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, > 7, 7, 7, 7, 7, 7, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, > 7, 7, 7, 7, 7, 7, 7, 7 > ] > grps = np.repeat([0, 1], 37, axis=0) # used for `chuncks` > > sprefix_ = 'vxl' > sprefix_indices_key = '_'.join([sprefix_, 'indices']) > tprefix_ = 'tpref' > db = mvpa2.datasets.mri.fmri_dataset( > nii_fns, targets=labels, chunks=grps, mask=None, sprefix=sprefix_, > tprefix=tprefix_, add_fa=None > ) > > > # use only the samples of which labels are 1 or 2 > db12 = db.select(sadict={'targets': [1,2]}) > > # in-place z-score normalization > zscore(db12) > > > # choose classifier > clf = LinearNuSVMC() > > # setup measure to be computed by Searchlight > # cross-validated mean transfer using an N-fold dataset splitter > cv = CrossValidation(clf, NFoldPartitioner()) > > # define searchlight methods > radius_ = 3 > sl = sphere_searchlight( > cv, radius=radius_, space=sprefix_indices_key, > postproc=mean_sample() > ) > > # stripping all attributes from the dataset that are not required for the > searchlight analysis > db12s = db12.copy( > deep=False, > sa=['targets', 'chunks'], > fa=[sprefix_indices_key], > a=['mapper'] > ) > > # run searchlight > sl_map = sl(db12s) > > > # toggle between error rate (searchlight output) and accuracy (for > plotting) > sl_map.samples *= -1 > sl_map.samples += 1 > > # The result dataset is fully aware of the original dataspace. > # Using this information we can map the 1D accuracy maps back into > ?brain-space? (using NIfTI image header information from the original input > timeseries. 
> niftiresults = map2nifti(sl_map, imghdr=db12.a.imghdr) > > > # find the 3-d coordinates of extrema of error rate or accuracy > extremum_i = np.argmax(sl_map.samples[0]) # max > extremum_i = np.argmin(sl_map.samples[0]) # min > coord = db12s.fa[sprefix_indices_key][extremum_i] > > > # plotting > > plot_args = { > # 'background' : 'Subject2_Struct.nii', > # 'background_mask' : 'brain.nii.gz', > # 'overlay_mask' : 'S1.nii', > 'do_stretch_colors' : False, > 'cmap_bg' : 'gray', > 'cmap_overlay' : 'autumn', # YlOrRd_r # pl.cm.autumn > 'interactive' : cfg.getboolean('examples', 'interactive', True) > } > > fig = pl.figure(figsize=(12, 4), facecolor='white') > > subfig = plot_lightbox( > overlay=niftiresults, > vlim=(0.5, None), slices=range(23,31), > fig=fig, > background='Subject2_Struct.nii', > # background_mask='brain.nii', > overlay_mask='S1_mask.nii', > **plot_args > ) > > pl.title('Accuracy distribution for radius %i' % radius_) > > > > > # > > > On Fri, Sep 8, 2017 at 6:26 PM, Yaroslav Halchenko > wrote: > >> >> On Fri, 08 Sep 2017, Pegah Kassraian Fard wrote: >> >> >> >> >> > from glob import glob >> > import os >> > import numpy as np >> >> > from mvpa2.suite import * >> >> > %matplotlib inline >> >> >> > # enable debug output for searchlight call >> > if __debug__: >> > debug.active += ["SLC"] >> >> >> > # change working directory to 'WB' >> > os.chdir('mypath/WB') >> >> > # use glob to get the filenames of .nii data into a list >> > nii_fns = glob('beta*.nii') >> >> glob order iirc might not be guaranteed, so I would sort it to make sure >> >> nii_fns = sorted(glob('beta*.nii')) >> >> > # read data >> >> > labels = [ >> > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, >> > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, >> > 7, 7, 7, 7, 7, 7, 7, >> > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, >> > 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, >> > 7, 7, 7, 7, 7, 7, 7 >> > ] >> >> and make sure that above order of the volumes correspond to those labels >> >> > grps = np.repeat([0, 1], 37, axis=0) # used for `chuncks` >> >> > db = mvpa2.datasets.mri.fmri_dataset( >> > nii_fns, targets=labels, chunks=grps, mask=None, sprefix='vxl', >> tprefix='tpref', add_fa=None >> > ) >> >> > # use only the samples of which labels are 1 or 2 >> > db12 = db[np.array([label in [1, 2] for label in labels], dtype='bool')] >> >> more concise way >> >> db12 = db.select(sadict={'targets': [1,2]}) >> >> > # in-place z-score normalization >> > zscore(db12) >> >> > # choose classifier >> > clf = LinearNuSVMC() >> >> > # setup measure to be computed by Searchlight >> > # cross-validated mean transfer using an N-fold dataset splitter >> > cv = CrossValidation(clf, NFoldPartitioner()) >> >> > # define searchlight methods >> > radius_ = 1 >> > sl = sphere_searchlight( >> > cv, radius=radius_, space='vxl_indices', >> > postproc=mean_sample() >> > ) >> >> > # stripping all attributes from the dataset that are not required for >> the searchlight analysis >> > db12s = db12.copy( >> > deep=False, >> > sa=['targets', 'chunks'], >> > fa=['vxl_indices'], >> > a=['mapper'] >> > ) >> >> > # run searchlight >> > sl_map = sl(db12s) >> >> > # toggle between error rate (searchlight output) and accuracy (for >> plotting) >> > sl_map.samples *= -1 >> > sl_map.samples += 1 >> >> > # The result dataset is fully aware of the original dataspace. >> > # Using this information we can map the 1D accuracy maps back into >> ?brain-space? (using NIfTI image header information from the original input >> timeseries. 
>> > niftiresults = map2nifti(sl_map, imghdr=db12.a.imghdr) >> >> > # find the 3-d coordinates of extrema of error rate or accuracy >> > extremum_i = np.argmax(sl_map.samples[0]) # max >> > extremum_i = np.argmin(sl_map.samples[0]) # min >> >> note that above you override the same extremum_i with 'min' (minimal >> accuracy ;)) !!! >> >> > coord = db12s.fa[list(db.fa.keys())[0]][extremum_i] >> >> .keys() order is also not guaranteed... why not to specify >> 'vxl_indices'? >> >> > # plotting >> > plot_args = { >> > # 'background' : 'Subject2_Struct.nii', >> > # 'background_mask' : 'brain.nii.gz', >> > # 'overlay_mask' : 'S1_mask.nii', >> > 'do_stretch_colors' : False, >> > 'cmap_bg' : 'gray', >> > 'cmap_overlay' : 'autumn', # YlOrRd_r # pl.cm.autumn >> > 'interactive' : cfg.getboolean('examples', 'interactive', True) >> > } >> > fig = pl.figure(figsize=(12, 4), facecolor='white') >> > subfig = plot_lightbox( >> > overlay=niftiresults, >> > vlim=(0.5, None), slices=range(23,31), >> > fig=fig, >> > background='Subject2_Struct.nii', >> > # background_mask='brain.nii.gz', >> > # overlay_mask='S1_mask.nii', >> > **plot_args >> > ) >> > pl.title('Accuracy distribution for radius %i' % radius_) >> >> >> >> >> >> >> >> -- >> Yaroslav O. Halchenko >> Center for Open Neuroscience http://centerforopenneuroscience.org >> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >> WWW: http://www.linkedin.com/in/yarik >> >> _______________________________________________ >> Pkg-ExpPsy-PyMVPA mailing list >> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anna.manelis at gmail.com Wed Sep 13 12:29:43 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Wed, 13 Sep 2017 08:29:43 -0400 Subject: [pymvpa] trouble importing mvpa2.suite Message-ID: Dear PYMVPA developers, I have installed mvpa2 on my Debian computer, but cannot import mvpa2.suite for some reason. >>> *import mvpa2.suite as mvpa2* Traceback (most recent call last): File "", line 1, in File "/usr/lib/pymodules/python2.7/mvpa2/suite.py", line 98, in from mvpa2.clfs.warehouse import * File "/usr/lib/pymodules/python2.7/mvpa2/clfs/warehouse.py", line 357, in sklPLSRegression = _skl_import('pls', 'PLSRegression') File "/usr/lib/pymodules/python2.7/mvpa2/clfs/warehouse.py", line 313, in _skl_import submod_ = __import__('sklearn.%s' % submod, fromlist=[submod]) ImportError: No module named pls There is no error message for >>> *import mvpa2 * Below is the output of >>> *mvpa2.wtf()* /usr/lib/python2.7/dist-packages/nose/util.py:14: DeprecationWarning: The compiler package is deprecated and removed in Python 3.x. from compiler.consts import CO_GENERATOR Current date: 2017-09-13 08:17 PyMVPA: Version: 2.1.0 Hash: 828f4b4bea2488f8c7e3f6c2446a445d77325338 Path: /usr/lib/pymodules/python2.7/mvpa2/__init__.pyc Version control (GIT): GIT information could not be obtained due "/usr/lib/pymodules/python2.7/mvpa2/.. 
is not under GIT" SYSTEM: OS: posix Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.89-2 Distribution: debian/7.11 EXTERNALS: Present: atlas_fsl, cPickle, ctypes, good scipy.stats.rdist, good scipy.stats.rv_discrete.ppf, griddata, gzip, h5py, ipython, liblapack.so, libsvm, libsvm verbosity control, lxml, matplotlib, mdp, mdp ge 2.4, nibabel, nose, numpy, pylab, pylab plottable, pywt, pywt wp reconstruct, reportlab, scipy, skl, statsmodels, weave Absent: atlas_pymvpa, cran-energy, elasticnet, glmnet, hcluster, lars, mass, nipy, nipy.neurospin, openopt, pprocess, pywt wp reconstruct fixed, rpy2, running ipython env, sg ge 0.6.4, sg ge 0.6.5, sg_fixedcachesize, shogun, shogun.krr, shogun.lightsvm, shogun.mpd, shogun.svmocas, shogun.svrlight Versions of critical externals: reportlab : 2.5 nibabel : 2.0.2 matplotlib : 1.1.1rc2 scipy : 0.10.1 ipython : 0.13.1 skl : 0.16.1 mdp : 3.4 numpy : 1.6.2 ctypes : 1.1.0 matplotlib : 1.1.1rc2 lxml : 2.3.2 nifti : failed to query due to "nifti is not a known dependency key." numpy : 1.6.2 pywt : 0.2.0 Matplotlib backend: TkAgg RUNTIME: PyMVPA Environment Variables: PYTHONPATH : ":/usr/lib/python2.7/lib-old:/usr/local/lib/python2.7/dist-packages:/data:/usr/lib/python2.7/lib-dynload:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/dist-packages/gtk-2.0:/home/manelisa/.python27_compiled:/usr/lib/python2.7/dist-packages:/usr/lib/pymodules/python2.7:/usr/lib/python2.7/dist-packages/IPython/extensions:/usr/lib/python2.7:/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/dist-packages/PIL" PyMVPA Runtime Configuration: [general] verbose = 1 [externals] have running ipython env = no have numpy = yes have scipy = yes have matplotlib = yes have h5py = yes have reportlab = yes have weave = yes have good scipy.stats.rdist = yes have good scipy.stats.rv_discrete.ppf = yes have pylab = yes have lars = no have elasticnet = no have glmnet = no have skl = yes have ctypes = yes have libsvm = yes have shogun = no have lxml = yes have nibabel = yes have atlas_fsl = yes have atlas_pymvpa = no have cpickle = yes have cran-energy = no have griddata = yes have gzip = yes have hcluster = no have ipython = yes have liblapack.so = yes have libsvm verbosity control = yes have mass = no have mdp = yes have mdp ge 2.4 = yes have nipy = no have nipy.neurospin = no have nose = yes have openopt = no have pprocess = no have pylab plottable = yes have pywt = yes have pywt wp reconstruct = yes have pywt wp reconstruct fixed = no have rpy2 = no have sg ge 0.6.4 = no have sg ge 0.6.5 = no have sg_fixedcachesize = no have shogun.krr = no have shogun.lightsvm = no have shogun.mpd = no have shogun.svmocas = no have shogun.svrlight = no have statsmodels = yes Process Information: Name: python State: R (running) Tgid: 11329 Pid: 11329 PPid: 9870 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 256 Groups: 24 25 29 30 44 46 104 109 112 1000 VmPeak: 661320 kB VmSize: 661320 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 111160 kB VmRSS: 111160 kB VmData: 128480 kB VmStk: 132 kB VmExe: 2396 kB VmLib: 64668 kB VmPTE: 1224 kB VmSwap: 0 kB Threads: 6 SigQ: 0/257267 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000001001000 SigCgt: 0000000180000002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: ffffffffffffffff Cpus_allowed: f Cpus_allowed_list: 0-3 Mems_allowed: 00000000,00000001 Mems_allowed_list: 0 voluntary_ctxt_switches: 1627 nonvoluntary_ctxt_switches: 217 >>> can't invoke 
"event" command: application has been destroyed while executing "event generate $w <>" (procedure "ttk::ThemeChanged" line 6) invoked from within "ttk::ThemeChanged" Any advice on how to fix the problem will be greatly appreciated. Thank you, Anna. -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Wed Sep 13 13:21:59 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Wed, 13 Sep 2017 09:21:59 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: References: Message-ID: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> On Wed, 13 Sep 2017, Anna Manelis wrote: > Dear PYMVPA developers, > I have installed mvpa2 on my Debian computer, but cannot import > mvpa2.suite for some reason. > >>> import mvpa2.suite as mvpa2 > Traceback (most recent call last): > ?? File "", line 1, in > ?? File "/usr/lib/pymodules/python2.7/mvpa2/suite.py", line 98, in > > ?????? from mvpa2.clfs.warehouse import * > ?? File "/usr/lib/pymodules/python2.7/mvpa2/clfs/warehouse.py", line 357, > in > ?????? sklPLSRegression = _skl_import('pls', 'PLSRegression') > ?? File "/usr/lib/pymodules/python2.7/mvpa2/clfs/warehouse.py", line 313, > in _skl_import > ?????? submod_ = __import__('sklearn.%s' % submod, fromlist=[submod]) > ImportError: No module named pls incompatibility with sklearn -- some API changes I guess we didn't spot could you Anna (howdoyoudobtw????) provide output of python -c 'import mvpa2; print mvpa2.wtf()' ? -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From anna.manelis at gmail.com Wed Sep 13 13:37:36 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Wed, 13 Sep 2017 09:37:36 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> Message-ID: Sure (doingprettywellandhowareyou?). Below is the output (I had a feeling I pasted it in the first e-mail but apparently not) *python -c 'import mvpa2; print mvpa2.wtf()'*/usr/lib/python2.7/dist-packages/nose/util.py:14: DeprecationWarning: The compiler package is deprecated and removed in Python 3.x. from compiler.consts import CO_GENERATOR Current date: 2017-09-13 09:34 PyMVPA: Version: 2.1.0 Hash: 828f4b4bea2488f8c7e3f6c2446a445d77325338 Path: /usr/lib/pymodules/python2.7/mvpa2/__init__.pyc Version control (GIT): GIT information could not be obtained due "/usr/lib/pymodules/python2.7/mvpa2/.. 
is not under GIT" SYSTEM: OS: posix Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.89-2 Distribution: debian/7.11 EXTERNALS: Present: atlas_fsl, cPickle, ctypes, good scipy.stats.rdist, good scipy.stats.rv_discrete.ppf, griddata, gzip, h5py, ipython, liblapack.so, libsvm, libsvm verbosity control, lxml, matplotlib, mdp, mdp ge 2.4, nibabel, nose, numpy, pylab, pylab plottable, pywt, pywt wp reconstruct, reportlab, scipy, skl, statsmodels, weave Absent: atlas_pymvpa, cran-energy, elasticnet, glmnet, hcluster, lars, mass, nipy, nipy.neurospin, openopt, pprocess, pywt wp reconstruct fixed, rpy2, running ipython env, sg ge 0.6.4, sg ge 0.6.5, sg_fixedcachesize, shogun, shogun.krr, shogun.lightsvm, shogun.mpd, shogun.svmocas, shogun.svrlight Versions of critical externals: nibabel : 2.0.2 reportlab : 2.5 matplotlib : 1.1.1rc2 scipy : 0.10.1 ipython : 0.13.1 skl : 0.16.1 mdp : 3.4 numpy : 1.6.2 ctypes : 1.1.0 matplotlib : 1.1.1rc2 lxml : 2.3.2 nifti : failed to query due to "nifti is not a known dependency key." numpy : 1.6.2 pywt : 0.2.0 Matplotlib backend: TkAgg RUNTIME: PyMVPA Environment Variables: PYTHONPATH : ":/usr/lib/python2.7/lib-old:/usr/local/lib/python2.7/dist-packages:/data:/usr/lib/python2.7/lib-dynload:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/dist-packages/gtk-2.0:/usr/lib/python2.7/dist-packages:/usr/lib/pymodules/python2.7:/usr/lib/python2.7/dist-packages/IPython/extensions:/usr/lib/python2.7:/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/dist-packages/PIL" PyMVPA Runtime Configuration: [general] verbose = 1 [externals] have running ipython env = no have numpy = yes have scipy = yes have matplotlib = yes have lxml = yes have nibabel = yes have atlas_fsl = yes have atlas_pymvpa = no have cpickle = yes have cran-energy = no have ctypes = yes have elasticnet = no have glmnet = no have good scipy.stats.rdist = yes have good scipy.stats.rv_discrete.ppf = yes have griddata = yes have gzip = yes have h5py = yes have hcluster = no have ipython = yes have lars = no have liblapack.so = yes have pylab = yes have libsvm = yes have libsvm verbosity control = yes have mass = no have mdp = yes have mdp ge 2.4 = yes have nipy = no have nipy.neurospin = no have nose = yes have openopt = no have pprocess = no have pylab plottable = yes have pywt = yes have pywt wp reconstruct = yes have pywt wp reconstruct fixed = no have reportlab = yes have rpy2 = no have sg ge 0.6.4 = no have sg ge 0.6.5 = no have sg_fixedcachesize = no have shogun = no have shogun.krr = no have shogun.lightsvm = no have shogun.mpd = no have shogun.svmocas = no have shogun.svrlight = no have skl = yes have statsmodels = yes have weave = yes Process Information: Name: python State: R (running) Tgid: 13804 Pid: 13804 PPid: 13722 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 256 Groups: 24 25 29 30 44 46 104 109 112 1000 VmPeak: 647452 kB VmSize: 647452 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 107696 kB VmRSS: 107696 kB VmData: 125224 kB VmStk: 132 kB VmExe: 2396 kB VmLib: 64348 kB VmPTE: 1188 kB VmSwap: 0 kB Threads: 6 SigQ: 0/257267 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000001001000 SigCgt: 0000000180000002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: ffffffffffffffff Cpus_allowed: f Cpus_allowed_list: 0-3 Mems_allowed: 00000000,00000001 Mems_allowed_list: 0 voluntary_ctxt_switches: 426 nonvoluntary_ctxt_switches: 123 On Wed, Sep 13, 2017 at 9:21 AM, Yaroslav Halchenko 
wrote: > > On Wed, 13 Sep 2017, Anna Manelis wrote: > > > Dear PYMVPA developers, > > > I have installed mvpa2 on my Debian computer, but cannot import > > mvpa2.suite for some reason. > > > >>> import mvpa2.suite as mvpa2 > > > Traceback (most recent call last): > > ? File "", line 1, in > > ? File "/usr/lib/pymodules/python2.7/mvpa2/suite.py", line 98, in > > > > ? ? ? from mvpa2.clfs.warehouse import * > > ? File "/usr/lib/pymodules/python2.7/mvpa2/clfs/warehouse.py", line > 357, > > in > > ? ? ? sklPLSRegression = _skl_import('pls', 'PLSRegression') > > ? File "/usr/lib/pymodules/python2.7/mvpa2/clfs/warehouse.py", line > 313, > > in _skl_import > > ? ? ? submod_ = __import__('sklearn.%s' % submod, fromlist=[submod]) > > ImportError: No module named pls > > incompatibility with sklearn -- some API changes I guess we didn't spot > > could you Anna (howdoyoudobtw????) provide output of > > python -c 'import mvpa2; print mvpa2.wtf()' > > ? > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Wed Sep 13 14:17:16 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Wed, 13 Sep 2017 10:17:16 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> Message-ID: <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> big summary -- system and installation(s) on it quite old 1. easiest resolution(s): remove or disable sklearn remove: depends on how was installed, could be dpkg --purge python-sklearn disable: export MVPA_EXTERNALS_HAVE_SKL=no if you don't use it for your current analysis 2. updates: > PyMVPA: > Version: 2.1.0 even on wheezy which you seems to use we have a backport of 2.6.0 release avail from neurodebian: http://neuro.debian.net/pkgs/python-mvpa2.html > OS: posix Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.89-2 > Distribution: debian/7.11 > skl : 0.16.1 unfortunately for wheezy we do not have newer backport. altogether you might benefit from upgrading from oldoldstable of Debian which you are using ATM > PYTHONPATH : > ":/usr/lib/python2.7/lib-old:/usr/local/lib/python2.7/dist-packages:/data:/usr/lib/python2.7/lib-dynload:/usr/lib/python2.7/plat-linux2:/usr/lib/python2.7/dist-packages/gtk-2.0:/usr/lib/python2.7/dist-packages:/usr/lib/pymodules/python2.7:/usr/lib/python2.7/dist-packages/IPython/extensions:/usr/lib/python2.7:/usr/lib/python2.7/dist-packages/wx-2.8-gtk2-unicode:/usr/lib/python2.7/lib-tk:/usr/lib/python2.7/dist-packages/PIL" there is some danger in the paths! e.g. you could shot yourself in the foot (e.g. having some module under /data -- Yaroslav O. 
Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From anna.manelis at gmail.com Wed Sep 13 16:49:29 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Wed, 13 Sep 2017 12:49:29 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> Message-ID: Thank you very much! I think it works. I ran in terminal: sudo dpkg --purge python-scikits-learn sudo dpkg --purge python-sklearn export MVPA_EXTERNALS_HAVE_SKL=no sudo apt-get install python-mvpa2 sudo apt-get autoremove python-sklearn-lib python Python 2.7.3 (default, Jun 21 2016, 18:38:19) [GCC 4.7.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from mvpa2.tutorial_suite import * /usr/lib/python2.7/dist-packages/nose/util.py:14: DeprecationWarning: The compiler package is deprecated and removed in Python 3.x. from compiler.consts import CO_GENERATOR Should I just disregard this DeprecationWarning? I am afraid to upgrade Debian from wheezy - who knows what's gonna happen Thanks again, Anna. On Wed, Sep 13, 2017 at 10:17 AM, Yaroslav Halchenko wrote: > big summary -- system and installation(s) on it quite old > > 1. easiest resolution(s): > > remove or disable sklearn > > remove: depends on how was installed, could be dpkg --purge > python-sklearn > > disable: export MVPA_EXTERNALS_HAVE_SKL=no > > if you don't use it for your current analysis > > 2. updates: > > > > PyMVPA: > > Version: 2.1.0 > > even on wheezy which you seems to use we have a backport of 2.6.0 > release avail from neurodebian: > > http://neuro.debian.net/pkgs/python-mvpa2.html > > > OS: posix Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.89-2 > > Distribution: debian/7.11 > > > skl : 0.16.1 > > unfortunately for wheezy we do not have newer backport. altogether you > might benefit from upgrading from oldoldstable of Debian which you are > using ATM > > > PYTHONPATH : > > ":/usr/lib/python2.7/lib-old:/usr/local/lib/python2.7/dist- > packages:/data:/usr/lib/python2.7/lib-dynload:/usr/ > lib/python2.7/plat-linux2:/usr/lib/python2.7/dist- > packages/gtk-2.0:/usr/lib/python2.7/dist-packages:/usr/ > lib/pymodules/python2.7:/usr/lib/python2.7/dist-packages/ > IPython/extensions:/usr/lib/python2.7:/usr/lib/python2.7/ > dist-packages/wx-2.8-gtk2-unicode:/usr/lib/python2.7/ > lib-tk:/usr/lib/python2.7/dist-packages/PIL" > > there is some danger in the paths! e.g. you could shot > yourself in the foot (e.g. having some module under /data > > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From debian at onerussian.com Wed Sep 13 18:01:37 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Wed, 13 Sep 2017 14:01:37 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> Message-ID: <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> On Wed, 13 Sep 2017, Anna Manelis wrote: > Thank you very much! I think it works. > I ran in terminal: > sudo dpkg --purge python-scikits-learn > sudo dpkg --purge python-sklearn > export MVPA_EXTERNALS_HAVE_SKL=no > sudo apt-get install python-mvpa2 > sudo apt-get autoremove python-sklearn-lib > python > Python 2.7.3 (default, Jun 21 2016, 18:38:19) > [GCC 4.7.2] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> from mvpa2.tutorial_suite import * > /usr/lib/python2.7/dist-packages/nose/util.py:14: DeprecationWarning: The > compiler package is deprecated and removed in Python 3.x. > ?? from compiler.consts import CO_GENERATOR > Should I just disregard this DeprecationWarning? yes > I am afraid to upgrade Debian from wheezy - who knows what's gonna happen ;) you could learn also to use singularity... we have it in neurodebian apt-get install singularity-container singularity pull shub://neurodebian/neurodebian ./neurodebian-neurodebian-master.img and you are in a fresh neurodebian ... let me know how it goes -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From anna.manelis at gmail.com Wed Sep 13 18:52:20 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Wed, 13 Sep 2017 14:52:20 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> Message-ID: I will try :) Thank you, Anna. On Wed, Sep 13, 2017 at 2:01 PM, Yaroslav Halchenko wrote: > > On Wed, 13 Sep 2017, Anna Manelis wrote: > > > Thank you very much! I think it works. > > > I ran in terminal: > > > sudo dpkg --purge python-scikits-learn > > sudo dpkg --purge python-sklearn > > export MVPA_EXTERNALS_HAVE_SKL=no > > sudo apt-get install python-mvpa2 > > sudo apt-get autoremove python-sklearn-lib > > > python > > Python 2.7.3 (default, Jun 21 2016, 18:38:19) > > [GCC 4.7.2] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> from mvpa2.tutorial_suite import * > > /usr/lib/python2.7/dist-packages/nose/util.py:14: > DeprecationWarning: The > > compiler package is deprecated and removed in Python 3.x. > > ? from compiler.consts import CO_GENERATOR > > > Should I just disregard this DeprecationWarning? > > yes > > > I am afraid to upgrade Debian from wheezy - who knows what's gonna > happen > > ;) > > you could learn also to use singularity... 
we have it in neurodebian > apt-get install singularity-container > singularity pull shub://neurodebian/neurodebian > ./neurodebian-neurodebian-master.img > > and you are in a fresh neurodebian ... let me know how it goes > > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pegahkf at gmail.com Fri Sep 15 10:54:15 2017 From: pegahkf at gmail.com (Pegah Kassraian Fard) Date: Fri, 15 Sep 2017 12:54:15 +0200 Subject: [pymvpa] searchlight analysis In-Reply-To: <20170912220744.turhasm4rtvmexii@hopa.kiewit.dartmouth.edu> References: <20170908125310.syeijb362t3ggnif@hopa.kiewit.dartmouth.edu> <20170908162659.4ewvlb73q2fkcjvp@hopa.kiewit.dartmouth.edu> <20170912161952.xarblkntid7icl5y@hopa.kiewit.dartmouth.edu> <20170912201957.ypyakwfoikvgwbo5@hopa.kiewit.dartmouth.edu> <20170912220744.turhasm4rtvmexii@hopa.kiewit.dartmouth.edu> Message-ID: So, now all the code seems to work except for the plotting, still. A very python beginner question: How can I store the coordinates + the corresponding accuracies? (e.g. in Matlab that could be done quite easily with the ''save'' command). Regarding plotting: I stored the outputed nifti, and of course I can open it in MRIcron etc., but I'd prefer to be able to plot the results in a way where the distribution of accuracies over space is visible (as in the plotting example). So, how comes the plotting still is so not-sensible? Here code & attached again how the plot looks like, sorry for all the questions!:-) # plotting plot_args = { 'background' : 'Struct.nii', 'background_mask' : 'brain.nii', 'overlay_mask' : 'S1_mask.nii', 'do_stretch_colors' : False, 'cmap_bg' : 'gray', 'cmap_overlay' : 'autumn', # YlOrRd_r # pl.cm.autumn 'interactive' : cfg.getboolean('examples', 'interactive', True) } fig = pl.figure(figsize=(12, 4), facecolor='white') subfig = plot_lightbox( overlay=niftiresults, vlim=(0.5, None), slices=range(23,31), fig=fig, background='Struct.nii', background_mask='brain.nii', overlay_mask='S1_mask.nii', **plot_args ) pl.title('Accuracy distribution for radius %i' % radius_) pl.show() pl.savefig('plotResults') On Wed, Sep 13, 2017 at 12:07 AM, Yaroslav Halchenko wrote: > > On Tue, 12 Sep 2017, Pegah Kassraian Fard wrote: > > > I would out-comment the line I would not need..so I do use the maximal > > accuracy. > > Attached is a plot.. > > try to plot just a few slices > > or ideally just open that produced .nii.gz in any application you > typically use for your visualization (FSLeyes, AFNI, whatnot) > > > Many thanks > > On Tue, Sep 12, 2017 at 10:19 PM, Yaroslav Halchenko < > yoh at onerussian.com> > > wrote: > > > On Tue, 12 Sep 2017, Pegah Kassraian Fard wrote: > > > >? ? yes yes, when I ran the code I switch between these...so that > > was/is a > > >? ? residuum of how I did it so far:) > > > so your extremum is the minimal accuracy... how useful is that ;)? > > >? ? ? > now. I am yet classifying whole brain, just as a check. > > >? ? ? > Also,?*? the plotting seems completely off...and the > output > > either > > > what is "off" here? screenshot? > > >? ? 
? > non-existent or really weird. Is there any chance anyone > > could quickly > > >? ? ? run > > >? ? ? > the code? Or would you have any other suggestions? > > > might try later > > > cheers, > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 2_5321331047995015525.png Type: image/png Size: 85683 bytes Desc: not available URL: From debian at onerussian.com Fri Sep 15 22:23:40 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Fri, 15 Sep 2017 18:23:40 -0400 Subject: [pymvpa] searchlight analysis In-Reply-To: References: <20170908162659.4ewvlb73q2fkcjvp@hopa.kiewit.dartmouth.edu> <20170912161952.xarblkntid7icl5y@hopa.kiewit.dartmouth.edu> <20170912201957.ypyakwfoikvgwbo5@hopa.kiewit.dartmouth.edu> <20170912220744.turhasm4rtvmexii@hopa.kiewit.dartmouth.edu> Message-ID: <20170915222340.whvmnkvcpd3hes4m@hopa.kiewit.dartmouth.edu> On Fri, 15 Sep 2017, Pegah Kassraian Fard wrote: > fig = pl.figure(figsize=(12, 4), facecolor='white') try not making it that narrow and it might look much more reasonable... what about figsize=(12,12) ? ;) -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lapate at gmail.com Sat Sep 16 04:09:16 2017 From: lapate at gmail.com (Regina Lapate) Date: Fri, 15 Sep 2017 21:09:16 -0700 Subject: [pymvpa] Pkg-ExpPsy-PyMVPA Digest, Vol 114, Issue 6 In-Reply-To: References: Message-ID: Hi Nick: Thanks very much for your helpful reply; the operations I wanted to do were of the first type (e.g. shuffling samples of a particular target and chunk type) and all is working now (using the indices). Cheers, Regina > ---------------------------------------------------------------------- > > Message: 1 > Date: Sat, 9 Sep 2017 14:03:55 +0200 > From: Nick Oosterhof > To: Development and support of PyMVPA > > Subject: Re: [pymvpa] additional data shuffling/cleaning after loading > up data using fmri_dataset > Message-ID: <5FFA9939-0D8B-4534-AC63-01EBCB47F06D at googlemail.com> > Content-Type: text/plain; charset=us-ascii > > > > On 9 Sep 2017, at 06:28, Regina Lapate wrote: > > > > --Can one do regular python operations such as shuffling trials (or > excluding trials with extreme outlier values) after loading up a dataset > (nifti & targets) using mvpa2.datasets.mri.fmri_dataset? > > > > I assumed so, but upon trying to shuffle trials of a given condition > using numpy: > > np.random.shuffle(ds[ds.targets==1]) > > Shuffling can mean at least two things in this context: > > 1) randomly re-order the order of the samples and the associated sample > attributes; this can be achieved by simple indexing. For example, if a > dataset ds has 4 samples, then ds[[3,2,1,0]] would reverse the order of > the samples and the associated sample attributes in .sa. Also, ds[[0,2]] > would select the first and third sample. 
> 2) randomly change condition labels (targets), for example to generate a > null distribution. AttributePermutator in mvpa2.generators.permutation may > be helpful for this. > > Which one applies to your question? > > For the second option: My personal preferred strategy would be to split > the dataset by unique chunks, then randomly re-assign targets for each > sub-dataset, and then stack these sub-datasets back into a big dataset. > This seems better than the 'simple' strategy - at least in an fMRI context > - because that can break independence assumptions. > However I did not find this option available (using strategy='chunks' gave > an error). Maybe I missed it - or if not, we may consider adding it. > > > > > ************************************************* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anna.manelis at gmail.com Sun Sep 17 20:10:59 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Sun, 17 Sep 2017 16:10:59 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> Message-ID: I tried to install singularity container sudo apt-get install singularity-container But it failed to install with the error: E: Unable to locate package singularity-container Is singularity-container actually available for good old wheezy? Thank you, Anna. On Wed, Sep 13, 2017 at 2:01 PM, Yaroslav Halchenko wrote: > > On Wed, 13 Sep 2017, Anna Manelis wrote: > > > Thank you very much! I think it works. > > > I ran in terminal: > > > sudo dpkg --purge python-scikits-learn > > sudo dpkg --purge python-sklearn > > export MVPA_EXTERNALS_HAVE_SKL=no > > sudo apt-get install python-mvpa2 > > sudo apt-get autoremove python-sklearn-lib > > > python > > Python 2.7.3 (default, Jun 21 2016, 18:38:19) > > [GCC 4.7.2] on linux2 > > Type "help", "copyright", "credits" or "license" for more information. > > >>> from mvpa2.tutorial_suite import * > > /usr/lib/python2.7/dist-packages/nose/util.py:14: > DeprecationWarning: The > > compiler package is deprecated and removed in Python 3.x. > > ? from compiler.consts import CO_GENERATOR > > > Should I just disregard this DeprecationWarning? > > yes > > > I am afraid to upgrade Debian from wheezy - who knows what's gonna > happen > > ;) > > you could learn also to use singularity... we have it in neurodebian > apt-get install singularity-container > singularity pull shub://neurodebian/neurodebian > ./neurodebian-neurodebian-master.img > > and you are in a fresh neurodebian ... let me know how it goes > > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yoh at onerussian.com Sun Sep 17 21:23:21 2017 From: yoh at onerussian.com (Yaroslav Halchenko) Date: Sun, 17 Sep 2017 17:23:21 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> Message-ID: <4B408045-DFF9-47D7-9D40-6130720CF357@onerussian.com> In NeuroDebian yes http://neuro.debian.net/pkgs/singularity-container.html?highlight=singularity On September 17, 2017 4:10:59 PM EDT, Anna Manelis wrote: >I tried to install singularity container > >sudo apt-get install singularity-container > >But it failed to install with the error: > >E: Unable to locate package singularity-container > >Is singularity-container actually available for good old wheezy? > >Thank you, >Anna. > > > >On Wed, Sep 13, 2017 at 2:01 PM, Yaroslav Halchenko > >wrote: > >> >> On Wed, 13 Sep 2017, Anna Manelis wrote: >> >> > Thank you very much! I think it works. >> >> > I ran in terminal: >> >> > sudo dpkg --purge python-scikits-learn >> > sudo dpkg --purge python-sklearn >> > export MVPA_EXTERNALS_HAVE_SKL=no >> > sudo apt-get install python-mvpa2 >> > sudo apt-get autoremove python-sklearn-lib >> >> > python >> > Python 2.7.3 (default, Jun 21 2016, 18:38:19) >> > [GCC 4.7.2] on linux2 >> > Type "help", "copyright", "credits" or "license" for more >information. >> > >>> from mvpa2.tutorial_suite import * >> > /usr/lib/python2.7/dist-packages/nose/util.py:14: >> DeprecationWarning: The >> > compiler package is deprecated and removed in Python 3.x. >> > ? from compiler.consts import CO_GENERATOR >> >> > Should I just disregard this DeprecationWarning? >> >> yes >> >> > I am afraid to upgrade Debian from wheezy - who knows what's >gonna >> happen >> >> ;) >> >> you could learn also to use singularity... we have it in neurodebian >> apt-get install singularity-container >> singularity pull shub://neurodebian/neurodebian >> ./neurodebian-neurodebian-master.img >> >> and you are in a fresh neurodebian ... let me know how it goes >> >> -- >> Yaroslav O. Halchenko >> Center for Open Neuroscience http://centerforopenneuroscience.org >> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >> WWW: http://www.linkedin.com/in/yarik >> >> _______________________________________________ >> Pkg-ExpPsy-PyMVPA mailing list >> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >> >http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa >> -- Sent from a phone which beats iPhone. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anna.manelis at gmail.com Sun Sep 17 21:34:06 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Sun, 17 Sep 2017 17:34:06 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: <4B408045-DFF9-47D7-9D40-6130720CF357@onerussian.com> References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> <4B408045-DFF9-47D7-9D40-6130720CF357@onerussian.com> Message-ID: My computer is already configured to use NeuroDebian repository, so I hoped to be able to install singularity-container. This is not the case E: Unable to locate package singularity-container I did not have problem installing other packages from NeuroDebian though. 
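(A quick way to tell whether an "Unable to locate package" error comes from the package simply being absent from the enabled APT sources is to query the local package index from Python. This is only a minimal sketch, assuming the Debian python-apt bindings are installed; it inspects nothing more than the index last fetched by apt-get update.)

# minimal sketch: does the locally known APT index contain these packages at all?
# assumes python-apt is installed; run `apt-get update` first so the index is current
import apt

cache = apt.Cache()
for pkg in ('singularity-container', 'python-mvpa2'):
    # membership test against the package index; False means no enabled source provides it
    print('%s available: %s' % (pkg, pkg in cache))

If singularity-container comes back False while python-mvpa2 is True, the NeuroDebian software repository for this Debian release is likely not (or only partially) enabled, or the index is stale.
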
On Sun, Sep 17, 2017 at 5:23 PM, Yaroslav Halchenko wrote: > In NeuroDebian yes > http://neuro.debian.net/pkgs/singularity-container.html? > highlight=singularity > > > On September 17, 2017 4:10:59 PM EDT, Anna Manelis > wrote: >> >> >> I tried to install singularity container >> >> sudo apt-get install singularity-container >> >> But it failed to install with the error: >> >> E: Unable to locate package singularity-container >> >> Is singularity-container actually available for good old wheezy? >> >> Thank you, >> Anna. >> >> >> >> On Wed, Sep 13, 2017 at 2:01 PM, Yaroslav Halchenko < >> debian at onerussian.com> wrote: >> >>> >>> On Wed, 13 Sep 2017, Anna Manelis wrote: >>> >>> > Thank you very much! I think it works. >>> >>> > I ran in terminal: >>> >>> > sudo dpkg --purge python-scikits-learn >>> > sudo dpkg --purge python-sklearn >>> > export MVPA_EXTERNALS_HAVE_SKL=no >>> > sudo apt-get install python-mvpa2 >>> > sudo apt-get autoremove python-sklearn-lib >>> >>> > python >>> > Python 2.7.3 (default, Jun 21 2016, 18:38:19) >>> > [GCC 4.7.2] on linux2 >>> > Type "help", "copyright", "credits" or "license" for more >>> information. >>> > >>> from mvpa2.tutorial_suite import * >>> > /usr/lib/python2.7/dist-packages/nose/util.py:14: >>> DeprecationWarning: The >>> > compiler package is deprecated and removed in Python 3.x. >>> > ? from compiler.consts import CO_GENERATOR >>> >>> > Should I just disregard this DeprecationWarning? >>> >>> yes >>> >>> > I am afraid to upgrade Debian from wheezy - who knows what's gonna >>> happen >>> >>> ;) >>> >>> you could learn also to use singularity... we have it in neurodebian >>> apt-get install singularity-container >>> singularity pull shub://neurodebian/neurodebian >>> ./neurodebian-neurodebian-master.img >>> >>> and you are in a fresh neurodebian ... let me know how it goes >>> >>> -- >>> Yaroslav O. Halchenko >>> Center for Open Neuroscience http://centerforopenneuroscience.org >>> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >>> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >>> WWW: http://www.linkedin.com/in/yarik >>> >>> _______________________________________________ >>> Pkg-ExpPsy-PyMVPA mailing list >>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg- >>> exppsy-pymvpa >>> >> >> > -- > Sent from a phone which beats iPhone. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yoh at onerussian.com Mon Sep 18 04:29:46 2017 From: yoh at onerussian.com (Yaroslav Halchenko) Date: Mon, 18 Sep 2017 00:29:46 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> <4B408045-DFF9-47D7-9D40-6130720CF357@onerussian.com> Message-ID: Just to make sure, did you do apt-get update Any time recently? On September 17, 2017 5:34:06 PM EDT, Anna Manelis wrote: >My computer is already configured to use NeuroDebian repository, so I >hoped >to be able to install singularity-container. This is not the case > >E: Unable to locate package singularity-container > >I did not have problem installing other packages from NeuroDebian >though. > >On Sun, Sep 17, 2017 at 5:23 PM, Yaroslav Halchenko > >wrote: > >> In NeuroDebian yes >> http://neuro.debian.net/pkgs/singularity-container.html? 
>> highlight=singularity >> >> >> On September 17, 2017 4:10:59 PM EDT, Anna Manelis > >> wrote: >>> >>> >>> I tried to install singularity container >>> >>> sudo apt-get install singularity-container >>> >>> But it failed to install with the error: >>> >>> E: Unable to locate package singularity-container >>> >>> Is singularity-container actually available for good old wheezy? >>> >>> Thank you, >>> Anna. >>> >>> >>> >>> On Wed, Sep 13, 2017 at 2:01 PM, Yaroslav Halchenko < >>> debian at onerussian.com> wrote: >>> >>>> >>>> On Wed, 13 Sep 2017, Anna Manelis wrote: >>>> >>>> > Thank you very much! I think it works. >>>> >>>> > I ran in terminal: >>>> >>>> > sudo dpkg --purge python-scikits-learn >>>> > sudo dpkg --purge python-sklearn >>>> > export MVPA_EXTERNALS_HAVE_SKL=no >>>> > sudo apt-get install python-mvpa2 >>>> > sudo apt-get autoremove python-sklearn-lib >>>> >>>> > python >>>> > Python 2.7.3 (default, Jun 21 2016, 18:38:19) >>>> > [GCC 4.7.2] on linux2 >>>> > Type "help", "copyright", "credits" or "license" for more >>>> information. >>>> > >>> from mvpa2.tutorial_suite import * >>>> > /usr/lib/python2.7/dist-packages/nose/util.py:14: >>>> DeprecationWarning: The >>>> > compiler package is deprecated and removed in Python 3.x. >>>> > ? from compiler.consts import CO_GENERATOR >>>> >>>> > Should I just disregard this DeprecationWarning? >>>> >>>> yes >>>> >>>> > I am afraid to upgrade Debian from wheezy - who knows what's >gonna >>>> happen >>>> >>>> ;) >>>> >>>> you could learn also to use singularity... we have it in >neurodebian >>>> apt-get install singularity-container >>>> singularity pull shub://neurodebian/neurodebian >>>> ./neurodebian-neurodebian-master.img >>>> >>>> and you are in a fresh neurodebian ... let me know how it goes >>>> >>>> -- >>>> Yaroslav O. Halchenko >>>> Center for Open Neuroscience >http://centerforopenneuroscience.org >>>> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH >03755 >>>> Phone: +1 (603) 646-9834 Fax: +1 (603) >646-1419 >>>> WWW: http://www.linkedin.com/in/yarik >>>> >>>> _______________________________________________ >>>> Pkg-ExpPsy-PyMVPA mailing list >>>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >>>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg- >>>> exppsy-pymvpa >>>> >>> >>> >> -- >> Sent from a phone which beats iPhone. >> -- Sent from a phone which beats iPhone. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anna.manelis at gmail.com Mon Sep 18 13:23:42 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Mon, 18 Sep 2017 09:23:42 -0400 Subject: [pymvpa] trouble importing mvpa2.suite In-Reply-To: References: <20170913132159.fdzpyvo4mdgwj36h@hopa.kiewit.dartmouth.edu> <20170913141716.nu7xaspty7y7rezq@hopa.kiewit.dartmouth.edu> <20170913180137.7m3vqiutdt2ef5rm@hopa.kiewit.dartmouth.edu> <4B408045-DFF9-47D7-9D40-6130720CF357@onerussian.com> Message-ID: Yes. I did apt-get update a week ago and today's morning. The result is still the same :( On Mon, Sep 18, 2017 at 12:29 AM, Yaroslav Halchenko wrote: > Just to make sure, did you do > apt-get update > Any time recently? > > > On September 17, 2017 5:34:06 PM EDT, Anna Manelis > wrote: >> >> My computer is already configured to use NeuroDebian repository, so I >> hoped to be able to install singularity-container. This is not the case >> >> E: Unable to locate package singularity-container >> >> I did not have problem installing other packages from NeuroDebian though. 
>> >> On Sun, Sep 17, 2017 at 5:23 PM, Yaroslav Halchenko >> wrote: >> >>> In NeuroDebian yes >>> http://neuro.debian.net/pkgs/singularity-container.html?high >>> light=singularity >>> >>> >>> On September 17, 2017 4:10:59 PM EDT, Anna Manelis < >>> anna.manelis at gmail.com> wrote: >>>> >>>> >>>> I tried to install singularity container >>>> >>>> sudo apt-get install singularity-container >>>> >>>> But it failed to install with the error: >>>> >>>> E: Unable to locate package singularity-container >>>> >>>> Is singularity-container actually available for good old wheezy? >>>> >>>> Thank you, >>>> Anna. >>>> >>>> >>>> >>>> On Wed, Sep 13, 2017 at 2:01 PM, Yaroslav Halchenko < >>>> debian at onerussian.com> wrote: >>>> >>>>> >>>>> On Wed, 13 Sep 2017, Anna Manelis wrote: >>>>> >>>>> > Thank you very much! I think it works. >>>>> >>>>> > I ran in terminal: >>>>> >>>>> > sudo dpkg --purge python-scikits-learn >>>>> > sudo dpkg --purge python-sklearn >>>>> > export MVPA_EXTERNALS_HAVE_SKL=no >>>>> > sudo apt-get install python-mvpa2 >>>>> > sudo apt-get autoremove python-sklearn-lib >>>>> >>>>> > python >>>>> > Python 2.7.3 (default, Jun 21 2016, 18:38:19) >>>>> > [GCC 4.7.2] on linux2 >>>>> > Type "help", "copyright", "credits" or "license" for more >>>>> information. >>>>> > >>> from mvpa2.tutorial_suite import * >>>>> > /usr/lib/python2.7/dist-packages/nose/util.py:14: >>>>> DeprecationWarning: The >>>>> > compiler package is deprecated and removed in Python 3.x. >>>>> > ? from compiler.consts import CO_GENERATOR >>>>> >>>>> > Should I just disregard this DeprecationWarning? >>>>> >>>>> yes >>>>> >>>>> > I am afraid to upgrade Debian from wheezy - who knows what's >>>>> gonna happen >>>>> >>>>> ;) >>>>> >>>>> you could learn also to use singularity... we have it in neurodebian >>>>> apt-get install singularity-container >>>>> singularity pull shub://neurodebian/neurodebian >>>>> ./neurodebian-neurodebian-master.img >>>>> >>>>> and you are in a fresh neurodebian ... let me know how it goes >>>>> >>>>> -- >>>>> Yaroslav O. Halchenko >>>>> Center for Open Neuroscience http://centerforopenneuroscience.org >>>>> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 >>>>> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 >>>>> WWW: http://www.linkedin.com/in/yarik >>>>> >>>>> _______________________________________________ >>>>> Pkg-ExpPsy-PyMVPA mailing list >>>>> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org >>>>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg- >>>>> exppsy-pymvpa >>>>> >>>> >>>> >>> -- >>> Sent from a phone which beats iPhone. >>> >> >> > -- > Sent from a phone which beats iPhone. > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbannert at tuebingen.mpg.de Thu Sep 21 08:03:10 2017 From: mbannert at tuebingen.mpg.de (Michael Bannert) Date: Thu, 21 Sep 2017 10:03:10 +0200 Subject: [pymvpa] Mean decoding (for searchlight analyses) Message-ID: Dear PyMVPA experts, I would like to compare my MVPC accuracies obtained using all features in my dataset with classification accuracies using only the mean of each vector sample. 
The idea is to compare how much information about the class label is represented in the mean overall activation level within a brain region and how much (more) information is represented in the fine-grained patterns. For ROI analyses, I could just use a new dataset containing only the mean values per sample and then classify the scalars. For searchlight analyses, however, I cannot think of an easy way to accomplish this. Can you think of a good way to do this? Thanks & best, Michael From anna.manelis at gmail.com Tue Sep 26 17:02:00 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Tue, 26 Sep 2017 13:02:00 -0400 Subject: [pymvpa] between-subject classification Message-ID: Dear PYMVPA experts, I plan to use PYMVPA to classify 2 groups of subjects (patients and controls). The inputs are betas that come from the FSL- based analysis (i.e., cope.nii.gz). I use the SVM classifier in a specific mask on 100 selected features. Could you please confirm that a short script below is correct: # copes are taken from /subject/feet/reg_standard/stats/ folder for each subject. For this example, they were copied to sub001, sub002, etc folders. There is a total of N=10 subjects (in this example). # I assume that each subject in a group is a chunk. Is that correct? ################################## from glob import glob import os import numpy as np from mvpa2.suite import* labels=['hc','hc','hc','hc','hc','pt','pt','pt','pt','pt'] chunks=[1,2,3,4,5,1,2,3,4,5] # Subject numbering within each group copes=sorted(glob('/path/to/data/sub0*/cope14.nii.gz')) # Collect file names mask_fname="/path/to/data/my_mask.nii.gz" db=fmri_dataset(copes, targets=labels, chunks=chunks, sprefix=None, tprefix=None, mask=mask_fname, add_fa=None) ds_mni = vstack(db) # Setup classifier - is in the Hyperalignment example clf = LinearCSVMC() # feature selection helpers nf = 100 fselector = FixedNElementTailSelector(nf, tail='upper', mode='select', sort=False) sbfs = SensitivityBasedFeatureSelection(OneWayAnova(), fselector, enable_ca=['sensitivities']) # create classifier with automatic feature selection fsclf = FeatureSelectionClassifier(clf, sbfs) cv = CrossValidation(fsclf, NFoldPartitioner(), enable_ca=['stats']) results = cv(ds_mni) print results ################################## Thank you, Anna. -------------- next part -------------- An HTML attachment was scrubbed... URL: From debian at onerussian.com Tue Sep 26 17:27:11 2017 From: debian at onerussian.com (Yaroslav Halchenko) Date: Tue, 26 Sep 2017 13:27:11 -0400 Subject: [pymvpa] between-subject classification In-Reply-To: References: Message-ID: <20170926172711.ojgmd4vmcmv7mxvh@hopa.kiewit.dartmouth.edu> On Tue, 26 Sep 2017, Anna Manelis wrote: > Dear PYMVPA experts, > I plan to use PYMVPA to classify 2 groups of subjects (patients and > controls). The inputs are betas that come from the FSL- based analysis > (i.e., cope.nii.gz). I use the SVM classifier in a specific mask on 100 > selected features. > Could you please confirm that a short script below is correct: > # copes are taken from /subject/feet/reg_standard/stats/ folder for each > subject. For this example, they were copied to sub001, sub002, etc > folders. There is a total of N=10 subjects (in this example). > # I assume that each subject in a group is a chunk. Is that correct? 
> from glob import glob > import os > import numpy as np > from mvpa2.suite import* > labels=['hc','hc','hc','hc','hc','pt','pt','pt','pt','pt'] > chunks=[1,2,3,4,5,1,2,3,4,5] # Subject numbering within each group > copes=sorted(glob('/path/to/data/sub0*/cope14.nii.gz')) # Collect file looks correctish assuming that your subjects ordered that there is first 5 hc's and then 5 pt's -- Yaroslav O. Halchenko Center for Open Neuroscience http://centerforopenneuroscience.org Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 WWW: http://www.linkedin.com/in/yarik From anna.manelis at gmail.com Tue Sep 26 17:30:36 2017 From: anna.manelis at gmail.com (Anna Manelis) Date: Tue, 26 Sep 2017 13:30:36 -0400 Subject: [pymvpa] between-subject classification In-Reply-To: <20170926172711.ojgmd4vmcmv7mxvh@hopa.kiewit.dartmouth.edu> References: <20170926172711.ojgmd4vmcmv7mxvh@hopa.kiewit.dartmouth.edu> Message-ID: They are (that's why I used sorted()) Thanks a lot! Anna On Tue, Sep 26, 2017 at 1:27 PM, Yaroslav Halchenko wrote: > > On Tue, 26 Sep 2017, Anna Manelis wrote: > > > Dear PYMVPA experts, > > > I plan to use PYMVPA to classify 2 groups of subjects (patients and > > controls). The inputs are betas that come from the FSL- based analysis > > (i.e., cope.nii.gz). I use the SVM classifier in a specific mask on 100 > > selected features. > > > Could you please confirm that a short script below is correct: > > > # copes are taken from /subject/feet/reg_standard/stats/ folder for > each > > subject. For this example, they were copied to sub001, sub002, etc > > folders. There is a total of N=10 subjects (in this example). > > # I assume that each subject in a group is a chunk. Is that correct? > > > > > from glob import glob > > import os > > import numpy as np > > from mvpa2.suite import* > > > labels=['hc','hc','hc','hc','hc','pt','pt','pt','pt','pt'] > > chunks=[1,2,3,4,5,1,2,3,4,5] # Subject numbering within each group > > > copes=sorted(glob('/path/to/data/sub0*/cope14.nii.gz')) # Collect file > > looks correctish assuming that your subjects ordered that there is first > 5 hc's and then 5 pt's > > -- > Yaroslav O. Halchenko > Center for Open Neuroscience http://centerforopenneuroscience.org > Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755 > Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419 > WWW: http://www.linkedin.com/in/yarik > > _______________________________________________ > Pkg-ExpPsy-PyMVPA mailing list > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa > -------------- next part -------------- An HTML attachment was scrubbed... URL: