[pymvpa] libsvm dense arrays

Scott Gorlin gorlins at MIT.EDU
Tue Sep 30 23:25:06 UTC 2008


> if you are using the released version of pymvpa, then you can do a dirty hack.
>
> prior to any other mvpa imports:
>
> import mvpa.base.externals
> mvpa.base.externals._VERIFIED['libsvm'] = False
>
> this tricks pymvpa into reporting that libsvm is not available.
>
> Please let us know how it works for you.
>
>   
This works quite well: everything switches over to Shogun and runs
nicely.
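
For reference, the ordering is roughly the following (the final import
is only an example and assumes the mvpa.suite convenience module, so
substitute whatever imports you actually need):

# flip the switch before anything else from mvpa gets imported
import mvpa.base.externals
mvpa.base.externals._VERIFIED['libsvm'] = False   # pretend libsvm is absent

# only now pull in the rest, so the SVM machinery binds to the Shogun back-end
from mvpa.suite import *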

But it doesn't seem to matter. At least in some preliminary tests
(I'm looking at model selection, so the code crosses the
Python <-> libsvm/Shogun boundary quite a bit), using Shogun yields
exactly the same run times.  More worrisome, I get worse
cross-validation accuracies with Shogun for gamma < 0.5 in an RBF
classifier, though the results seem comparable otherwise.
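
For concreteness, the kind of loop I'm timing looks roughly like this;
the constructor spellings (SVM(kernel_type='RBF', ...),
normalFeatureDataset, NFoldSplitter, ...) are approximate, so treat it
as a sketch rather than copy-paste code:

import time
from mvpa.suite import *   # again assuming the convenience module

def time_cv(clf, ds, label):
    # wall-clock one N-fold cross-validation and report the mean accuracy
    cv = CrossValidatedTransferError(TransferError(clf), NFoldSplitter(cvtype=1))
    t0 = time.time()
    err = cv(ds)   # mean transfer error across the folds
    print('%s: %.1f s, CV accuracy %.3f' % (label, time.time() - t0, 1.0 - err))

# toy data standing in for the real dataset
ds = normalFeatureDataset(perlabel=50, nlabels=2, nfeatures=1000, nchunks=5)
# same nominal parameters with and without the _VERIFIED hack in place
time_cv(SVM(kernel_type='RBF', gamma=0.1), ds, 'RBF, gamma=0.1')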

I just switched from the Ubuntu Shogun package (0.4) to the current
release (0.6), so now I'm more confused than ever :)

If the run times are the same, does that indicate that Shogun does
*not* use a dense data representation?
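
One crude way to check would be to watch the process footprint while
training, since a dense copy of the samples has a predictable size
(nothing below is PyMVPA-specific; ru_maxrss is reported in kB on
Linux):

import resource
import numpy as N

def peak_rss_kb():
    # peak resident set size so far (kB on Linux); coarse, but a second
    # dense copy of a large samples array should show up clearly
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

samples = N.random.rand(200, 20000)   # stand-in for the real samples array
print('a dense float64 copy would be ~%d kB' % (samples.nbytes / 1024))

before = peak_rss_kb()
# ... train the Shogun-backed classifier on this data here ...
after = peak_rss_kb()
print('peak RSS grew by ~%d kB across training' % (after - before))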

If model selection yields different optimal parameters because the
cross-validation accuracies (sometimes) differ, doesn't that suggest
that libsvm and Shogun are built on fairly different code bases?

By no means have I tested this on a robust dataset, but this simply
doesn't feel right :)





