[pymvpa] Penalized logistic regression (PLR)

Richard Dinga dinga92 at gmail.com
Wed Mar 23 10:18:24 UTC 2016


You can select the best parameter by cross-validation. There are example
scripts for it somewhere in the PyMVPA sources. You would probably want to
use nested cross-validation to avoid double dipping.
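To illustrate the nested-CV idea itself (this is a minimal numpy-only sketch, not PyMVPA's API; the ridge-logistic fitter, the lambda grid, and the fold counts are all hypothetical choices for the example): the inner loop picks lambda using only the outer training data, and the outer fold is touched exactly once for the final score.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: 2 informative features out of 5
n, p = 120, 5
X = rng.standard_normal((n, p))
w_true = np.array([1.5, -1.5, 0.0, 0.0, 0.0])
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)

def fit_ridge_logistic(X, y, lam, lr=0.1, n_iter=1000):
    """Plain gradient descent on l2-penalized logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p_hat = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p_hat - y) / len(y) + lam * w)
    return w

def accuracy(w, X, y):
    return np.mean((X @ w > 0) == y)

lambdas = [0.01, 0.1, 1.0, 10.0]
outer_folds = np.array_split(rng.permutation(n), 5)

outer_scores = []
for i, test_idx in enumerate(outer_folds):
    train_idx = np.hstack([f for j, f in enumerate(outer_folds) if j != i])
    # inner CV on the outer-training data ONLY: select lambda
    inner_folds = np.array_split(train_idx, 4)
    inner_scores = []
    for lam in lambdas:
        accs = []
        for k, val_idx in enumerate(inner_folds):
            fit_idx = np.hstack([f for m, f in enumerate(inner_folds) if m != k])
            w = fit_ridge_logistic(X[fit_idx], y[fit_idx], lam)
            accs.append(accuracy(w, X[val_idx], y[val_idx]))
        inner_scores.append(np.mean(accs))
    best_lam = lambdas[int(np.argmax(inner_scores))]
    # refit with the chosen lambda, evaluate once on the held-out outer fold
    w = fit_ridge_logistic(X[train_idx], y[train_idx], best_lam)
    outer_scores.append(accuracy(w, X[test_idx], y[test_idx]))

print("nested-CV accuracy: %.2f" % np.mean(outer_scores))
```

Because lambda is never tuned on the outer test fold, the outer score is an unbiased estimate of generalization; tuning and testing on the same fold would be the double dipping mentioned above.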

PLR is using l2 penalization (I think), which does not produce sparse models
but rather models without extreme weights. If you want sparse models, you
should use something with l1 regularization: lasso, elastic net, or SMLR.
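The difference is easy to see numerically. Below is a minimal sketch (not PLR or any PyMVPA code; the data, step size, and penalty strengths are made up for illustration) fitting logistic regression two ways: plain gradient descent with an l2 penalty, and proximal gradient (soft-thresholding) with an l1 penalty. The l2 fit shrinks weights toward zero but leaves them all nonzero; the l1 fit drives uninformative weights to exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 2 informative features out of 10
n, p = 200, 10
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:2] = [2.0, -2.0]
y = (X @ w_true + 0.1 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, penalty, lam, lr=0.1, n_iter=2000):
    """l2: gradient descent; l1: proximal gradient (ISTA)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        if penalty == "l2":
            # ridge-style update: shrinks weights, never exactly zero
            w -= lr * (grad + lam * w)
        else:
            # lasso-style update: soft-thresholding zeroes small weights
            w -= lr * grad
            w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

w_l2 = fit_logistic(X, y, "l2", lam=1.0)
w_l1 = fit_logistic(X, y, "l1", lam=0.2)

print("exact zeros with l2:", int(np.sum(w_l2 == 0)))  # typically none
print("exact zeros with l1:", int(np.sum(w_l1 == 0)))  # the noise features
```

This is exactly what Vincent observed with PLR: with an l2 penalty, no weight reaches 0 no matter how large lambda gets; it only makes all weights uniformly small.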
On Mar 23, 2016 8:52 AM, "Vincent Taschereau-Dumouchel" <vincenttd at ucla.edu>
wrote:

> Dear All,
>
> I am trying to run a simple logistic regression in pyMVPA and I am having
> some difficulties finding the way to go. I think it might be possible to
> achieve using the penalized logistic regression (PLR) classifier but I ran
> a few tests with different parameters and it is not clear to me which
> should be used. More specifically, if I understand correctly, the penalty
> term lambda should affect the number of features that are selected for the
> classification, but using the get_sensitivity_analyzer function it seems
> that no weights end up with values of 0 either with very small or very high
> lambda values. Can anyone help with this issue?
> Thank you very much for your help!
>
> Vincent
> _______________________________________________
> Pkg-ExpPsy-PyMVPA mailing list
> Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
>
