[pymvpa] classifier prediction question
Jason Ozubko
jozubko at research.baycrest.org
Fri Jan 17 03:26:43 UTC 2014
Terrific. I just took a quick look but I think this is just what I need.
Thanks!
Cheers,
Jason
On Thu, Jan 16, 2014 at 10:21 PM, Yaroslav Halchenko
<debian at onerussian.com> wrote:
> look into a classifier ca.estimates. Some classifiers (e.g. SMLR, GNB)
> would base their decision on e.g. a posterior probability which would
> then be stored in the clf.ca.estimates for a classifier clf upon making
> a prediction. E.g.
>
> In [18]: clf = mv.SMLR(enable_ca=['predictions'])
>
> In [19]: clf.train(mvtd.datasets['uni3small'])
>
> In [20]: clf.predict(mvtd.datasets['uni3small'])
> Out[20]:
> array(['L0', 'L0', 'L0', 'L0', 'L0', 'L0', 'L0', 'L0', 'L0', 'L0', 'L0',
> 'L0', 'L1', 'L1', 'L1', 'L1', 'L1', 'L1', 'L1', 'L1', 'L1', 'L1',
> 'L1', 'L1', 'L2', 'L2', 'L2', 'L2', 'L2', 'L2', 'L2', 'L2', 'L2',
> 'L2', 'L2', 'L2'],
> dtype='|S2')
>
> In [21]: print clf.ca.estimates
> [[ 9.98840082e-01 7.72142962e-04 3.87774658e-04]
> [ 9.97071204e-01 2.78187822e-03 1.46917290e-04]
> [ 9.89887463e-01 4.86107005e-03 5.25146706e-03]
> [ 9.96544159e-01 1.23337390e-03 2.22246665e-03]
> [ 9.76508361e-01 2.31793063e-03 2.11737084e-02]
> [ 8.52440274e-01 4.06182039e-02 1.06941522e-01]
> [ 9.99943827e-01 1.12451619e-05 4.49279579e-05]
>
>
>
> On Thu, 16 Jan 2014, Jason Ozubko wrote:
>
> > Perhaps a very newbie question, but when you call clf.predict is it
> > possible to have the function return more than just a single prediction?
> > As in, if I have 4 target labels, is it possible to get, for each test
> > sample, the probability (or some other metric) with which the classifier
> > thinks that each of those 4 target labels applies?
> > So for example, if you had target types of "animal", "vegetable",
> > "mineral", and "person" and you trained up a classifier, then with
> > clf.predict I could submit a handful of test samples and get results like:
> > ["vegetable"
> > "vegetable"
> > "animal"
> > "person"
> > "mineral"
> > "mineral"]
> > But is there any way to instead get a readout that says something like:
> > for the first sample the classifier would have picked vegetable first,
> > then animal, then person, and lastly mineral. For the second sample,
> > however, the classifier would have picked vegetable, then person, then
> > animal, then mineral? So I could see not only which option the model
> > predicts but also how close each test sample was to the other options
> > as well?
> > Thanks in advance
> > Cheers,
> > Jason
>
> > _______________________________________________
> > Pkg-ExpPsy-PyMVPA mailing list
> > Pkg-ExpPsy-PyMVPA at lists.alioth.debian.org
> > http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/pkg-exppsy-pymvpa
>
>
> --
> Yaroslav O. Halchenko, Ph.D.
> http://neuro.debian.net http://www.pymvpa.org http://www.fail2ban.org
> Senior Research Associate, Psychological and Brain Sciences Dept.
> Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
> Phone: +1 (603) 646-9834 Fax: +1 (603) 646-1419
> WWW: http://www.linkedin.com/in/yarik
>
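The estimates matrix shown above (one row per test sample, one column per class) already contains everything needed for the ranked readout Jason asks about: sorting each row by descending estimate ranks the labels from most to least likely. A minimal sketch with NumPy, using made-up probabilities and the hypothetical "animal"/"vegetable"/"mineral"/"person" labels from the question:

```python
import numpy as np

# Made-up per-sample class estimates (e.g. posterior probabilities, as in
# clf.ca.estimates above), shaped (n_samples, n_classes).
# Columns correspond to the labels array below.
estimates = np.array([
    [0.30, 0.50, 0.05, 0.15],   # sample 1
    [0.20, 0.45, 0.05, 0.30],   # sample 2
])
labels = np.array(['animal', 'vegetable', 'mineral', 'person'])

# argsort sorts ascending; reversing each row puts the most likely label first.
order = np.argsort(estimates, axis=1)[:, ::-1]
ranked = labels[order]

print(ranked[0])  # sample 1: vegetable, animal, person, mineral
print(ranked[1])  # sample 2: vegetable, person, animal, mineral
```

The hard prediction is just `ranked[:, 0]`, while the full rows give the "how close were the other options" picture from the question.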