[P] PyCM 2.0 released: A general benchmark-based comparison of classification models

PyCM version 2.0 released

http://www.pycm.ir

https://github.com/sepandhaghighi/pycm

In version 2.0, a method for comparing several confusion matrices is introduced. This option combines several overall and class-based benchmarks. Each benchmark rates the performance of the classification algorithm from poor to good and assigns it a numeric score: 1 for good performance and 0 for poor performance.

After that, two scores are calculated for each confusion matrix: an overall score and a class-based score. The overall score is the average of the scores of four overall benchmarks: Landis & Koch, Fleiss, Altman, and Cicchetti. In the same manner, the class-based score is the average of the scores of three class-based benchmarks: Positive Likelihood Ratio Interpretation, Discriminant Power Interpretation, and AUC Value Interpretation. Note that if a benchmark returns None for one of the classes, that benchmark is eliminated from the averaging. If the user sets weights for the classes, the average over the class-based benchmark scores becomes a weighted average.
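
To make the averaging concrete, here is a minimal sketch of the scheme described above (the function names and data layout are illustrative, not PyCM's internal API):

    # Sketch of the scoring scheme; names and layout are illustrative.
    def overall_score(overall_benchmarks):
        # overall_benchmarks: {"Landis & Koch": 0.8, "Fleiss": 1.0, ...},
        # each value a score in [0, 1] (1 = good, 0 = poor).
        values = list(overall_benchmarks.values())
        return sum(values) / len(values)

    def class_based_score(class_benchmarks, weights=None):
        # class_benchmarks: {"PLR": {"c0": 1.0, "c1": 0.5}, ...} maps each
        # benchmark to per-class scores; a benchmark that returns None for
        # any class is dropped from the averaging entirely.
        kept = []
        for per_class in class_benchmarks.values():
            if any(s is None for s in per_class.values()):
                continue
            if weights is None:
                kept.append(sum(per_class.values()) / len(per_class))
            else:
                total = sum(weights[c] * s for c, s in per_class.items())
                kept.append(total / sum(weights.values()))
        return sum(kept) / len(kept) if kept else None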

If the user sets the by_class boolean input to True, the best confusion matrix is the one with the maximum class-based score. Otherwise, if one confusion matrix attains the maximum of both the overall and the class-based score, it is reported as the best confusion matrix; in any other case, the compare object does not select a best confusion matrix.
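
The selection rule maps to logic along these lines (again just a sketch; the variable names are made up for illustration):

    def best_matrix(scores, by_class=False):
        # scores: {name: (overall_score, class_based_score)}
        if by_class:
            # With by_class=True, only the class-based score decides.
            return max(scores, key=lambda n: scores[n][1])
        best_overall = max(scores, key=lambda n: scores[n][0])
        best_class = max(scores, key=lambda n: scores[n][1])
        # A winner is declared only when one matrix tops both rankings.
        return best_overall if best_overall == best_class else None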

https://i.redd.it/7eacdtt2jhs21.png
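
In practice, the comparison is driven through the new Compare class. A usage sketch based on the description above (the attribute names scores and best_name follow the project documentation, but check the docs for the exact v2.0 API):

    from pycm import ConfusionMatrix, Compare

    # Two toy models evaluated against the same ground truth.
    cm1 = ConfusionMatrix(actual_vector=[0, 0, 1, 1, 1, 2, 2],
                          predict_vector=[0, 0, 1, 1, 0, 2, 1])
    cm2 = ConfusionMatrix(actual_vector=[0, 0, 1, 1, 1, 2, 2],
                          predict_vector=[0, 1, 1, 1, 1, 2, 2])

    cp = Compare({"model1": cm1, "model2": cm2})
    print(cp.scores)     # overall and class-based score per matrix
    print(cp.best_name)  # best matrix name, or None if no clear winner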

Changelog:

  • G-Mean (GM) added #178
  • Index of balanced accuracy (IBA) added #176
  • Optimized precision (OP) added #152
  • Pearson’s C (C) added #180
  • Compare class added #111
  • Parameter recommendation warning added #174
  • ConfusionMatrix equal method added #181
  • Document modified #173
  • stat_print function bug fixed
  • table_print function bug fixed
  • Beta parameter renamed to beta (F_calc function & F_beta method)
  • Parameter recommendations for imbalanced datasets modified
  • normalize parameter added to save_html method #183
  • pycm_func.py split into pycm_class_func.py and pycm_overall_func.py
  • vector_filter, vector_check, class_check and matrix_check functions moved to pycm_util.py
  • RACC_calc and RACCU_calc functions exception handler modified #187
  • Docstrings modified
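
For the newly added metrics, a quick way to inspect them on a single matrix (the stat keys shown here are assumptions based on the changelog abbreviations; see the documentation for the exact names):

    from pycm import ConfusionMatrix

    cm = ConfusionMatrix(actual_vector=[0, 0, 1, 1, 1, 2, 2],
                         predict_vector=[0, 0, 1, 1, 0, 2, 1])

    # Per-class statistics are keyed by stat name, then by class label.
    print(cm.class_stat["GM"])   # G-Mean: sqrt(TPR * TNR) per class
    print(cm.class_stat["IBA"])  # Index of balanced accuracy per class
    print(cm.class_stat["OP"])   # Optimized precision per class
    # Pearson's C is an overall statistic (key name assumed).
    print(cm.overall_stat["Pearson C"])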

submitted by /u/sepandhaghighi