
[D] Which performance metric should be used for cases of *minor* class imbalance?

There has been a lot of discussion regarding the choice of an appropriate performance metric for model training and evaluation in classification problems with moderate to severe class imbalance (e.g. 1, 2, 3, 4, 5).

In these cases, it is generally suggested that one use a metric like Cohen's kappa or PR AUC in place of accuracy or AUROC, perhaps along with up-sampling or SMOTE.
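For concreteness, here is a minimal sketch (assuming scikit-learn and a synthetic dataset with a roughly 3:1 class ratio, both chosen purely for illustration) that computes the metrics mentioned above side by side, so the difference between threshold-based metrics (accuracy, kappa) and score-based ones (AUROC, PR AUC) is easy to see:

```python
# Sketch: compare accuracy, Cohen's kappa, AUROC, and PR AUC on a mildly
# imbalanced synthetic dataset (~3:1 majority:minority, chosen arbitrarily).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             roc_auc_score, average_precision_score)

X, y = make_classification(n_samples=4000, n_features=20,
                           weights=[0.75, 0.25], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]   # scores for threshold-free metrics
pred = clf.predict(X_te)                # hard labels for accuracy / kappa

print("accuracy :", accuracy_score(y_te, pred))
print("kappa    :", cohen_kappa_score(y_te, pred))
print("AUROC    :", roc_auc_score(y_te, proba))
print("PR AUC   :", average_precision_score(y_te, proba))
```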

What are people’s thoughts on cases where there is only a minor class imbalance present in the data? For example, something like a 3:1, 2:1, or even 1.5:1 ratio of majority to minority class members? Is it still beneficial (and what would be the cost?) to use a metric geared toward addressing larger imbalances in these cases?

Also, a somewhat tangential question: are there ever scenarios where you might want to use one metric for the outer CV loop / model performance evaluation, but a different metric for the inner CV (e.g. feature selection / hyperparameter optimization)?
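Mechanically, using different metrics for the two loops is straightforward; here is a rough sketch (again assuming scikit-learn, with PR AUC driving the inner hyperparameter search and balanced accuracy reported by the outer loop, both choices being illustrative assumptions, not recommendations):

```python
# Sketch: nested CV where the inner loop tunes on one metric and the
# outer loop reports a different one.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold

X, y = make_classification(n_samples=2000, weights=[0.7, 0.3], random_state=0)

inner = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    scoring="average_precision",                       # inner loop: PR AUC drives tuning
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
)

outer_scores = cross_val_score(                        # outer loop: different metric
    inner, X, y,
    scoring="balanced_accuracy",
    cv=StratifiedKFold(5, shuffle=True, random_state=1),
)
print("outer balanced accuracy per fold:", outer_scores)
```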

submitted by /u/user381