
[P] cleanlab: accelerating ML and deep learning research with noisy labels


Hey folks. Today I’ve officially released the cleanlab Python package, after working out the kinks for three years or so. It’s the first standard framework for accelerating ML and deep learning research and software for datasets with label errors. cleanlab has some neat features:

  1. If you already have model outputs (predicted probabilities for your dataset), you can find label errors in one line of code. If you don't have model outputs, it's two lines of code.
  2. If you're a researcher dealing with datasets with label errors, cleanlab will compute the uncertainty estimation statistics for you (the noisy channel, the latent prior of the true labels, the joint distribution of noisy and true labels, etc.).
  3. Training a model (learning with noisy labels) is three lines of code.
  4. cleanlab is full of examples — how to find label errors in ImageNet and MNIST, learning with noisy labels, etc.

Full cleanlab announcement and documentation here: [LINK]

GitHub: https://github.com/cgnorthcutt/cleanlab/
PyPI: https://pypi.org/project/cleanlab/

As an example, here is how you can find label errors in a dataset with PyTorch, TensorFlow, scikit-learn, MXNet, fastText, or any other framework in one line of code.

# Compute psx (n x m matrix of predicted probabilities)
# in your favorite framework on your own first, with any classifier.
# Be sure to compute psx in an out-of-sample way (e.g. cross-validation).
# Label errors are ordered by likelihood of being an error.
# The first index in the output list is the most likely error.

from cleanlab.pruning import get_noise_indices

ordered_label_errors = get_noise_indices(
    s=numpy_array_of_noisy_labels,
    psx=numpy_array_of_predicted_probabilities,
    sorted_index_method='normalized_margin',  # Orders label errors
)

[Image: cleanlab logo and my cheesy attempt at a slogan.]

P.S. If you happen to work at Google, cleanlab is incorporated into the internal codebase (as of July 2019).

P.P.S. I don't work there, so you're on your own if Google's version strays from the open-source version.

submitted by /u/cgnorthcutt