Hey folks. Today I’ve officially released the cleanlab Python package, after working out the kinks for three years or so. It’s the first standard framework for accelerating ML and deep learning research and software for datasets with label errors.
cleanlab has some neat features:
- If you have model outputs already (predicted probabilities for your dataset), you can find label errors in one line of code. If you don’t have model outputs, it’s two lines of code.
- If you’re a researcher dealing with datasets with label errors, cleanlab will compute the uncertainty estimation statistics for you (the noisy channel, the latent prior of the true labels, the joint distribution of noisy and true labels, etc.)
- Training a model (learning with noisy labels) takes 3 lines of code.
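To give a feel for one of the statistics mentioned above, here is a minimal numpy sketch of estimating the joint distribution of noisy and true labels by counting. This is an illustrative simplification of the confident-learning idea, not cleanlab’s internal implementation; the function name and thresholding rule here are my own.

```python
import numpy as np

def estimate_joint(s, psx):
    """Sketch: estimate the joint distribution Q[noisy label, true label].

    s:   (n,) array of noisy labels
    psx: (n, m) array of out-of-sample predicted probabilities
    """
    n, m = psx.shape
    # Per-class threshold: average self-confidence among examples labeled j.
    thresholds = np.array([psx[s == j, j].mean() for j in range(m)])
    counts = np.zeros((m, m))
    for i in range(n):
        # Classes for which this example's probability clears the threshold.
        above = np.where(psx[i] >= thresholds)[0]
        if len(above):
            # Guess the true label as the most confident class above threshold.
            y_guess = above[np.argmax(psx[i, above])]
            counts[s[i], y_guess] += 1
    return counts / counts.sum()  # normalize counts into a joint distribution
```

Off-diagonal mass in the result corresponds to examples whose given label disagrees with the model’s confident prediction, i.e. likely label errors.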
cleanlab is full of examples: finding label errors in ImageNet and MNIST, learning with noisy labels, and more.
cleanlab announcement and documentation here: [LINK]
GitHub: https://github.com/cgnorthcutt/cleanlab/
PyPI: https://pypi.org/project/cleanlab/
As an example, here is how you can find label errors in a dataset with PyTorch, TensorFlow, scikit-learn, MXNet, FastText, or any other framework, in one line of code.
# Compute psx (n x m matrix of predicted probabilities)
# in your favorite framework on your own first, with any classifier.
# Be sure to compute psx in an out-of-sample way (e.g. cross-validation).
# Label errors are ordered by likelihood of being an error.
# The first index in the output list is the most likely error.
from cleanlab.pruning import get_noise_indices

ordered_label_errors = get_noise_indices(
    s=numpy_array_of_noisy_labels,
    psx=numpy_array_of_predicted_probabilities,
    sorted_index_method='normalized_margin',  # Orders label errors
)
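For intuition about what the 'normalized_margin' ordering does, here is a small numpy sketch: each example is scored by the probability of its given label minus the highest probability among the other classes, and candidate errors are sorted so the lowest (most negative) margins come first. The function name and signature below are illustrative, not cleanlab internals.

```python
import numpy as np

def rank_by_normalized_margin(error_indices, s, psx):
    """Sketch: sort candidate label errors so the most likely error is first.

    error_indices: indices of suspected label errors
    s:             (n,) array of given (noisy) labels
    psx:           (n, m) array of predicted probabilities
    """
    masked = psx.copy()
    masked[np.arange(len(s)), s] = -np.inf          # hide the given label
    # margin = p(given label) - max p(any other class); low margin = likely error
    margin = psx[np.arange(len(s)), s] - masked.max(axis=1)
    return error_indices[np.argsort(margin[error_indices])]
```

A large negative margin means the model assigns far more probability to some other class than to the given label, so that example ranks near the top of the error list.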
[Image: cleanlab logo and my cheesy attempt at a slogan.]
P.S. If you happen to work at Google, cleanlab is incorporated in the internal code base (as of July 2019).
P.P.S. I don’t work there, so you’re on your own if Google’s version strays from the open-source version.