
[R] Re-implementation + experiment reproduction of MAPLE: Model Agnostic suPervised Local Explanations (skMAPLE)

I recently stumbled upon a very interesting blog post by Gregory Plumb, which introduces a new technique that generates local explanations that capture global patterns and are based on neighboring examples. The intuition is really simple, which makes it even more awesome: for each test point you want to predict, you train a linear model in which every training sample gets its own weight. A training sample's weight reflects how "similar" it is to the test point, measured by looking at where the two points land in a fitted decision tree ensemble.
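The weighting scheme above can be sketched in a few lines. This is a minimal illustration of the idea, not the author's skMAPLE code: it assumes (as one common formulation) that a training point's weight is the fraction of trees in which it shares a leaf with the query point, and it skips MAPLE's feature-selection step. The function name `maple_sketch` is made up for this example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def maple_sketch(X_train, y_train, x_query, n_trees=100, random_state=0):
    """Sketch of MAPLE's core idea: tree-ensemble similarity weights
    feeding a weighted linear model (local explanation)."""
    forest = RandomForestRegressor(n_estimators=n_trees,
                                   random_state=random_state)
    forest.fit(X_train, y_train)

    # Leaf index of every sample in every tree, shape (n_samples, n_trees)
    train_leaves = forest.apply(X_train)
    query_leaves = forest.apply(x_query.reshape(1, -1))

    # Weight = fraction of trees where the training point falls in the
    # same leaf as the query point (assumed similarity measure)
    weights = (train_leaves == query_leaves).mean(axis=1)

    # Weighted linear fit: this model both predicts for x_query and
    # serves as its local explanation (inspect .coef_)
    local_model = LinearRegression()
    local_model.fit(X_train, y_train, sample_weight=weights)
    prediction = local_model.predict(x_query.reshape(1, -1))[0]
    return prediction, local_model
```

The returned `local_model.coef_` is the per-feature explanation for that single query point, which is what makes the approach interpretable despite being backed by a forest.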

Since the technique is that simple (but effective!), I decided to re-implement it with an sklearn-like interface and reproduce the experiment comparing it to RF. While my initial results differ slightly (due to fewer runs), we still see that MAPLE achieves better predictive performance than the Random Forest it is built on, while remaining interpretable: each prediction comes with a linear model that serves as its local explanation.

The code itself can be found on GitHub. Hope it is of use to some of you!

Original paper: here

Original code: here

submitted by /u/givdwiel