
[D] Using Global and Local Surrogate Models

I am working on a POC in which I want to perform some surrogate modeling, but I have some conceptual confusion about it.

A little background: in this POC, I have a deep learning network that is performing quite well. In terms of interpretability, I have implemented DeepExplainer to display the SHAP values for local predictions. At a local level, I do feel that this does a good job of explaining how exactly the model arrived at the conclusion it did, and how the individual feature values contributed to it.

Wanting to take this further, the idea of surrogate models was proposed.

From my understanding, global surrogate models are computed as follows: given a trained black-box model, we train an interpretable model using the inputs to the black-box model as the data and the outputs of the black-box model as the target variable. We then interpret this surrogate model.
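
The steps above can be sketched in a few lines. This is a minimal illustration, not the poster's actual setup: an `MLPRegressor` on synthetic data stands in for the deep network, and all names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

# 1. Train (or load) the black-box model -- here a small MLP as a stand-in.
black_box = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0).fit(X, y)

# 2. Use the black box's *predictions* (not the true labels) as the
#    surrogate's target variable.
y_bb = black_box.predict(X)
surrogate = LinearRegression().fit(X, y_bb)

# 3. Check how faithfully the surrogate mimics the black box
#    (R^2 measured against the black box's outputs, not ground truth).
fidelity = r2_score(y_bb, surrogate.predict(X))
print(f"surrogate fidelity R^2: {fidelity:.3f}")
```

The fidelity score matters: the surrogate's coefficients are only as trustworthy as its agreement with the black box.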

With local surrogate models, my understanding is that they work at a local level (for individual predictions). This is computed by taking a local instance of data, perturbing it, labeling the perturbed samples with the black-box model's predictions, training an interpretable model on that perturbed data, and then interpreting that model.
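
A minimal LIME-style sketch of that perturb-and-fit loop, again with illustrative names (a toy function plays the black box, and the RBF proximity kernel is one common choice, not the only one):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

def black_box(X):
    # Stand-in for the deep network's predict function.
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

x0 = np.array([1.0, 2.0, -0.5])  # the single instance to explain

# 1. Perturb the instance: Gaussian noise around x0.
Z = x0 + rng.normal(scale=0.5, size=(1000, 3))

# 2. Label the perturbed samples with the black box.
y_z = black_box(Z)

# 3. Weight samples by proximity to x0 (RBF kernel).
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.5)

# 4. Fit a weighted interpretable model; its coefficients are the
#    local explanation around x0.
local = Ridge(alpha=1.0).fit(Z, y_z, sample_weight=weights)
print("local coefficients:", local.coef_)
```

Note how the quadratic term shows up only through its local slope near x0; the linear surrogate is faithful in the neighborhood, not globally.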

The question I am having a hard time with is this: say I were to create a global surrogate model (say, a linear regression) and then used this surrogate model to make a specific prediction. Why couldn't I explain locally how I arrived at this prediction (say, by looking at the equation of the linear regression)?
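
Mechanically, nothing stops this: a linear surrogate's prediction does decompose into per-feature contributions. A sketch with made-up data (the names and values are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y_bb = 1.5 * X[:, 0] - 2.0 * X[:, 2] + 0.3  # pretend black-box outputs

surrogate = LinearRegression().fit(X, y_bb)
x0 = X[0]  # one instance to explain

# Per-feature contributions for this single prediction:
# prediction = intercept + sum_i coef_i * x_i
contribs = surrogate.coef_ * x0
pred = surrogate.intercept_ + contribs.sum()
print("contributions:", contribs)
print("prediction:", pred)
```

Whether those contributions can be trusted for this particular instance depends on how well the global surrogate fits the black box in x0's neighborhood, which seems to be the crux of the question.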

Any help would be much appreciated!

submitted by /u/Fender6969