[D] Parallelizing LIME
I'm using the LIME-for-images implementation from https://github.com/marcotcr/lime. As far as I can tell, LIME explains one sample at a time, and calling it in a simple for-loop over a PyTorch dataloader seems inefficient: I get < 20% GPU utilization on the ImageNet val set with Inception v3.

Does anyone have experience speeding up LIME to get better GPU utilization?
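For context, here's roughly what my setup looks like (a minimal sketch, assuming `image` is an already-loaded 299x299x3 uint8 numpy array). One lever I've noticed is the `batch_size` argument to `explain_instance`, which controls how many perturbed images LIME sends to `classifier_fn` per call, so raising it should keep the GPU busier within a single explanation:

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from lime import lime_image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.inception_v3(pretrained=True).to(device).eval()

# Inception v3 expects 299x299 ImageNet-normalized inputs.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

def batch_predict(images):
    # classifier_fn for LIME: receives a (N, H, W, 3) array of
    # perturbed images and must return (N, num_classes) probabilities.
    # Assumes uint8 pixels in [0, 255].
    batch = torch.stack([
        normalize(torch.from_numpy(img).permute(2, 0, 1).float() / 255.0)
        for img in images
    ]).to(device)
    with torch.no_grad():
        probs = F.softmax(model(batch), dim=1)
    return probs.cpu().numpy()

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,           # 299x299x3 uint8 numpy array, assumed loaded
    batch_predict,
    top_labels=1,
    num_samples=1000,
    batch_size=256,  # default is 10; larger batches keep the GPU busier
)
```

Even with a larger `batch_size`, though, the outer loop over samples is still serial. I've been wondering whether running several `explain_instance` calls from a thread pool would overlap the gaps (the CUDA forward pass releases the GIL), but I haven't benchmarked that yet.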