Learn About Our Meetup

5000+ Members



Join our meetup, learn, connect, share, and get to know your Toronto AI community. 



Browse through the latest deep learning, AI, and machine learning postings from Indeed for the GTA.



If you are looking to sponsor space, be a speaker, or volunteer, feel free to give us a shout.

[D] Explainability of PCA + Random Forest on images

Good morning everyone.

I am working on a project where I have a dataset of images of an industrial product, and I am currently classifying each image into one of two classes, let's say [good, bad].

In order to classify the images correctly, I am currently applying the following techniques, in this order: HOG + PCA + Random Forest classifier.
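For concreteness, the pipeline above can be sketched as follows. This is a minimal, self-contained sketch with random stand-in data: the image size, HOG parameters, and number of PCA components are assumptions for illustration, not the settings from the actual project.

```python
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))     # stand-in for the industrial-product images
y = rng.integers(0, 2, size=40)       # 0 = good, 1 = bad (placeholder labels)

# HOG per image; 8x8-pixel cells and 2x2-cell blocks are illustrative choices.
X_hog = np.array([
    hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for img in images
])

# Reduce the HOG feature vectors, then train the forest on the reduced features.
pca = PCA(n_components=20).fit(X_hog)
X_pca = pca.transform(X_hog)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_pca, y)

# The forest reports one importance per *PCA component*, not per HOG feature --
# which is exactly the mismatch discussed below.
importances = clf.feature_importances_
```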

My performance is really good, but I need one specific piece of information, which is roughly the following: “this area of the image contributes X% to the final classification”.

First of all, I am executing HOG, so the “area of the image” could be a HOG cell. Then I need to gather some feature importances. To get this information, a Random Forest comes in handy because, once trained, it reports feature importances in percentage form. But here the problem arises: my features come from a PCA. Now, it is possible to apply an inverse transformation to a PCA output array, which gives back the original feature array from before PCA was applied (in my case, the output of HOG).

In my scenario, does it make sense to apply the inverse transformation to the array of feature importances? And if not, which approach should I go for?
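To make the question concrete, here is a hedged sketch of both options on stand-in data (variable names and sizes are illustrative, not from the post). One caveat worth noting: `pca.inverse_transform` re-adds `pca.mean_` and treats its input as a data point, so applying it to an importance vector mixes importances with the data mean. A common alternative is to spread each component's importance over the original HOG features via the absolute loadings in `pca.components_`.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((40, 100))             # stand-in for the HOG feature matrix
y = rng.integers(0, 2, size=40)

pca = PCA(n_components=10).fit(X)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(pca.transform(X), y)
imp = clf.feature_importances_        # shape (10,), one value per PCA component

# Option A: the literal inverse transform of the importance vector.
# Note it re-adds pca.mean_, so the result is not a pure importance signal
# and can even go negative.
back = pca.inverse_transform(imp.reshape(1, -1))[0]

# Option B: weight each component's loadings by that component's importance,
# sum over components, and renormalize to percentages per HOG feature.
hog_importance = np.abs(pca.components_).T @ imp
hog_importance = hog_importance / hog_importance.sum()
```

With option B, summing `hog_importance` over the features belonging to one HOG cell would give that cell's share of the classification, which is the “this area weighs X%” figure described above.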

Thanks in advance!

Edit: added link

submitted by /u/Kivid93

Next Meetup




Plug yourself into AI and don't miss a beat


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.