I am having trouble understanding a section from the MobileNetV2 paper.
In particular, in Section 3.2 (Linear Bottlenecks), the authors state that “it is easy to see that in general if a result of a layer transformation ReLU(Bx) has a non-zero volume S, the points mapped to interior S are obtained via a linear transformation B of the input, thus indicating that the part of the input space corresponding to the full dimensional output is limited to a linear transformation.” Is there a simpler way of explaining this?
To check my understanding: is this volume S the volume formed by the output tensor of ReLU(Bx), where each “pixel value” is a multi-dimensional vector, i.e. a point in the subspace, and all of these points together form the volume? If so, it is not clear to me why the interior points are relevant to the argument.
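For concreteness, here is a tiny numpy sketch of how I currently read the claim (the matrix B and all the numbers are my own arbitrary choices, not anything from the paper): when every coordinate of Bx is strictly positive, ReLU clips nothing, so on that region the layer is exactly the linear map x -> Bx.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Hypothetical 3x2 weight matrix and an input chosen so that every
# coordinate of B @ x is strictly positive (numbers are arbitrary).
B = np.array([[1.0, 2.0],
              [0.5, -1.0],
              [2.0, 0.3]])
x = np.array([1.0, 0.2])   # B @ x = [1.4, 0.3, 2.06], all positive

y = B @ x
assert np.all(y > 0)       # the output lies in the interior of the
                           # positive orthant: no unit is clipped

# At such a point ReLU is the identity, so the layer output equals Bx:
assert np.allclose(relu(y), y)

# Small perturbations that stay in the interior are also mapped linearly:
# ReLU(B(x + dx)) - ReLU(Bx) == B @ dx, with no nonlinearity involved.
dx = np.array([1e-4, -2e-4])
assert np.allclose(relu(B @ (x + dx)) - relu(y), B @ dx)
```

My tentative guess is that the interior matters because boundary points of S (those with some coordinate exactly zero) can arise from inputs that ReLU has already collapsed, so linearity is only guaranteed strictly inside S. Is that the right reading?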
submitted by /u/DinoHustler