In my convolutional neural network, some activation maps behave and look almost exactly alike (i.e., they produce high activations in response to the same visual patterns), which is undesirable for my current project. What are some techniques one can use to encourage diversity in the features a neural network extracts?
submitted by /u/seann999
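One technique sometimes used for this is an explicit decorrelation penalty on the activation maps: add a regularization term to the loss that penalizes pairwise correlation between channels, so that two filters responding to the same pattern are pushed apart. Below is a minimal sketch assuming PyTorch; the function name `decorrelation_penalty` and the weighting shown in the usage line are illustrative, not from the original post, and this is only one of several possible approaches (others include dropout, orthogonality constraints on the weights, or pruning redundant filters).

```python
import torch

def decorrelation_penalty(feature_maps):
    """Penalize pairwise correlation between channel activation maps.

    feature_maps: tensor of shape (batch, channels, height, width),
    e.g. the output of a conv layer. Returns a scalar penalty that is
    large when different channels respond to the same visual patterns.
    """
    b, c, h, w = feature_maps.shape
    # Flatten each channel's activation map into a vector per sample.
    flat = feature_maps.view(b, c, h * w)
    # Zero-mean and L2-normalize each channel so the Gram matrix
    # below approximates a correlation matrix.
    flat = flat - flat.mean(dim=2, keepdim=True)
    flat = flat / (flat.norm(dim=2, keepdim=True) + 1e-8)
    # (batch, channels, channels) correlation between channel pairs.
    corr = torch.bmm(flat, flat.transpose(1, 2))
    # Penalize off-diagonal entries: near-identical maps give
    # correlations close to 1, so they contribute the most.
    off_diag = corr - torch.eye(c, device=corr.device)
    return (off_diag ** 2).sum() / (b * c * max(c - 1, 1))
```

In training, this term would be added to the task loss with a small weight, e.g. `loss = task_loss + 0.01 * decorrelation_penalty(feats)`, tuned so the penalty discourages redundant filters without dominating the main objective.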