Hello All! First, thanks in advance 🙂 I’m stumped on this one.
I have a deep convolutional autoencoder, and I'm not sure what the final layer of the encoder should be: a 1×1 convolution (the spatial size is already down to 1×1), batch normalization, or an activation such as ReLU. If I use ReLU on the thinnest layer, I feel like I'm limiting that layer's capacity more than I need to, since I'm throwing away all potential negative values. Using batch normalization and/or a convolution seems fine, but then what do I lead with in the decoder? If I start the decoder with a convolution → batch norm → ReLU block, I end up doing two linear transformations in a row, which doesn't seem right.
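To make the question concrete, here's a minimal PyTorch sketch of what I mean; all channel counts and image sizes are made up, and the bottleneck is left without a ReLU:

```python
import torch
import torch.nn as nn

# Toy setup: 1 x 4 x 4 input, downsampled to 1 x 1 before the bottleneck.
encoder = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),   # 4x4 -> 2x2
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 2x2 -> 1x1
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 16, kernel_size=1),   # 1x1 conv on the 1x1 feature map
    nn.BatchNorm2d(16),
    # no ReLU here, so the latent code can keep negative values
)

decoder = nn.Sequential(
    # The dilemma: if the decoder leads with another conv + batch norm,
    # with no activation between it and the bottleneck conv above, those
    # are two linear transformations back to back.
    nn.ConvTranspose2d(16, 64, kernel_size=4, stride=2, padding=1),  # 1x1 -> 2x2
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),   # 2x2 -> 4x4
)

x = torch.randn(8, 1, 4, 4)     # dummy batch
recon = decoder(encoder(x))     # same shape as x
```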
Thank you for your insights!!
submitted by /u/DataSciencePenguin