[D] Best Encoding Layer for Deep Autoencoder
Hello All! First, thanks in advance 🙂 I’m stumped on this one.
I have a deep convolutional autoencoder, and I'm not sure what the final layer of the encoder should be: a 1×1 convolution (I've already brought the feature map down to 1×1 spatially), batch normalization, an activation function such as ReLU, or some combination of these. If I put a ReLU on the thinnest layer, I feel like I'm limiting that layer's capacity more than I need to, since I'm throwing away all negative values. Ending the encoder with a convolution and/or batch normalization seems fine, but then what do I lead with in the decoder? If the decoder opens with the usual convolution → batch norm → ReLU stack, then the bottleneck convolution and the decoder's first convolution are two linear transformations in a row, which doesn't seem right.
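To make the layout concrete, here's a minimal PyTorch sketch of the two variants I'm weighing (channel counts are just placeholders, and I'm assuming the spatial size entering the bottleneck is already 1×1):

```python
import torch
import torch.nn as nn

# Option A: linear bottleneck (1x1 conv + batch norm, no activation).
# Negative code values survive, but the bottleneck conv and the
# decoder's first conv end up as two linear maps back to back.
encoder_tail_a = nn.Sequential(
    nn.Conv2d(64, 16, kernel_size=1),  # 1x1 conv into the latent channels
    nn.BatchNorm2d(16),                # no ReLU, so negatives are kept
)
decoder_head_a = nn.Sequential(
    nn.Conv2d(16, 64, kernel_size=1),
    nn.BatchNorm2d(64),
    nn.ReLU(),
)

# Option B: end the encoder with a nonlinearity that preserves negative
# values (e.g. tanh or LeakyReLU instead of ReLU), so the code isn't
# clipped at zero and the decoder's first conv follows a nonlinearity.
encoder_tail_b = nn.Sequential(
    nn.Conv2d(64, 16, kernel_size=1),
    nn.BatchNorm2d(16),
    nn.Tanh(),
)

x = torch.randn(8, 64, 1, 1)   # batch of 1x1 feature maps from the encoder
z = encoder_tail_a(x)          # latent code, shape (8, 16, 1, 1)
recon_in = decoder_head_a(z)   # first step of the decoder
```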
Thank you for your insights!!
submitted by /u/DataSciencePenguin